https://en.wikipedia.org/wiki/Cognitive%20science
Cognitive science
Cognitive science is the interdisciplinary, scientific study of the mind and its processes with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures." The goal of cognitive science is to understand and formulate the principles of intelligence with the hope that this will lead to a better comprehension of the mind and of learning. The cognitive sciences began as an intellectual movement in the 1950s often referred to as the cognitive revolution. History The cognitive sciences began as an intellectual movement in the 1950s, called the cognitive revolution. Cognitive science has a prehistory traceable back to ancient Greek philosophical texts (see Plato's Meno and Aristotle's De Anima); Modern philosophers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz and John Locke, rejected scholasticism while mostly having never read Aristotle, and they were working with an entirely different set of tools and core concepts than those of the cognitive scientist. The modern culture of cognitive science can be traced back to the early cyberneticists in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks. Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Kurt Gödel, Alonzo Church, Alan Turing, and John von Neumann were instrumental in these developments. The modern computer, or Von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind, and as a tool for investigation. The first instance of cognitive science experiments being done at an academic institution took place at MIT Sloan School of Management, established by J.C.R. Licklider working within the psychology department and conducting experiments using computer memory as models for human cognition. In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated the field of psychology within the United States. Most psychologists focused on functional relations between stimulus and response, without positing internal representations. 
Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order. The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of artificial intelligence research. In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded. The founding meeting of the Cognitive Science Society was held at the University of California, San Diego in 1979, which resulted in cognitive science becoming an internationally visible enterprise. In 1972, Hampshire College started the first undergraduate education program in Cognitive Science, led by Neil Stillings. In 1982, with assistance from Professor Stillings, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science. In 1986, the first Cognitive Science Department in the world was founded at the University of California, San Diego. In the 1970s and early 1980s, as access to computers increased, artificial intelligence research expanded. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI". Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed to be unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 80s and 90s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation. While both connectionism and symbolic approaches have proven useful for testing various hypotheses and exploring approaches to understanding aspects of cognition and lower level brain functions, neither are biologically realistic and therefore, both suffer from a lack of neuroscientific plausibility. Connectionism has proven useful for exploring computationally how cognition emerges in development and occurs in the human brain, and has provided alternatives to strictly domain-specific / domain general approaches. For example, scientists such as Jeff Elman, Liz Bates, and Annette Karmiloff-Smith have posited that networks in the brain emerge from the dynamic interaction between them and environmental input. Principles Levels of analysis A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation, or naturalistic observation. 
A person could be presented with a phone number and be asked to recall it after some delay; then the accuracy of the response could be measured. Another approach to measuring cognitive ability would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real time were available and it were known when each neuron fired, it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is imperative. Francisco Varela, in The Embodied Mind: Cognitive Science and Human Experience, argues that "the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience". On the classic cognitivist view, this can be provided by a functional-level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior. Marr gave a famous description of three levels of analysis: the computational theory, specifying the goals of the computation; representation and algorithms, giving a representation of the inputs and outputs and the algorithms which transform one into the other; and the hardware implementation, or how algorithm and representation may be physically realized. Interdisciplinary nature Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology and biology. Cognitive scientists work collectively in the hope of understanding the mind and its interactions with the surrounding world, much like other sciences do. The field regards itself as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human cognition. Similarly to the field of psychology, there is some doubt whether there is a unified cognitive science, which has led some researchers to prefer 'cognitive sciences' in the plural. Many, but not all, who consider themselves cognitive scientists hold a functionalist view of the mind—the view that mental states and processes should be explained by their function, that is, what they do. According to the multiple realizability account of functionalism, even non-human systems such as robots and computers can be ascribed cognition. Cognitive science: the term The term "cognitive" in "cognitive science" is used for "any kind of mental operation or structure that can be studied in precise terms" (Lakoff and Johnson, 1999). This conceptualization is very broad, and should not be confused with how "cognitive" is used in some traditions of analytic philosophy, where "cognitive" has to do only with formal rules and truth-conditional semantics. The earliest entries for the word "cognitive" in the OED take it to mean roughly "pertaining to the action or process of knowing". The first entry, from 1586, shows the word was at one time used in the context of discussions of Platonic theories of knowledge.
Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato. Scope Cognitive science is a large field, and covers a wide array of topics on cognition. However, it should be recognized that cognitive science has not always been equally concerned with every topic that might bear relevance to the nature and operation of minds. Classical cognitivists have largely de-emphasized or avoided social and cultural factors, embodiment, emotion, consciousness, animal cognition, and comparative and evolutionary psychologies. However, with the decline of behaviorism, internal states such as affects and emotions, as well as awareness and covert attention, became approachable again. For example, situated and embodied cognition theories take into account the current state of the environment as well as the role of the body in cognition. With the newfound emphasis on information processing, observable behavior was no longer the hallmark of psychological theory, but rather the modeling or recording of mental states. Below are some of the main topics that cognitive science is concerned with. This is not an exhaustive list. See List of cognitive science topics for a list of various aspects of the field. Artificial intelligence Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured. There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemes, plans, and rules. The former view uses connectionism to study the mind, whereas the latter emphasizes symbolic artificial intelligence. One way to view the issue is whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make up the human brain. Attention Attention is the selection of important information. The human mind is bombarded with millions of stimuli and it must have a way of deciding which information to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are presented with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it. Bodily processes related to cognition Embodied cognition approaches to cognitive science emphasize the role of body and environment in cognition. This includes both neural and extra-neural bodily processes, and factors that range from affective and emotional processes, to posture, motor control, proprioception, and kinaesthesis, to autonomic processes that involve heartbeat and respiration, to the role of the enteric gut microbiome. It also includes accounts of how the body engages with or is coupled to social and physical environments.
4E (embodied, embedded, extended and enactive) cognition includes a broad range of views about brain-body-environment interaction, from causal embeddedness to stronger claims about how the mind extends to include tools and instruments, as well as the role of social interactions, action-oriented processes, and affordances. 4E theories range from those closer to classic cognitivism (so-called "weak" embodied cognition) to stronger extended and enactive versions that are sometimes referred to as radical embodied cognitive science. Knowledge and processing of language The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in the theoretical linguistic field is discovering the nature that language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned?, (2) Why is it more difficult for adults to acquire a second-language than it is for infants to acquire their first-language?, and (3) How are humans able to understand novel sentences? The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonetics, phonology, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction. The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration. Learning and development Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place. A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature and nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment. 
Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker) have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience. Memory Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes). Memory is also often grouped into declarative and procedural forms. Declarative memory—grouped into subsets of semantic and episodic forms of memory—refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g. "Are apples food?", or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory. Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and the interrelationship between cognition and memory. One example of this could be: what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) from recall (retrieving a memory, as in "fill-in-the-blank")? Perception and action Perception is the ability to take in information via the senses and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects? (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is to look at how people process optical illusions. The Necker cube is a classic example of a bistable percept: the cube can be interpreted as being oriented in two different directions. The study of haptic (tactile), olfactory, and gustatory stimuli also falls into the domain of perception. Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action. Consciousness Consciousness is the awareness of experiences within oneself. It gives the mind the ability to experience or feel a sense of self. Research methods Many different methodologies are used in cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science and systems theory.
Behavioral experiments In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski & Strohmetz (2009) reviewed a collection of innovative uses of behavioral measurement in psychology, including behavioral traces, behavioral observations, and behavioral choice. Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices occur when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant). Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. Psychophysical responses. Psychophysical experiments are an old psychological technique that has been adopted by cognitive psychology. They typically involve making judgments of some physical property, e.g. the loudness of a sound. Correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. Some examples include sameness judgments for colors, tones, textures, etc., and threshold differences for colors, tones, textures, etc. Eye tracking. This methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. The fixation point of the eyes is linked to an individual's focus of attention. Thus, by monitoring eye movements, we can study what information is being processed at a given time. Eye tracking allows us to study cognitive processes on extremely short time scales. Eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. Brain imaging Brain imaging involves analyzing activity within the brain while performing various tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience. Single-photon emission computed tomography and positron emission tomography. SPECT and PET use radioactive isotopes, which are injected into the subject's bloodstream and taken up by the brain. By observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. PET has similar spatial resolution to fMRI, but it has extremely poor temporal resolution. Electroencephalography. EEG measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. This technique has an extremely high temporal resolution, but a relatively poor spatial resolution.
Functional magnetic resonance imaging. fMRI measures the relative amount of oxygenated blood flowing to different parts of the brain. More oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. This allows us to localize particular functions within different brain regions. fMRI has moderate spatial and temporal resolution. Optical imaging. This technique uses infrared transmitters and receivers to measure the amount of light reflected by blood near different areas of the brain. Since oxygenated and deoxygenated blood reflect light in different amounts, we can study which areas are more active (i.e., those that have more oxygenated blood). Optical imaging has moderate temporal resolution, but poor spatial resolution. It also has the advantage that it is extremely safe and can be used to study infants' brains. Magnetoencephalography. MEG measures magnetic fields resulting from cortical activity. It is similar to EEG, except that it has improved spatial resolution, since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in EEG is. MEG uses SQUID sensors to detect tiny magnetic fields. Computational modeling Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us understand the functional organization of a particular cognitive phenomenon. Approaches to cognitive modeling can be categorized as: (1) symbolic, focusing on the abstract mental functions of an intelligent mind by means of symbols; (2) subsymbolic, focusing on the neural and associative properties of the human brain; and (3) approaches that cross the symbolic–subsymbolic border, including hybrid models. Symbolic modeling evolved from the computer science paradigms of knowledge-based systems, as well as from a philosophical perspective (e.g. "Good Old-Fashioned Artificial Intelligence" (GOFAI)). Such models were developed by the first cognitive researchers and were later used in information engineering for expert systems. Since the early 1990s, symbolic modeling has been generalized in systemics for the investigation of functional human-like intelligence models, such as personoids, and, in parallel, has been developed as the SOAR environment. Recently, especially in the context of cognitive decision-making, symbolic cognitive modeling has been extended to the socio-cognitive approach, including social and organizational cognition, interrelated with a sub-symbolic non-conscious layer. Subsymbolic modeling includes connectionist/neural network models. Connectionism relies on the idea that the mind/brain is composed of simple nodes and that its problem-solving capacity derives from the connections between them. Neural nets are textbook implementations of this approach. Some critics of this approach feel that, while these models approach biological reality as a representation of how the system works, they lack explanatory power because, even in systems endowed with simple connection rules, the emerging high complexity makes them less interpretable at the connection level than they apparently are at the macroscopic level.
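To make the connectionist picture above concrete, here is a minimal illustrative sketch in Python: a small layered network of simple nodes whose only "knowledge" is carried by learned connection weights. It is a toy under assumed settings (the XOR task, a handful of hidden units, arbitrary learning rate), not any particular published cognitive model.

```python
# Toy connectionist sketch (illustrative only): a tiny layered network of
# simple nodes trained on XOR. All "knowledge" ends up in the connection
# weights; no rule is ever stated explicitly. Hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Input patterns and targets for XOR, the textbook task that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8))   # input -> hidden connection weights
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden -> output connection weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass: activation flows along weighted connections.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every connection to reduce squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Outputs should now approximate the XOR targets [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```

The sketch also illustrates the interpretability worry raised above: the trained network solves the task, but nothing in the individual weights reads as an explanation of why.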
Other approaches gaining in popularity include (1) dynamical systems theory, (2) mapping symbolic models onto connectionist models (neural-symbolic integration or hybrid intelligent systems), and (3) Bayesian models, which are often drawn from machine learning. All of the above approaches tend either to be generalized to the form of integrated computational models of a synthetic/abstract intelligence (i.e. a cognitive architecture), in order to be applied to the explanation and improvement of individual and social/organizational decision-making and reasoning, or to focus on single simulative programs (or microtheories/"middle-range" theories) modelling specific cognitive faculties (e.g. vision, language, categorization, etc.). Neurobiological methods Research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system; they include single-unit recording, direct brain stimulation, animal models, and postmortem studies. Key findings Cognitive science has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics. It has also given rise to a new theory of the philosophy of mathematics (related to denotational mathematics), and many theories of artificial intelligence, persuasion and coercion. It has made its presence known in the philosophy of language and epistemology as well as constituting a substantial wing of modern linguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits) ranging from speech production to auditory processing and visual perception. Cognitive science has made progress in understanding how damage to particular areas of the brain affects cognition, and it has helped to uncover the root causes and results of specific dysfunctions, such as dyslexia, anopia, and hemispatial neglect. Notable researchers Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, some familiar names include Daniel Dennett, who writes from a computational systems perspective, John Searle, known for his controversial Chinese room argument, and Jerry Fodor, who advocates functionalism. Others include David Chalmers, who advocates dualism and is also known for articulating the hard problem of consciousness, and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought. In the realm of linguistics, Noam Chomsky and George Lakoff have been influential (both have also become notable as political commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, and Allen Newell are prominent. Popular names in the discipline of psychology include George A. Miller, James McClelland, Philip Johnson-Laird, Lawrence Barsalou, Vittorio Guidano, Howard Gardner and Steven Pinker. Anthropologists Dan Sperber, Edwin Hutchins, Bradd Shore, James Wertsch and Scott Atran have been involved in collaborative projects with cognitive and social psychologists, political scientists and evolutionary biologists in attempts to develop general theories of culture formation, religion, and political association. Computational theories (with models and simulations) have also been developed by David Rumelhart, James McClelland and Philip Johnson-Laird.
Epistemics Epistemics is a term coined in 1969 by the University of Edinburgh with the foundation of its School of Epistemics. Epistemics is to be distinguished from epistemology in that epistemology is the philosophical theory of knowledge, whereas epistemics signifies the scientific study of knowledge. Christopher Longuet-Higgins has defined it as "the construction of formal models of the processes (perceptual, intellectual, and linguistic) by which knowledge and understanding are achieved and communicated." In his 1978 essay "Epistemics: The Regulative Theory of Cognition", Alvin I. Goldman claims to have coined the term "epistemics" to describe a reorientation of epistemology. Goldman maintains that his epistemics is continuous with traditional epistemology and the new term is only to avoid opposition. Epistemics, in Goldman's version, differs only slightly from traditional epistemology in its alliance with the psychology of cognition; epistemics stresses the detailed study of mental processes and information-processing mechanisms that lead to knowledge or beliefs. In the mid-1980s, the School of Epistemics was renamed as The Centre for Cognitive Science (CCS). In 1998, CCS was incorporated into the University of Edinburgh's School of Informatics. Binding problem in cognitive science One of the core aims of cognitive science is to achieve an integrated theory of cognition. This requires integrative mechanisms explaining how the information processing that occurs simultaneously in spatially segregated (sub-)cortical areas in the brain is coordinated and bound together to give rise to coherent perceptual and symbolic representations. One approach is to solve this "Binding problem" (that is, the problem of dynamically representing conjunctions of informational elements, from the most basic perceptual representations ("feature binding") to the most complex cognitive representations, like symbol structures ("variable binding")), by means of integrative synchronization mechanisms. In other words, one of the coordinating mechanisms appears to be the temporal (phase) synchronization of neural activity based on dynamical self-organizing processes in neural networks, described by the Binding-by-synchrony (BBS) Hypothesis from neurophysiology. Connectionist cognitive neuroarchitectures have been developed that use integrative synchronization mechanisms to solve this binding problem in perceptual cognition and in language cognition. In perceptual cognition the problem is to explain how elementary object properties and object relations, like the object color or the object form, can be dynamically bound together or can be integrated to a representation of this perceptual object by means of a synchronization mechanism ("feature binding", "feature linking"). In language cognition the problem is to explain how semantic concepts and syntactic roles can be dynamically bound together or can be integrated to complex cognitive representations like systematic and compositional symbol structures and propositions by means of a synchronization mechanism ("variable binding") (see also the "Symbolism vs. connectionism debate" in connectionism). 
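As a purely illustrative companion to the synchronization idea, the following toy Python sketch uses a Kuramoto-style model of coupled phase oscillators: units with slightly different natural frequencies become phase-locked once coupling is strong enough, which is the kind of temporal synchronization the binding-by-synchrony hypothesis appeals to. It is a generic dynamical-systems toy with arbitrary parameters, not one of the connectionist neuroarchitectures referred to above.

```python
# Toy illustration of phase synchronization (Kuramoto-style), not a neural model.
# A few "assemblies" with different natural frequencies become phase-locked
# under mutual coupling; the coherence r rises toward 1.
import numpy as np

rng = np.random.default_rng(1)

n = 8                                    # number of oscillators ("assemblies")
omega = rng.normal(10.0, 0.5, size=n)    # natural frequencies (arbitrary units)
theta = rng.uniform(0.0, 2 * np.pi, n)   # initial phases, incoherent
K = 4.0                                  # coupling strength (well above threshold)
dt = 0.001

def coherence(phases):
    """Kuramoto order parameter r in [0, 1]; 1 means fully phase-locked."""
    return abs(np.mean(np.exp(1j * phases)))

for step in range(20001):
    if step % 5000 == 0:
        print(f"t = {step * dt:5.2f}   r = {coherence(theta):.2f}")
    # Each oscillator is pulled toward the phases of all the others.
    pull = (K / n) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta = theta + dt * (omega + pull)
```

Watching the order parameter climb toward 1 is the toy analogue of separate feature-coding populations falling into a common phase; real cortical synchronization is of course far richer, but the sketch shows the basic mechanism the hypothesis invokes.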
See also
Affective science
Cognitive anthropology
Cognitive biology
Cognitive computing
Cognitive ethology
Cognitive linguistics
Cognitive neuropsychology
Cognitive neuroscience
Cognitive psychology
Cognitive science of religion
Computational neuroscience
Computational-representational understanding of mind
Concept mining
Decision field theory
Decision theory
Dynamicism
Educational neuroscience
Educational psychology
Embodied cognition
Embodied cognitive science
Enactivism
Epistemology
Folk psychology
Heterophenomenology
Human Cognome Project
Human–computer interaction
Indiana Archives of Cognitive Science
Informatics (academic field)
List of cognitive scientists
List of psychology awards
Malleable intelligence
Neural Darwinism
Personal information management (PIM)
Qualia
Quantum cognition
Simulated consciousness
Situated cognition
Society of Mind theory
Spatial cognition
Speech–language pathology
Outlines
Outline of human intelligence – topic tree presenting the traits, capacities, models, and research fields of human intelligence, and more.
Outline of thought – topic tree that identifies many types of thoughts, types of thinking, aspects of thought, related fields, and more.
References
External links
"Cognitive Science" on the Stanford Encyclopedia of Philosophy
Cognitive Science Society
Cognitive Science Movie Index: A broad list of movies showcasing themes in the Cognitive Sciences
List of leading thinkers in cognitive science
https://en.wikipedia.org/wiki/Copula%20%28linguistics%29
Copula (linguistics)
In linguistics, a copula (plural: copulas or copulae) is a word or phrase that links the subject of a sentence to a subject complement, such as the word is in the sentence "The sky is blue" or the phrase was not being in the sentence "It was not being co-operative." The word copula derives from the Latin noun for a "link" or "tie" that connects two different things. A copula is often a verb or a verb-like word, though this is not universally the case. A verb that is a copula is sometimes called a copulative or copular verb. In English primary education grammar courses, a copula is often called a linking verb. In other languages, copulas show more resemblances to pronouns, as in Classical Chinese and Guarani, or may take the form of suffixes attached to a noun, as in Korean, Beja, and Inuit languages. Most languages have one main copula (in English, the verb "to be"), although some (like Spanish, Portuguese and Thai) have more than one, while others have none. While the term copula is generally used to refer to such principal verbs, it may also be used for a wider group of verbs with similar potential functions (like become, get, feel and seem in English); alternatively, these might be distinguished as "semi-copulas" or "pseudo-copulas". Grammatical function The principal use of a copula is to link the subject of a clause to a subject complement. A copular verb is often considered to be part of the predicate, the remainder being called a predicative expression. A simple clause containing a copula is illustrated below: The book is on the table. In that sentence, the noun phrase the book is the subject, the verb is serves as the copula, and the prepositional phrase on the table is the predicative expression. The whole expression is on the table may (in some theories of grammar) be called a predicate or a verb phrase. The predicative expression accompanying the copula, also known as the complement of the copula, may take any of several possible forms: it may be a noun or noun phrase, an adjective or adjective phrase, a prepositional phrase (as above) or an adverb or another adverbial phrase expressing time or location. Examples are given below (with the copula in bold and the predicative expression in italics): The three components (subject, copula and predicative expression) do not necessarily appear in that order: their positioning depends on the rules for word order applicable to the language in question. In English (an SVO language), the ordering given above is the normal one, but certain variation is possible: In many questions and other clauses with subject–auxiliary inversion, the copula moves in front of the subject: Are you happy? In inverse copular constructions (see below) the predicative expression precedes the copula, but the subject follows it: In the room were three men. It is also possible, in certain circumstances, for one (or even two) of the three components to be absent: In null-subject (pro-drop) languages, the subject may be omitted, as it may be from other types of sentence. In Italian, for example, 'I am tired' can be expressed with the subject pronoun omitted, literally 'am tired'. In non-finite clauses in languages like English, the subject is often absent, as in the participial phrase being tired or the infinitive phrase to be tired. The same applies to most imperative sentences like Be good! For cases in which no copula appears, see below. Any of the three components may be omitted as a result of various general types of ellipsis.
In particular, in English, the predicative expression may be elided in a construction similar to verb phrase ellipsis, as in short sentences like I am; Are they? (where the predicative expression is understood from the previous context). Inverse copular constructions, in which the positions of the predicative expression and the subject are reversed, are found in various languages. They have been the subject of much theoretical analysis, particularly in regard to the difficulty of maintaining, in the case of such sentences, the usual division into a subject noun phrase and a predicate verb phrase. Another issue is verb agreement when both subject and predicative expression are noun phrases (and differ in number or person): in English, the copula typically agrees with the syntactical subject even if it is not logically (i.e. semantically) the subject, as in the cause of the riot is (not are) these pictures of the wall. Compare Italian ; notice the use of the plural to agree with plural "these photos" rather than with singular "the cause". In instances where an English syntactical subject comprises a prepositional object that is pluralized, however, the prepositional object agrees with the predicative expression, e.g. "What kind of birds are those?" The definition and scope of the concept of a copula is not necessarily precise in any language. As noted above, though the concept of the copula in English is most strongly associated with the verb to be, there are many other verbs that can be used in a copular sense as well. The boy became a man. The girl grew more excited as the holiday preparations intensified. The dog felt tired from the activity. And more tenuously The milk turned sour. The food smells good. You seem upset. Other functions A copular verb may also have other uses supplementary to or distinct from its uses as a copula. Some co-occurrences are common. Auxiliary verb The English verb to be is also used as an auxiliary verb, especially for expressing passive voice (together with the past participle) or expressing progressive aspect (together with the present participle): Other languages' copulas have additional uses as auxiliaries. For example, French can be used to express passive voice similarly to English be; both French and German are used to express the perfect forms of certain verbs (formerly English be was also): The auxiliary functions of these verbs derived from their copular function, and could be interpreted as special cases of the copular function (with the verbal forms it precedes being considered adjectival). Another auxiliary usage in English is to denote an obligatory action or expected occurrence: "I am to serve you;" "The manager is to resign." This can be put also into past tense: "We were to leave at 9." For forms like "if I was/were to come," see English conditional sentences. (By certain criteria, the English copula be may always be considered an auxiliary verb; see Diagnostics for identifying auxiliary verbs in English.) Existential verb The English to be and its equivalents in certain other languages also have a non-copular use as an existential verb, meaning "to exist." This use is illustrated in the following sentences: I want only to be, and that is enough; I think therefore I am; To be or not to be, that is the question. In these cases, the verb itself expresses a predicate (that of existence), rather than linking to a predicative expression as it does when used as a copula. 
In ontology it is sometimes suggested that the "is" of existence is reducible to the "is" of property attribution or class membership; to be, Aristotle held, is to be something. However, Abelard in his Dialectica made a reductio ad absurdum argument against the idea that the copula can express existence. Similar examples can be found in many other languages; for example, the French and Latin equivalents of I think therefore I am are and , where and are the equivalents of English "am," normally used as copulas. However, other languages prefer a different verb for existential use, as in the Spanish version (where the verb "to exist" is used rather than the copula or ‘to be’). Another type of existential usage is in clauses of the there is… or there are… type. Languages differ in the way they express such meanings; some of them use the copular verb, possibly with an expletive pronoun like the English there, while other languages use different verbs and constructions, like the French (which uses parts of the verb ‘to have,’ not the copula) or the Swedish (the passive voice of the verb for "to find"). For details, see existential clause. Relying on a unified theory of copular sentences, it has been proposed that the English there-sentences are subtypes of inverse copular constructions. Meanings Predicates formed using a copula may express identity: that the two noun phrases (subject and complement) have the same referent or express an identical concept: They may also express membership of a class or a subset relationship: Similarly they may express some property, relation or position, permanent or temporary: Essence vs. state Some languages use different copulas, or different syntax, to denote a permanent, essential characteristic of something versus a temporary state. For examples, see the sections on the Romance languages, Slavic languages and Irish. Forms In many languages the principal copula is a verb, like English (to) be, German , Mixtec , Touareg emous, etc. It may inflect for grammatical categories like tense, aspect and mood, like other verbs in the language. Being a very commonly used verb, it is likely that the copula has irregular inflected forms; in English, the verb be has a number of highly irregular (suppletive) forms and has more different inflected forms than any other English verb (am, is, are, was, were, etc.; see English verbs for details). Other copulas show more resemblances to pronouns. That is the case for Classical Chinese and Guarani, for instance. In highly synthetic languages, copulas are often suffixes, attached to a noun, but they may still behave otherwise like ordinary verbs: in Inuit languages. In some other languages, like Beja and Ket, the copula takes the form of suffixes that attach to a noun but are distinct from the person agreement markers used on predicative verbs. This phenomenon is known as nonverbal person agreement (or nonverbal subject agreement), and the relevant markers are always established as deriving from cliticized independent pronouns. Zero copula In some languages, copula omission occurs within a particular grammatical context. 
For example, speakers of Russian, Indonesian, Turkish, Hungarian, Arabic, Hebrew, Geʽez and Quechuan languages consistently drop the copula in present tense: Russian: , ‘I (am a) human;’ Indonesian: ‘I (am) a human;’ Turkish: ‘s/he (is a) human;’ Hungarian: ‘s/he (is) a human;’ Arabic: أنا إنسان, ‘I (am a) human;’ Hebrew: אני אדם, ʔani ʔadam "I (am a) human;" Geʽez: አነ ብእሲ/ብእሲ አነ ʔana bəʔəsi / bəʔəsi ʔana "I (am a) man" / "(a) man I (am)"; Southern Quechua: payqa runam "s/he (is) a human." The usage is known generically as the zero copula. In other tenses (sometimes in forms other than third person singular), the copula usually reappears. Some languages drop the copula in poetic or aphorismic contexts. Examples in English include The more, the better. Out of many, one. True that. Such poetic copula dropping is more pronounced in some languages other than English, like the Romance languages. In informal speech of English, the copula may also be dropped in general sentences, as in "She a nurse." It is a feature of African-American Vernacular English, but is also used by a variety of other English speakers. An example is the sentence "I saw twelve men, each a soldier." Examples in specific languages In Ancient Greek, when an adjective precedes a noun with an article, the copula is understood: , "the house is large," can be written , "large the house (is)." In Quechua (Southern Quechua used for the examples), zero copula is restricted to present tense in third person singular (kan): Payqa runam  — "(s)he is a human;" but: (paykuna) runakunam kanku "(they) are human."ap In Māori, the zero copula can be used in predicative expressions and with continuous verbs (many of which take a copulative verb in many Indo-European languages) — He nui te whare, literally "a big the house," "the house (is) big;" I te tēpu te pukapuka, literally "at (past locative particle) the table the book," "the book (was) on the table;" Nō Ingarangi ia, literally "from England (s)he," "(s)he (is) from England," Kei te kai au, literally "at the (act of) eating I," "I (am) eating." Alternatively, in many cases, the particle ko can be used as a copulative (though not all instances of ko are used as thus, like all other Maori particles, ko has multiple purposes): Ko nui te whare "The house is big;" Ko te pukapuka kei te tēpu "It is the book (that is) on the table;" Ko au kei te kai "It is me eating." However, when expressing identity or class membership, ko must be used: Ko tēnei tāku pukapuka "This is my book;" Ko Ōtautahi he tāone i Te Waipounamu "Christchurch is a city in the South Island (of New Zealand);" Ko koe tōku hoa "You are my friend." When expressing identity, ko can be placed on either object in the clause without changing the meaning (ko tēnei tāku pukapuka is the same as ko tāku pukapuka tēnei) but not on both (ko tēnei ko tāku pukapuka would be equivalent to saying "it is this, it is my book" in English). In Hungarian, zero copula is restricted to present tense in third person singular and plural: Ő ember/Ők emberek — "s/he is a human"/"they are humans;" but: (én) ember vagyok "I am a human," (te) ember vagy "you are a human," mi emberek vagyunk "we are humans," (ti) emberek vagytok "you (all) are humans." The copula also reappears for stating locations: az emberek a házban vannak, "the people are in the house," and for stating time: hat óra van, "it is six o'clock." However, the copula may be omitted in colloquial language: hat óra (van), "it is six o'clock." 
Hungarian uses copula lenni for expressing location: Itt van Róbert "Bob is here," but it is omitted in the third person present tense for attribution or identity statements: Róbert öreg "Bob is old;" ők éhesek "They are hungry;" Kati nyelvtudós "Cathy is a linguist" (but Róbert öreg volt "Bob was old," éhesek voltak "They were hungry," Kati nyelvtudós volt "Cathy was a linguist). In Turkish, both the third person singular and the third person plural copulas are omittable. Ali burada and Ali buradadır both mean "Ali is here," and Onlar aç and Onlar açlar both mean "They are hungry." Both of the sentences are acceptable and grammatically correct, but sentences with the copula are more formal. The Turkish first person singular copula suffix is omitted when introducing oneself. Bora ben (I am Bora) is grammatically correct, but "Bora benim" (same sentence with the copula) is not for an introduction (but is grammatically correct in other cases). Further restrictions may apply before omission is permitted. For example, in the Irish language, is, the present tense of the copula, may be omitted when the predicate is a noun. Ba, the past/conditional, cannot be deleted. If the present copula is omitted, the pronoun (e.g., é, í, iad) preceding the noun is omitted as well. Copula-like words Sometimes, the term copula is taken to include not only a language's equivalent(s) to the verb be but also other verbs or forms that serve to link a subject to a predicative expression (while adding semantic content of their own). For example, English verbs like become, get, feel, look, taste, smell, and seem can have this function, as in the following sentences (the predicative expression, the complement of the verb, is in italics): (This usage should be distinguished from the use of some of these verbs as "action" verbs, as in They look at the wall, in which look denotes an action and cannot be replaced by the basic copula are.) Some verbs have rarer, secondary uses as copular verbs, like the verb fall in sentences like The zebra fell victim to the lion. These extra copulas are sometimes called "semi-copulas" or "pseudo-copulas." For a list of common verbs of this type in English, see List of English copulae. In particular languages Indo-European In Indo-European languages, the words meaning to be are sometimes similar to each other. Due to the high frequency of their use, their inflection retains a considerable degree of similarity in some cases. Thus, for example, the English form is is a cognate of German ist, Latin est, Persian ast and Russian jest', even though the Germanic, Italic, Iranian and Slavic language groups split at least 3000 years ago. The origins of the copulas of most Indo-European languages can be traced back to four Proto-Indo-European stems: *es- (*h1es-), *sta- (*steh2-), *wes- and *bhu- (*bʰuH-). English The English copular verb be has eight forms (more than any other English verb): be, am, is, are, being, was, were, been. Additional archaic forms include art, wast, wert, and occasionally beest (as a subjunctive). For more details see English verbs. For the etymology of the various forms, see Indo-European copula. The main uses of the copula in English are described in the above sections. The possibility of copula omission is mentioned under . A particular construction found in English (particularly in speech) is the use of two successive copulas when only one appears necessary, as in My point is, is that.... 
The acceptability of this construction is a disputed matter in English prescriptive grammar. The simple English copula "be" may on occasion be substituted by other verbs with near identical meanings. Persian In Persian, the verb to be can either take the form of ast (cognate to English is) or budan (cognate to be). {| border="0" cellspacing="2" cellpadding="1" |- | Aseman abi ast. |آسمان آبی است | the sky is blue |- | Aseman abi khahad bood. |آسمان آبی خواهد بود | the sky will be blue |- | Aseman abi bood. |آسمان آبی بود | the sky was blue |} Hindustani In Hindustani (Hindi and Urdu), the copula होना ɦonɑ ہونا can be put into four grammatical aspects (simple, habitual, perfective, and progressive) and each of those four aspects can be put into five grammatical moods (indicative, presumptive, subjunctive, contrafactual, and imperative). Some example sentences using the simple aspect are shown below: Besides the verb होना honā (to be), there are three other verbs which can also be used as the copula, they are रहना rêhnā (to stay), जाना jānā (to go), and आना ānā (to come). The following table shows the conjugations of the copula होना honā in the five grammatical moods in the simple aspect. The transliteration scheme used is ISO 15919. Romance Copulas in the Romance languages usually consist of two different verbs that can be translated as "to be," the main one from the Latin esse (via Vulgar Latin essere; esse deriving from *es-), often referenced as sum (another of the Latin verb's principal parts) and a secondary one from stare (from *sta-), often referenced as sto. The resulting distinction in the modern forms is found in all the Iberian Romance languages, and to a lesser extent Italian, but not in French or Romanian. The difference is that the first usually refers to essential characteristics, while the second refers to states and situations, e.g., "Bob is old" versus "Bob is well." A similar division is found in the non-Romance Basque language (viz. egon and izan). (The English words just used, "essential" and "state," are also cognate with the Latin infinitives esse and stare. The word "stay" also comes from Latin stare, through Middle French estai, stem of Old French ester.) In Spanish and Portuguese, the high degree of verbal inflection, plus the existence of two copulas (ser and estar), means that there are 105 (Spanish) and 110 (Portuguese) separate forms to express the copula, compared to eight in English and one in Chinese. In some cases, the verb itself changes the meaning of the adjective/sentence. The following examples are from Portuguese: Slavic Some Slavic languages make a distinction between essence and state (similar to that discussed in the above section on the Romance languages), by putting a predicative expression denoting a state into the instrumental case, and essential characteristics are in the nominative. This can apply with other copula verbs as well: the verbs for "become" are normally used with the instrumental case. As noted above under , Russian and other North Slavic languages generally or often omit the copula in the present tense. Irish In Irish and Scottish Gaelic, there are two copulas, and the syntax is also changed when one is distinguishing between states or situations and essential characteristics. Describing the subject's state or situation typically uses the normal VSO ordering with the verb bí. The copula is is used to state essential characteristics or equivalences. 
{| border="0" cellspacing="2" cellpadding="1" valign="top" | align=left valign=top| || align=right valign=top | || align=left valign=top | |- |Is fear é Liam.|| "Liam is a man." ||(Lit., "Is man Liam.") |- |Is leabhar é sin.|| "That is a book." ||(Lit., "Is book it that.") |} The word is is the copula (rhymes with the English word "miss"). The pronoun used with the copula is different from the normal pronoun. For a masculine singular noun, é is used (for "he" or "it"), as opposed to the normal pronoun sé; for a feminine singular noun, í is used (for "she" or "it"), as opposed to normal pronoun sí; for plural nouns, iad is used (for "they" or "those"), as opposed to the normal pronoun siad. To describe being in a state, condition, place, or act, the verb "to be" is used: Tá mé ag rith. "I am running." Arabic dialects North Levantine Arabic The North Levantine Arabic dialect, spoken in Syria and Lebanon, has a negative copula formed by and a suffixed pronoun. Bantu languages Chichewa In Chichewa, a Bantu language spoken mainly in Malawi, a very similar distinction exists between permanent and temporary states as in Spanish and Portuguese, but only in the present tense. For a permanent state, in the 3rd person, the copula used in the present tense is ndi (negative sí): iyé ndi mphunzitsi "he is a teacher" iyé sí mphunzitsi "he is not a teacher" For the 1st and 2nd persons the particle ndi is combined with pronouns, e.g. ine "I": ine ndine mphunzitsi "I am a teacher" iwe ndiwe mphunzitsi "you (singular) are a teacher" ine síndine mphunzitsi "I am not a teacher" For temporary states and location, the copula is the appropriate form of the defective verb -li: iyé ali bwino "he is well" iyé sáli bwino "he is not well" iyé ali ku nyumbá "he is in the house" For the 1st and 2nd persons the person is shown, as normally with Chichewa verbs, by the appropriate pronominal prefix: ine ndili bwino "I am well" iwe uli bwino "you (sg.) are well" kunyumbá kuli bwino "at home (everything) is fine" In the past tenses, -li is used for both types of copula: iyé analí bwino "he was well (this morning)" iyé ánaalí mphunzitsi "he was a teacher (at that time)" In the future, subjunctive, or conditional tenses, a form of the verb khala ("sit/dwell") is used as a copula: máwa ákhala bwino "he'll be fine tomorrow" Muylaq' Aymaran Uniquely, the existence of the copulative verbalizer suffix in the Southern Peruvian Aymaran language variety, Muylaq' Aymara, is evident only in the surfacing of a vowel that would otherwise have been deleted because of the presence of a following suffix, lexically prespecified to suppress it. As the copulative verbalizer has no independent phonetic structure, it is represented by the Greek letter ʋ in the examples used in this entry. Accordingly, unlike in most other Aymaran variants, whose copulative verbalizer is expressed with a vowel-lengthening component, -:, the presence of the copulative verbalizer in Muylaq' Aymara is often not apparent on the surface at all and is analyzed as existing only meta-linguistically. However, in a verb phrase like "It is old," the noun thantha meaning "old" does not require the copulative verbalizer, thantha-wa "It is old." It is now pertinent to make some observations about the distribution of the copulative verbalizer. The best place to start is with words in which its presence or absence is obvious. 
When the vowel-suppressing first person simple tense suffix attaches to a verb, the vowel of the immediately preceding suffix is suppressed (in the examples in this subsection, the subscript "c" appears before vowel-suppressing suffixes in the interlinear gloss, to better distinguish instances of deletion that arise from the presence of a lexically pre-specified suffix from those that arise from other, e.g. phonotactic, motivations). Consider the verb sara-, which is inflected for the first person simple tense and so, predictably, loses its final root vowel: sar(a)-ct-wa "I go."

However, prior to the suffixation of the first person simple suffix -ct to the same root nominalized with the agentive nominalizer -iri, the word must be verbalized. The fact that the final vowel of -iri below is not suppressed indicates the presence of an intervening segment, the copulative verbalizer: sar(a)-iri-ʋ-t-wa "I usually go."

It is worthwhile to compare the copulative verbalizer in Muylaq' Aymara with that of La Paz Aymara, a variant which represents this suffix with vowel lengthening. Consider the near-identical sentences below, both translations of "I have a small house," in which the nominal root uta-ni ("house-attributive") is verbalized with the copulative verbalizer; note, however, that the correspondence between the copulative verbalizers of these two variants is not always a strict one-to-one relation.

La Paz Aymara: ma: jisk'a uta-ni-:-ct(a)-wa
Muylaq' Aymara: ma isk'a uta-ni-ʋ-ct-wa

Georgian

As in English, the verb "to be" (qopna) is irregular in Georgian (a Kartvelian language); different verb roots are employed in different tenses. The roots -ar-, -kn-, -qav-, and -qop- (past participle) are used in the present tense, future tense, past tense, and the perfective tenses respectively. Examples:

Masc'avlebeli var. "I am a teacher."
Masc'avlebeli viknebi. "I will be a teacher."
Masc'avlebeli viqavi. "I was a teacher."
Masc'avlebeli vqopilvar. "I have been a teacher."
Masc'avlebeli vqopiliqavi. "I had been a teacher."

In the last two examples (perfective and pluperfect), two roots are used in one verb compound. In the perfective tense, the root qop (which is the expected root for the perfective tense) is followed by the root ar, which is the root for the present tense. In the pluperfective tense, again, the root qop is followed by the past tense root qav. This formation is very similar to German (an Indo-European language), where the perfect and the pluperfect are expressed in the following way:

Ich bin Lehrer gewesen. "I have been a teacher," literally "I am teacher been."
Ich war Lehrer gewesen. "I had been a teacher," literally "I was teacher been."

Here, gewesen is the past participle of sein ("to be") in German. In both examples, as in Georgian, this participle is used together with the present and the past forms of the verb in order to conjugate for the perfect and the pluperfect aspects.

Haitian Creole

Haitian Creole, a French-based creole language, has three forms of the copula: se, ye, and the zero copula, no word at all (the position of which will be indicated with Ø, just for purposes of illustration).
Although no textual record exists of Haitian Creole at its earliest stages of development from French, se is derived from the French c'est, the normal French contraction of ce ("that") and the copula est ("is," a form of the verb être). The derivation of ye is less obvious, but we can assume that the French source was il est ("he/it is"), which, in rapidly spoken French, is very commonly reduced to y est. The use of a zero copula is unknown in French, and it is thought to be an innovation from the early days when Haitian Creole was first developing as a Romance-based pidgin. Latin also sometimes used a zero copula.

Which of se / ye / Ø is used in any given copula clause depends on complex syntactic factors that we can superficially summarize in the following four rules:

1. Use Ø (i.e., no word at all) in declarative sentences where the complement is an adjective phrase, prepositional phrase, or adverb phrase:

Li te Ø an Ayiti. "She was in Haiti." (Lit., "She past-tense in Haiti.")
Liv-la Ø jon. "The book is yellow." (Lit., "Book-the yellow.")
Timoun-yo Ø lakay. "The kids are [at] home." (Lit., "Kids-the home.")

2. Use se when the complement is a noun phrase. But, whereas other verbs come after any tense/mood/aspect particles (like pa to mark negation, or te to explicitly mark past tense, or ap to mark progressive aspect), se comes before any such particles:

Chal se ekriven. "Charles is writer."
Chal, ki se ekriven, pa vini. "Charles, who is writer, not come."

3. Use se where French and English have a dummy "it" subject:

Se mwen! "It's me!" (French C'est moi!)
Se pa fasil. "It's not easy." (Colloquial French C'est pas facile.)

4. Finally, use the other copula form ye in situations where the sentence's syntax leaves the copula at the end of a phrase:

Kijan ou ye? "How you are?"
Pou kimoun liv-la te ye? "Whose book was it?" (Lit., "Of who book-the past-tense is?")
M pa konnen kimoun li ye. "I don't know who he is." (Lit., "I not know who he is.")
Se yon ekriven Chal ye. "Charles is a writer!" (Lit., "It's a writer Charles is;" cf. French C'est un écrivain qu'il est.)

The above is, however, only a simplified analysis.

Japanese

The Japanese copula (most often translated into English as an inflected form of "to be") has many forms: da is used predicatively, na attributively, ni adverbially or as a connector, and desu predicatively or as a politeness indicator. Examples:

"I'm a student." (lit., I TOPIC student COPULA)
"This is a pen." (lit., this TOPIC pen COPULA-POLITE)

Desu is the polite form of the copula. Thus, many sentences like the ones below are almost identical in meaning and differ only in the speaker's politeness to the addressee and in nuance of how assured the person is of their statement.

"That's a hotel." (lit., that TOPIC hotel COPULA)
"That is a hotel." (lit., that TOPIC hotel COPULA-POLITE)

A predicate in Japanese is expressed by the predicative form of a verb, the predicative form of an adjective, or a noun plus the predicative form of a copula.

"This beer is delicious." (plain predicative adjective)
"This beer is delicious." (with the polite copula)
A third variant, with the plain copula attached directly to the adjective, is grammatically incorrect, because the plain copula can only be coupled with a noun to form a predicate.

Other forms of the copula are used in writing and formal speaking, or in public announcements, notices, and the like. The copula is subject to dialectal variation throughout Japan, resulting in forms like ya in Kansai and ja in Hiroshima.

Japanese also has two verbs corresponding to English "to be": aru and iru. They are not copulas but existential verbs. Aru is used for inanimate objects, including plants, whereas iru is used for animate things like people, animals, and robots, though there are exceptions to this generalization.

"The book is on a table."
"Kobayashi is here."

Japanese speakers, when learning English, often drop the auxiliary verbs "be" and "do," incorrectly believing that "be" is a semantically empty copula equivalent to da and desu.

Korean

For sentences with predicate nominatives, the copula "이" (i-) is added to the predicate nominative (with no space in between).

바나나는 과일이다. Ba-na-na-neun gwa-il-i-da. "Bananas are a fruit."

Some adjectives (usually colour adjectives) are nominalized and used with the copula "이" (i-).

1. Without the copula "이" (i-):

장미는 빨개요. Jang-mi-neun ppal-gae-yo. "Roses are red."

2. With the copula "이" (i-):

장미는 빨간색이다. Jang-mi-neun ppal-gan-saek-i-da. "Roses are red-coloured."

Some Korean adjectives are derived using the copula. Separating these adjectives and nominalizing the former part will often result in a sentence with a related, but different, meaning. Using the separated sentence in a situation where the un-separated sentence is appropriate is usually acceptable, as the listener can decide what the speaker is trying to say using the context.

Chinese

In Chinese, both states and qualities are, in general, expressed with stative verbs (SV) with no need for a copula, e.g., "to be tired" (累 lèi), "to be hungry" (饿 è), "to be located at" (在 zài), "to be stupid" (笨 bèn), and so forth. A sentence can consist simply of a pronoun and such a verb: for example, 我饿 wǒ è ("I am hungry"). Usually, however, verbs expressing qualities are qualified by an adverb (meaning "very," "not," "quite," etc.); when not otherwise qualified, they are often preceded by 很 hěn, which in other contexts means "very," but in this use often has no particular meaning. Only sentences with a noun as the complement (e.g., "This is my sister") use the copular verb "to be": 是 shì. This is used frequently; for example, instead of having a verb meaning "to be Chinese," the usual expression is "to be a Chinese person" (我是中国人 wǒ shì Zhōngguórén, "I am a Chinese person," "I am Chinese"). This 是 is sometimes called an equative verb.
Another possibility is for the complement to be just a noun modifier (ending in 的 de), the noun being omitted. Before the Han dynasty, the character 是 served as a demonstrative pronoun meaning "this." (This usage survives in some idioms and proverbs.) Some linguists believe that 是 developed into a copula because it often appeared, as a repetitive subject, after the subject of a sentence (in classical Chinese one could say, for example, "George W. Bush, this president of the United States," meaning "George W. Bush is the president of the United States"). The character 是 appears to be formed as a compound of characters with the meanings of "early" and "straight."

Another use of 是 in modern Chinese is in combination with the modifier 的 de to mean "yes" or to show agreement. For example:

Question: 你的汽车是不是红色的? nǐ de qìchē shì bú shì hóngsè de? "Is your car red or not?"
Response: 是的 shì de "Is," meaning "Yes," or 不是 bú shì "Not is," meaning "No."

(A more common way of showing that the person asking the question is correct is by simply saying "right" or "correct," 对 duì; the corresponding negative answer is 不对 bú duì, "not right.")

Yet another use of 是 is in the shì...(de) construction, which is used to emphasize a particular element of the sentence. In Hokkien, 是 sī acts as the copula, as does 是 in Wu Chinese. Cantonese uses 係 instead of 是; similarly, Hakka uses 係 he55.

Siouan languages

In Siouan languages like Lakota, in principle almost all words are, according to their structure, verbs. So not only (transitive, intransitive, and so-called "stative") verbs but even nouns often behave like verbs and do not need copulas. For example, the word wičháša refers to a man, and the verb "to-be-a-man" is expressed as wimáčhaša/winíčhaša/wičháša (I am/you are/he is a man). Yet there also is a copula héčha (to be a ...) that in most cases is used: wičháša hemáčha/heníčha/héčha (I am/you are/he is a man). In order to express the statement "I am a doctor of profession," one has to say pezuta wičháša hemáčha. But, in order to express that that person is THE doctor (say, the one who had been phoned to help), one must use another copula, iyé (to be the one): pežúta wičháša (kiŋ) miyé yeló (medicine-man DEF ART I-am-the-one MALE ASSERT).

In order to refer to space (e.g., Robert is in the house), various verbs are used, e.g., yaŋkÁ (lit., to sit) for humans, or háŋ/hé (to stand upright) for inanimate objects of a certain shape. "Robert is in the house" could be translated as Robert thimáhel yaŋké (yeló), whereas "There's one restaurant next to the gas station" translates as Owótethipi wígli-oínažiŋ kiŋ hél isákhib waŋ hé.

Constructed languages

The constructed language Lojban has two words that act similarly to a copula in natural languages. The clause me ... me'u turns whatever follows it into a predicate that means to be (among) what it follows. For example, me la .bob. (me'u) means "to be Bob," and me le ci mensi (me'u) means "to be one of the three sisters." Another is du, which is itself a predicate meaning that all its arguments are the same thing (equal). One word which is often confused for a copula in Lojban, but is not one, is cu. It merely indicates that the word which follows is the main predicate of the sentence. For example, lo pendo be mi cu zgipre means "my friend is a musician," but the word cu does not correspond to English is; instead, the word zgipre, which is a predicate, corresponds to the entire phrase "is a musician".
The word cu is used to prevent the parse lo pendo be mi zgipre, which would mean "the friend-of-me type of musician".

See also

Indo-European copula
Nominal sentence
Stative verb
Subject complement
Zero copula
5635
https://en.wikipedia.org/wiki/Christopher%20Columbus
Christopher Columbus
Christopher Columbus (between 25 August and 31 October 1451 – 20 May 1506) was an Italian explorer and navigator from the Republic of Genoa who completed four Spanish-based voyages across the Atlantic Ocean sponsored by the Catholic Monarchs, opening the way for the widespread European exploration and colonization of the Americas. His expeditions were the first known European contact with the Caribbean and Central and South America. The name Christopher Columbus is the anglicisation of the Latin Christophorus Columbus. Growing up on the coast of Liguria, he went to sea at a young age and travelled widely, as far north as the British Isles and as far south as what is now Ghana. He married Portuguese noblewoman Filipa Moniz Perestrelo, who bore a son, Diego, and was based in Lisbon for several years. He later took a Castilian mistress, Beatriz Enríquez de Arana, who bore a son, Ferdinand. Largely self-educated, Columbus was knowledgeable in geography, astronomy, and history. He developed a plan to seek a western sea passage to the East Indies, hoping to profit from the lucrative spice trade. After the Granada War, and Columbus's persistent lobbying in multiple kingdoms, the Catholic Monarchs, Queen Isabella I and King Ferdinand II, agreed to sponsor a journey west. Columbus left Castile in August 1492 with three ships and made landfall in the Americas on 12 October, ending the period of human habitation in the Americas now referred to as the pre-Columbian era. His landing place was an island in the Bahamas, known by its native inhabitants as Guanahani. He then visited the islands now known as Cuba and Hispaniola, establishing a colony in what is now Haiti. Columbus returned to Castile in early 1493, with captured natives. Word of his voyage soon spread throughout Europe. Columbus made three further voyages to the Americas, exploring the Lesser Antilles in 1493, Trinidad and the northern coast of South America in 1498, and the east coast of Central America in 1502. Many names he gave to geographical features, particularly islands, are still in use. He gave the name indios ("Indians") to the indigenous peoples he encountered. The extent to which he was aware that the Americas were a wholly separate landmass is uncertain; he never clearly renounced his belief that he had reached the Far East. As a colonial governor, Columbus was accused by some of his contemporaries of significant brutality and removed from the post. Columbus's strained relationship with the Crown of Castile and its colonial administrators in America led to his arrest and removal from Hispaniola in 1500, and later to protracted litigation over the privileges he and his heirs claimed were owed to them by the crown. Columbus's expeditions inaugurated a period of exploration, conquest, and colonization that lasted for centuries, thus bringing the Americas into the European sphere of influence. The transfer of commodities, ideas, and people between the Old World and New World that followed his first voyage is known as the Columbian exchange. These events and the effects which persist to the present are often cited as the beginning of the modern era. Columbus was widely celebrated in the centuries after his death, but public perception fractured in the 21st century due to greater attention to the harms committed under his governance, particularly the beginning of the depopulation of Hispaniola's indigenous Taínos, caused by Old World diseases and mistreatment, including slavery.
Many places in the Western Hemisphere bear his name, including the South American country of Colombia, the Canadian province of British Columbia, the American city Columbus, Ohio, and the U.S. capital, the District of Columbia. Early life Columbus's early life is obscure, but scholars believe he was born in the Republic of Genoa between 25 August and 31 October 1451. His father was Domenico Colombo, a wool weaver who worked in Genoa and Savona and owned a cheese stand at which young Christopher worked. His mother was Susanna Fontanarossa. He had three brothers—Bartholomew, Giovanni Pellegrino, and Giacomo (also called Diego)—as well as a sister, Bianchinetta. Bartholomew ran a cartography workshop in Lisbon for at least part of his adulthood. His first language is presumed to have been a Genoese dialect (Ligurian), though Columbus probably never wrote in it. His name in 16th-century Genoese was Cristoffa Corombo, in Italian Cristoforo Colombo, and in Spanish Cristóbal Colón. In one of his writings, he says he went to sea at 14. In 1470, the family moved to Savona, where Domenico took over a tavern. Some modern authors have argued that he was not from Genoa, but from the Aragon region of Spain or from Portugal. These competing hypotheses have been discounted by most scholars. In 1473, Columbus began his apprenticeship as a business agent for the wealthy Spinola, Centurione, and Di Negro families of Genoa. Later, he made a trip to the Greek island Chios in the Aegean Sea, then ruled by Genoa. In May 1476, he took part in an armed convoy sent by Genoa to carry valuable cargo to northern Europe. He probably visited Bristol, England, and Galway, Ireland, where he may have visited St. Nicholas' Collegiate Church. It has been speculated he went to Iceland in 1477, though many scholars doubt this. It is known that in the autumn of 1477, he sailed on a Portuguese ship from Galway to Lisbon, where he found his brother Bartholomew, and they continued trading for the Centurione family. Columbus based himself in Lisbon from 1477 to 1485. In 1478, the Centuriones sent Columbus on a sugar-buying trip to Madeira. He married Felipa Perestrello e Moniz, daughter of Bartolomeu Perestrello, a Portuguese nobleman of Lombard origin, who had been the donatary captain of Porto Santo. In 1479 or 1480, Columbus's son Diego was born. Between 1482 and 1485, Columbus traded along the coasts of West Africa, reaching the Portuguese trading post of Elmina at the Guinea coast in present-day Ghana. Before 1484, Columbus returned to Porto Santo to find that his wife had died. He returned to Portugal to settle her estate and take Diego with him. He left Portugal for Castile in 1485, where he took a mistress in 1487, a 20-year-old orphan named Beatriz Enríquez de Arana. It is likely that Beatriz met Columbus when he was in Córdoba, a gathering place for Genoese merchants and where the court of the Catholic Monarchs was located at intervals. Beatriz, unmarried at the time, gave birth to Columbus's second son, Fernando Columbus, in July 1488, named for the monarch of Aragon. Columbus recognized the boy as his offspring. Columbus entrusted his older, legitimate son Diego to take care of Beatriz and pay the pension set aside for her following his death, but Diego was negligent in his duties. Columbus learned Latin, Portuguese, and Castilian.
He read widely about astronomy, geography, and history, including the works of Ptolemy, Pierre d'Ailly's Imago Mundi, the travels of Marco Polo and Sir John Mandeville, Pliny's Natural History, and Pope Pius II's Historia rerum ubique gestarum. According to historian Edmund Morgan, Columbus was not a scholarly man. Yet he studied these books, made hundreds of marginal notations in them and came out with ideas about the world that were characteristically simple and strong and sometimes wrong ... Quest for Asia Background Under the Mongol Empire's hegemony over Asia and the Pax Mongolica, Europeans had long enjoyed a safe land passage on the Silk Road to India, parts of East Asia, including China and Maritime Southeast Asia, which were sources of valuable goods. With the fall of Constantinople to the Ottoman Empire in 1453, the Silk Road was closed to Christian traders. In 1474, the Florentine astronomer Paolo dal Pozzo Toscanelli suggested to King Afonso V of Portugal that sailing west across the Atlantic would be a quicker way to reach the Maluku (Spice) Islands, China, Japan and India than the route around Africa, but Afonso rejected his proposal. In the 1480s, Columbus and his brother proposed a plan to reach the East Indies by sailing west. Columbus supposedly wrote Toscanelli in 1481 and received encouragement, along with a copy of a map the astronomer had sent Afonso implying that a westward route to Asia was possible. Columbus's plans were complicated by Bartolomeu Dias's rounding of the Cape of Good Hope in 1488, which suggested the Cape Route around Africa to Asia. Carol Delaney and other commentators have argued that Columbus was a Christian millennialist and apocalypticist and that these beliefs motivated his quest for Asia in a variety of ways. Columbus often wrote about seeking gold in the log books of his voyages and writes about acquiring it "in such quantity that the sovereigns... will undertake and prepare to go conquer the Holy Sepulcher" in a fulfillment of Biblical prophecy. Columbus often wrote about converting all races to Christianity. Abbas Hamandi argues that Columbus was motivated by the hope of "[delivering] Jerusalem from Muslim hands" by "using the resources of newly discovered lands". Geographical considerations Despite a popular misconception to the contrary, nearly all educated Westerners of Columbus's time knew that the Earth is spherical, a concept that had been understood since antiquity. The techniques of celestial navigation, which uses the position of the Sun and the stars in the sky, had long been in use by astronomers and were beginning to be implemented by mariners. As far back as the 3rd century BC, Eratosthenes had correctly computed the circumference of the Earth by using simple geometry and studying the shadows cast by objects at two remote locations. In the 1st century BC, Posidonius confirmed Eratosthenes's results by comparing stellar observations at two separate locations. These measurements were widely known among scholars, but Ptolemy's use of the smaller, old-fashioned units of distance led Columbus to underestimate the size of the Earth by about a third. 
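A rough, illustrative calculation (not part of the historical record) makes the scale of this confusion concrete. It multiplies the figure of 56.67 miles per degree, cited in the next paragraph, by 360 degrees and converts the result under each reading of the mile; the Roman mile value (about 1,480 m) is the one given below, while the length assumed here for the Arabic mile, roughly 1.97 km, is a commonly cited modern estimate rather than a figure from this article. In Python:

# Back-of-the-envelope sketch of Columbus's unit confusion (illustrative only).
# Assumption: ARABIC_MILE_KM (~1.973 km) is a commonly cited modern estimate,
# not a value taken from this article; the other two figures appear below.
MILES_PER_DEGREE = 56.67       # Alfraganus's estimate, reported by d'Ailly
ROMAN_MILE_KM = 1.480          # the shorter mile Columbus assumed
ARABIC_MILE_KM = 1.973         # assumed length of the mile Alfraganus meant

circumference_miles = 360 * MILES_PER_DEGREE                 # about 20,400 "miles"
columbus_reading_km = circumference_miles * ROMAN_MILE_KM    # about 30,200 km
alfraganus_intent_km = circumference_miles * ARABIC_MILE_KM  # about 40,300 km

print(f"As Columbus read it:  {columbus_reading_km:,.0f} km")
print(f"As Alfraganus meant:  {alfraganus_intent_km:,.0f} km")
print(f"Fraction of a 40,000 km Earth: {columbus_reading_km / 40_000:.0%}")  # ~75%

The roughly 75% fraction recovered by this sketch matches the estimate attributed to Columbus in the following paragraph.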
Three cosmographical parameters determined the bounds of Columbus's enterprise: the distance across the ocean between Europe and Asia, which depended on the extent of the oikumene, i.e., the Eurasian land-mass stretching east–west between Spain and China; the circumference of the Earth; and the number of miles or leagues in a degree of longitude, which was possible to deduce from the theory of the relationship between the size of the surfaces of water and the land as held by the followers of Aristotle in medieval times. From Pierre d'Ailly's Imago Mundi (1410), Columbus learned of Alfraganus's estimate that a degree of latitude (equal to approximately a degree of longitude along the equator) spanned 56.67 Arabic miles (equivalent to 76.2 mi), but he did not realize that this was expressed in the Arabic mile rather than the shorter Roman mile (about 1,480 m) with which he was familiar. Columbus therefore estimated the size of the Earth to be about 75% of Eratosthenes's calculation, and the distance westward from the Canary Islands to the Indies as only 68 degrees (a 58% error). Most scholars of the time accepted Ptolemy's estimate that Eurasia spanned 180° longitude, rather than the actual 130° (to the Chinese mainland) or 150° (to Japan at the latitude of Spain). Columbus believed an even higher estimate, leaving a smaller percentage for water. In d'Ailly's Imago Mundi, Columbus read Marinus of Tyre's estimate that the longitudinal span of Eurasia was 225° at the latitude of Rhodes. Some historians, such as Samuel Morison, have suggested that he followed the statement in the apocryphal book 2 Esdras (6:42) that "six parts [of the globe] are habitable and the seventh is covered with water." He was also aware of Marco Polo's claim that Japan (which he called "Cipangu") lay well to the east of China ("Cathay"), and closer to the equator than it actually is. He was influenced by Toscanelli's idea that there were inhabited islands even farther to the east than Japan, including the mythical Antillia, which he thought might lie not much farther to the west than the Azores. Based on his sources, Columbus estimated the distance from the Canary Islands west to Japan at about 2,400 nautical miles, roughly a quarter of the actual figure. No ship in the 15th century could have carried enough food and fresh water for such a long voyage, and the dangers involved in navigating through the uncharted ocean would have been formidable. Most European navigators reasonably concluded that a westward voyage from Europe to Asia was unfeasible. The Catholic Monarchs, however, having completed the Reconquista, an expensive war against the Moors in the Iberian Peninsula, were eager to obtain a competitive edge over other European countries in the quest for trade with the Indies. Columbus's project, though far-fetched, held the promise of such an advantage. Nautical considerations Though Columbus was wrong about the number of degrees of longitude that separated Europe from the Far East and about the distance that each degree represented, he did take advantage of the trade winds, which would prove to be the key to his successful navigation of the Atlantic Ocean. He planned to first sail to the Canary Islands before continuing west with the northeast trade wind. Part of the return to Spain would require traveling against the wind using an arduous sailing technique called beating, during which progress is made very slowly.
To effectively make the return voyage, Columbus would need to follow the curving trade winds northeastward to the middle latitudes of the North Atlantic, where he would be able to catch the "westerlies" that blow eastward to the coast of Western Europe. The navigational technique for travel in the Atlantic appears to have been exploited first by the Portuguese, who referred to it as the volta do mar ('turn of the sea'). Through his marriage to his first wife, Felipa Perestrello, Columbus had access to the nautical charts and logs that had belonged to her deceased father, Bartolomeu Perestrello, who had served as a captain in the Portuguese navy under Prince Henry the Navigator. In the mapmaking shop where he worked with his brother Bartholomew, Columbus also had ample opportunity to hear the stories of old seamen about their voyages to the western seas, but his knowledge of the Atlantic wind patterns was still imperfect at the time of his first voyage. By sailing due west from the Canary Islands during hurricane season, skirting the so-called horse latitudes of the mid-Atlantic, he risked being becalmed and running into a tropical cyclone, both of which he avoided by chance. Quest for financial support for a voyage By about 1484, Columbus proposed his planned voyage to King John II of Portugal. The king submitted Columbus's proposal to his advisors, who rejected it, correctly, on the grounds that Columbus's estimate for a voyage of 2,400 nmi was only a quarter of what it should have been. In 1488, Columbus again appealed to the court of Portugal, and John II again granted him an audience. That meeting also proved unsuccessful, in part because not long afterwards Bartolomeu Dias returned to Portugal with news of his successful rounding of the southern tip of Africa (near the Cape of Good Hope). Columbus sought an audience with the monarchs Ferdinand II of Aragon and Isabella I of Castile, who had united several kingdoms in the Iberian Peninsula by marrying and now ruled together. On 1 May 1486, permission having been granted, Columbus presented his plans to Queen Isabella, who, in turn, referred it to a committee. The learned men of Spain, like their counterparts in Portugal, replied that Columbus had grossly underestimated the distance to Asia. They pronounced the idea impractical and advised the Catholic Monarchs to pass on the proposed venture. To keep Columbus from taking his ideas elsewhere, and perhaps to keep their options open, the sovereigns gave him an allowance, totaling about 14,000 maravedis for the year, or about the annual salary of a sailor. In May 1489, the queen sent him another 10,000 maravedis, and the same year the monarchs furnished him with a letter ordering all cities and towns under their dominion to provide him food and lodging at no cost. Columbus also dispatched his brother Bartholomew to the court of Henry VII of England to inquire whether the English crown might sponsor his expedition, but he was captured by pirates en route, and only arrived in early 1491. By that time, Columbus had retreated to La Rábida Friary, where the Spanish crown sent him 20,000 maravedis to buy new clothes and instructions to return to the Spanish court for renewed discussions. Agreement with the Spanish crown Columbus waited at King Ferdinand's camp until Ferdinand and Isabella conquered Granada, the last Muslim stronghold on the Iberian Peninsula, in January 1492. A council led by Isabella's confessor, Hernando de Talavera, found Columbus's proposal to reach the Indies implausible. 
Columbus had left for France when Ferdinand intervened, first sending Talavera and Bishop Diego Deza to appeal to the queen. Isabella was finally convinced by the king's clerk Luis de Santángel, who argued that Columbus would take his ideas elsewhere, and offered to help arrange the funding. Isabella then sent a royal guard to fetch Columbus, who had traveled 2 leagues (over 10 km) toward Córdoba. In the April 1492 "Capitulations of Santa Fe", King Ferdinand and Queen Isabella promised Columbus that if he succeeded he would be given the rank of Admiral of the Ocean Sea and appointed Viceroy and Governor of all the new lands he might claim for Spain. He had the right to nominate three persons, from whom the sovereigns would choose one, for any office in the new lands. He would be entitled to 10% (diezmo) of all the revenues from the new lands in perpetuity. He also would have the option of buying one-eighth interest in any commercial venture in the new lands, and receive one-eighth (ochavo) of the profits. In 1500, during his third voyage to the Americas, Columbus was arrested and dismissed from his posts. He and his sons, Diego and Fernando, then conducted a lengthy series of court cases against the Castilian crown, known as the pleitos colombinos, alleging that the Crown had illegally reneged on its contractual obligations to Columbus and his heirs. The Columbus family had some success in their first litigation, as a judgment of 1511 confirmed Diego's position as viceroy but reduced his powers. Diego resumed litigation in 1512, which lasted until 1536, and further disputes initiated by heirs continued until 1790. Voyages Between 1492 and 1504, Columbus completed four round-trip voyages between Spain and the Americas, each voyage being sponsored by the Crown of Castile. On his first voyage he reached the Americas, initiating the European exploration and colonization of the continent, as well as the Columbian exchange. His role in history is thus important to the Age of Discovery, Western history, and human history writ large. In Columbus's letter on the first voyage, published following his first return to Spain, he claimed that he had reached Asia, as previously described by Marco Polo and other Europeans. Over his subsequent voyages, Columbus refused to acknowledge that the lands he visited and claimed for Spain were not part of Asia, in the face of mounting evidence to the contrary. This might explain, in part, why the American continent was named after the Florentine explorer Amerigo Vespucci—who received credit for recognizing it as a "New World"—and not after Columbus. First voyage (1492–1493) On the evening of 3 August 1492, Columbus departed from Palos de la Frontera with three ships. The largest was a carrack, the Santa María, owned and captained by Juan de la Cosa, and under Columbus's direct command. The other two were smaller caravels, the Pinta and the Niña, piloted by the Pinzón brothers. Columbus first sailed to the Canary Islands. There he restocked provisions and made repairs then departed from San Sebastián de La Gomera on 6 September, for what turned out to be a five-week voyage across the ocean. On 7 October, the crew spotted "[i]mmense flocks of birds". On 11 October, Columbus changed the fleet's course to due west, and sailed through the night, believing land was soon to be found. At around 02:00 the following morning, a lookout on the Pinta, Rodrigo de Triana, spotted land. 
The captain of the Pinta, Martín Alonso Pinzón, verified the sight of land and alerted Columbus. Columbus later maintained that he had already seen a light on the land a few hours earlier, thereby claiming for himself the lifetime pension promised by Ferdinand and Isabella to the first person to sight land. Columbus called this island (in what is now the Bahamas) San Salvador (meaning "Holy Savior"); the natives called it Guanahani. Christopher Columbus's journal entry of 12 October 1492 states:I saw some who had marks of wounds on their bodies and I made signs to them asking what they were; and they showed me how people from other islands nearby came there and tried to take them, and how they defended themselves; and I believed and believe that they come here from tierra firme to take them captive. They should be good and intelligent servants, for I see that they say very quickly everything that is said to them; and I believe they would become Christians very easily, for it seemed to me that they had no religion. Our Lord pleasing, at the time of my departure I will take six of them from here to Your Highnesses in order that they may learn to speak.Columbus called the inhabitants of the lands that he visited Los Indios (Spanish for "Indians"). He initially encountered the Lucayan, Taíno, and Arawak peoples. Noting their gold ear ornaments, Columbus took some of the Arawaks prisoner and insisted that they guide him to the source of the gold. Columbus did not believe he needed to create a fortified outpost, writing, "the people here are simple in war-like matters ... I could conquer the whole of them with fifty men, and govern them as I pleased." The Taínos told Columbus that another indigenous tribe, the Caribs, were fierce warriors and cannibals, who made frequent raids on the Taínos, often capturing their women, although this may have been a belief perpetuated by the Spaniards to justify enslaving them. Columbus also explored the northeast coast of Cuba, where he landed on 28 October. On the night of 26 November, Martín Alonso Pinzón took the Pinta on an unauthorized expedition in search of an island called "Babeque" or "Baneque", which the natives had told him was rich in gold. Columbus, for his part, continued to the northern coast of Hispaniola, where he landed on 6 December. There, the Santa María ran aground on 25 December 1492 and had to be abandoned. The wreck was used as a target for cannon fire to impress the native peoples. Columbus was received by the native cacique Guacanagari, who gave him permission to leave some of his men behind. Columbus left 39 men, including the interpreter Luis de Torres, and founded the settlement of La Navidad, in present-day Haiti. Columbus took more natives prisoner and continued his exploration. He kept sailing along the northern coast of Hispaniola with a single ship until he encountered Pinzón and the Pinta on 6 January. On 13 January 1493, Columbus made his last stop of this voyage in the Americas, in the Bay of Rincón in northeast Hispaniola. There he encountered the Ciguayos, the only natives who offered violent resistance during this voyage. The Ciguayos refused to trade the amount of bows and arrows that Columbus desired; in the ensuing clash one Ciguayo was stabbed in the buttocks and another wounded with an arrow in his chest. Because of these events, Columbus called the inlet the Golfo de Las Flechas (Bay of Arrows). 
Columbus headed for Spain on the Niña, but a storm separated him from the Pinta, and forced the Niña to stop at the island of Santa Maria in the Azores. Half of his crew went ashore to say prayers of thanksgiving in a chapel for having survived the storm. But while praying, they were imprisoned by the governor of the island, ostensibly on suspicion of being pirates. After a two-day standoff, the prisoners were released, and Columbus again set sail for Spain. Another storm forced Columbus into the port at Lisbon. From there he went to Vale do Paraíso north of Lisbon to meet King John II of Portugal, who told Columbus that he believed the voyage to be in violation of the 1479 Treaty of Alcáçovas. After spending more than a week in Portugal, Columbus set sail for Spain. Returning to Palos on 15 March 1493, he was given a hero's welcome and soon afterward received by Isabella and Ferdinand in Barcelona. Columbus's letter on the first voyage, dispatched to the Spanish court, was instrumental in spreading the news throughout Europe about his voyage. Almost immediately after his arrival in Spain, printed versions began to appear, and word of his voyage spread rapidly. Most people initially believed that he had reached Asia. The Bulls of Donation, three papal bulls of Pope Alexander VI delivered in 1493, purported to grant overseas territories to Portugal and the Catholic Monarchs of Spain. They were replaced by the Treaty of Tordesillas of 1494. The two earliest published copies of Columbus's letter on the first voyage aboard the Niña were donated in 2017 by the Jay I. Kislak Foundation to the University of Miami library in Coral Gables, Florida, where they are housed. Second voyage (1493–1496) On 24 September 1493, Columbus sailed from Cádiz with 17 ships, and supplies to establish permanent colonies in the Americas. He sailed with nearly 1,500 men, including sailors, soldiers, priests, carpenters, stonemasons, metalworkers, and farmers. Among the expedition members were Alvarez Chanca, a physician who wrote a detailed account of the second voyage; Juan Ponce de León, the first governor of Puerto Rico and Florida; the father of Bartolomé de las Casas; Juan de la Cosa, a cartographer who is credited with making the first world map depicting the New World; and Columbus's youngest brother Diego. The fleet stopped at the Canary Islands to take on more supplies, and set sail again on 7 October, deliberately taking a more southerly course than on the first voyage. On 3 November, they arrived in the Windward Islands; the first island they encountered was named Dominica by Columbus, but not finding a good harbor there, they anchored off a nearby smaller island, which he named Mariagalante, now a part of Guadeloupe and called Marie-Galante. Other islands named by Columbus on this voyage were Montserrat, Antigua, Saint Martin, the Virgin Islands, as well as many others. On 22 November, Columbus returned to Hispaniola to visit La Navidad, where 39 Spaniards had been left during the first voyage. Columbus found the fort in ruins, destroyed by the Taínos after some of the Spaniards reportedly antagonized their hosts with their unrestrained lust for gold and women. Columbus then established a poorly located and short-lived settlement to the east, La Isabela, in the present-day Dominican Republic. From April to August 1494, Columbus explored Cuba and Jamaica, then returned to Hispaniola. By the end of 1494, disease and famine had killed two-thirds of the Spanish settlers. 
Columbus implemented encomienda, a Spanish labor system that rewarded conquerors with the labor of conquered non-Christian people. Columbus executed Spanish colonists for minor crimes, and used dismemberment as punishment. Columbus and the colonists enslaved the indigenous people, including children. Natives were beaten, raped, and tortured for the location of imagined gold. Thousands committed suicide rather than face the oppression. In February 1495, Columbus rounded up about 1,500 Arawaks, some of whom had rebelled, in a great slave raid. About 500 of the strongest were shipped to Spain as slaves, with about two hundred of those dying en route. In June 1495, the Spanish crown sent ships and supplies to Hispaniola. In October, Florentine merchant Gianotto Berardi, who had won the contract to provision the fleet of Columbus's second voyage and to supply the colony on Hispaniola, received almost 40,000 maravedís worth of enslaved Indians. He renewed his effort to get supplies to Columbus, and was working to organize a fleet when he suddenly died in December. On 10 March 1496, having been away about 30 months, the fleet departed La Isabela. On 8 June the crew sighted land somewhere between Lisbon and Cape St. Vincent, and disembarked in Cádiz on 11 June. Third voyage (1498–1500) On 30 May 1498, Columbus left with six ships from Sanlúcar, Spain. The fleet called at Madeira and the Canary Islands, where it divided in two, with three ships heading for Hispaniola and the other three vessels, commanded by Columbus, sailing south to the Cape Verde Islands and then westward across the Atlantic. It is probable that this expedition was intended at least partly to confirm rumors of a large continent south of the Caribbean Sea, that is, South America. On 31 July they sighted Trinidad, the most southerly of the Caribbean islands. On 5 August, Columbus sent several small boats ashore on the southern side of the Paria Peninsula in what is now Venezuela, near the mouth of the Orinoco river. This was the first recorded landing of Europeans on the mainland of South America, which Columbus realized must be a continent. The fleet then sailed to the islands of Chacachacare and Margarita, reaching the latter on 14 August, and sighted Tobago and Grenada from afar, according to some scholars. On 19 August, Columbus returned to Hispaniola. There he found settlers in rebellion against his rule, and his unfulfilled promises of riches. Columbus had some of the Europeans tried for their disobedience; at least one rebel leader was hanged. In October 1499, Columbus sent two ships to Spain, asking the Court of Spain to appoint a royal commissioner to help him govern. By this time, accusations of tyranny and incompetence on the part of Columbus had also reached the Court. The sovereigns sent Francisco de Bobadilla, a relative of Marquesa Beatriz de Bobadilla, a patron of Columbus and a close friend of Queen Isabella, to investigate the accusations of brutality made against the Admiral. Arriving in Santo Domingo while Columbus was away, Bobadilla was immediately met with complaints about all three Columbus brothers. He moved into Columbus's house and seized his property, took depositions from the Admiral's enemies, and declared himself governor. Bobadilla reported to Spain that Columbus once punished a man found guilty of stealing corn by having his ears and nose cut off and then selling him into slavery. He claimed that Columbus regularly used torture and mutilation to govern Hispaniola. 
Testimony recorded in the report stated that Columbus congratulated his brother Bartholomew on "defending the family" when the latter ordered a woman paraded naked through the streets and then had her tongue cut out because she had "spoken ill of the admiral and his brothers". The document also describes how Columbus put down native unrest and revolt: he first ordered a brutal suppression of the uprising in which many natives were killed, and then paraded their dismembered bodies through the streets in an attempt to discourage further rebellion. Columbus vehemently denied the charges. The neutrality and accuracy of the accusations and investigations of Bobadilla toward Columbus and his brothers have been disputed by historians, given the anti-Italian sentiment of the Spaniards and Bobadilla's desire to take over Columbus's position. In early October 1500, Columbus and Diego presented themselves to Bobadilla, and were put in chains aboard La Gorda, the caravel on which Bobadilla had arrived at Santo Domingo. They were returned to Spain, and languished in jail for six weeks before King Ferdinand ordered their release. Not long after, the king and queen summoned the Columbus brothers to the Alhambra palace in Granada. The sovereigns expressed indignation at the actions of Bobadilla, who was then recalled and ordered to make restitution of the property he had confiscated from Columbus. The royal couple heard the brothers' pleas; restored their freedom and wealth; and, after much persuasion, agreed to fund Columbus's fourth voyage. However, Nicolás de Ovando was to replace Bobadilla and be the new governor of the West Indies. New light was shed on the seizure of Columbus and his brother Bartholomew, the Adelantado, with the discovery by archivist Isabel Aguirre of an incomplete copy of the testimonies against them gathered by Francisco de Bobadilla at Santo Domingo in 1500. She found a manuscript copy of this pesquisa (inquiry) in the Archive of Simancas, Spain, uncatalogued until she and Consuelo Varela published their book, La caída de Cristóbal Colón: el juicio de Bobadilla (The fall of Christopher Colón: the judgement of Bobadilla) in 2006. Fourth voyage (1502–1504) On 9 May 1502, Columbus left Cádiz with his flagship Santa María and three other vessels. The ships were crewed by 140 men, including his brother Bartholomew as second in command and his son Fernando. He sailed to Asilah on the Moroccan coast to rescue Portuguese soldiers said to be besieged by the Moors. The siege had been lifted by the time they arrived, so the Spaniards stayed only a day and continued on to the Canary Islands. On 15 June, the fleet arrived at Martinique, where it lingered for several days. A hurricane was forming, so Columbus continued westward, hoping to find shelter on Hispaniola. He arrived at Santo Domingo on 29 June, but was denied port, and the new governor, Nicolás de Ovando, refused to listen to his warning that a hurricane was approaching. Instead, while Columbus's ships sheltered at the mouth of the Rio Jaina, the first Spanish treasure fleet sailed into the hurricane. Columbus's ships survived with only minor damage, while 20 of the 30 ships in the governor's fleet were lost along with 500 lives (including that of Francisco de Bobadilla). Although a few surviving ships managed to straggle back to Santo Domingo, Aguja, the fragile ship carrying Columbus's personal belongings and his 4,000 pesos in gold, was the sole vessel to reach Spain.
The gold was his tenth (décimo) of the profits from Hispaniola, equal to 240,000 maravedis, guaranteed by the Catholic Monarchs in 1492. After a brief stop at Jamaica, Columbus sailed to Central America, arriving at the coast of Honduras on 30 July. Here Bartholomew found native merchants and a large canoe. On 14 August, Columbus landed on the continental mainland at Punta Caxinas, now Puerto Castilla, Honduras. He spent two months exploring the coasts of Honduras, Nicaragua, and Costa Rica, seeking a strait in the western Caribbean through which he could sail to the Indian Ocean. Sailing south along the Nicaraguan coast, he found a channel that led into Almirante Bay in Panama on 5 October. As soon as his ships anchored in Almirante Bay, Columbus encountered Ngäbe people in canoes who were wearing gold ornaments. In January 1503, he established a garrison at the mouth of the Belén River. Columbus left for Hispaniola on 16 April. On 10 May he sighted the Cayman Islands, naming them "Las Tortugas" after the numerous sea turtles there. His ships sustained damage in a storm off the coast of Cuba. Unable to travel farther, on 25 June 1503 they were beached in Saint Ann Parish, Jamaica. For a year Columbus and 230 of his men remained stranded on Jamaica. Diego Méndez de Segura, who had shipped out as a personal secretary to Columbus, and a Spanish shipmate called Bartolomé Flisco, along with six natives, paddled a canoe to get help from Hispaniola. The governor, Nicolás de Ovando y Cáceres, detested Columbus and obstructed all efforts to rescue him and his men. In the meantime Columbus, in a desperate effort to induce the natives to continue provisioning him and his hungry men, won their favor by predicting a lunar eclipse for 29 February 1504, using Abraham Zacuto's astronomical charts. Despite the governor's obstruction, Christopher Columbus and his men were rescued on 28 June 1504, and arrived in Sanlúcar, Spain, on 7 November. Later life, illness, and death Columbus had always claimed that the conversion of non-believers was one reason for his explorations, and he grew increasingly religious in his later years. Probably with the assistance of his son Diego and his friend the Carthusian monk Gaspar Gorricio, Columbus produced two books during his later years: a Book of Privileges (1502), detailing and documenting the rewards from the Spanish Crown to which he believed he and his heirs were entitled, and a Book of Prophecies (1505), in which passages from the Bible were used to place his achievements as an explorer in the context of Christian eschatology. In his later years, Columbus demanded that the Crown of Castile give him his tenth of all the riches and trade goods yielded by the new lands, as stipulated in the Capitulations of Santa Fe. Because he had been relieved of his duties as governor, the Crown did not feel bound by that contract and his demands were rejected. After his death, his heirs sued the Crown for a part of the profits from trade with America, as well as other rewards. This led to a protracted series of legal disputes known as the pleitos colombinos ("Columbian lawsuits"). During a violent storm on his first return voyage, Columbus, then 41, had suffered an attack of what was believed at the time to be gout. In subsequent years, he was plagued with what was thought to be influenza and other fevers, bleeding from the eyes, temporary blindness and prolonged attacks of gout.
The attacks increased in duration and severity, sometimes leaving Columbus bedridden for months at a time, and culminated in his death 14 years later. Based on Columbus's lifestyle and the described symptoms, some modern commentators suspect that he suffered from reactive arthritis, rather than gout. Reactive arthritis is a joint inflammation caused by intestinal bacterial infections or after acquiring certain sexually transmitted diseases (primarily chlamydia or gonorrhea). In 2006, Frank C. Arnett, a medical doctor, and historian Charles Merrill, published their paper in The American Journal of the Medical Sciences proposing that Columbus had a form of reactive arthritis; Merrill made the case in that same paper that Columbus was the son of Catalans and his mother possibly a member of a prominent converso (converted Jew) family. "It seems likely that [Columbus] acquired reactive arthritis from food poisoning on one of his ocean voyages because of poor sanitation and improper food preparation", says Arnett, a rheumatologist and professor of internal medicine, pathology and laboratory medicine at the University of Texas Medical School at Houston. Some historians such as H. Micheal Tarver and Emily Slape, as well as medical doctors such as Arnett and Antonio Rodríguez Cuartero, believe that Columbus had such a form of reactive arthritis, but according to other authorities, this is "speculative", or "very speculative". After his arrival to Sanlúcar from his fourth voyage (and Queen Isabella's death), an ill Columbus settled in Seville in April 1505. He stubbornly continued to make pleas to the Crown to defend his own personal privileges and his family's. He moved to Segovia (where the court was at the time) on a mule by early 1506, and, on the occasion of the wedding of King Ferdinand with Germaine of Foix in Valladolid, Spain, in March 1506, Columbus moved to that city to persist with his demands. On 20 May 1506, aged 54, Columbus died in Valladolid. Location of remains Columbus's remains were first buried at a convent in Valladolid, then moved to the monastery of La Cartuja in Seville (southern Spain) by the will of his son Diego. They may have been exhumed in 1513 and interred at the Seville Cathedral. In about 1536, the remains of both Columbus and his son Diego were moved to a cathedral in Colonial Santo Domingo, in the present-day Dominican Republic; Columbus had requested to be buried on the island. By some accounts, in 1793, when France took over the entire island of Hispaniola, Columbus's remains were moved to Havana, Cuba. After Cuba became independent following the Spanish–American War in 1898, at least some of these remains were moved back to the Seville Cathedral, where they were placed on an elaborate catafalque. In June 2003, DNA samples were taken from these remains as well as those of Columbus's brother Diego and younger son Fernando. Initial observations suggested that the bones did not appear to match Columbus's physique or age at death. DNA extraction proved difficult; only short fragments of mitochondrial DNA could be isolated. These matched corresponding DNA from Columbus's brother, supporting that both individuals had shared the same mother. Such evidence, together with anthropologic and historic analyses, led the researchers to conclude that the remains belonged to Christopher Columbus. In 1877, a priest discovered a lead box at Santo Domingo inscribed: "Discoverer of America, First Admiral". 
Inscriptions found the next year read "Last of the remains of the first admiral, Sire Christopher Columbus, discoverer." The box contained bones of an arm and a leg, as well as a bullet. These remains were considered legitimate by physician and U.S. Assistant Secretary of State John Eugene Osborne, who suggested in 1913 that they travel through the Panama Canal as a part of its opening ceremony. These remains were kept at the Basilica Cathedral of Santa María la Menor (in the Colonial City of Santo Domingo) before being moved to the Columbus Lighthouse (Santo Domingo Este, inaugurated in 1992). The authorities in Santo Domingo have never allowed these remains to be DNA-tested, so it is unconfirmed whether they are from Columbus's body as well. Commemoration The figure of Columbus was not ignored in the British colonies during the colonial era: Columbus became a unifying symbol early in the history of the colonies that became the United States when Puritan preachers began to use his life story as a model for a "developing American spirit". In the spring of 1692, Puritan preacher Cotton Mather described Columbus's voyage as one of three shaping events of the modern age, connecting Columbus's voyage and the Puritans' migration to North America, seeing them together as the key to a grand design. The use of Columbus as a founding figure of New World nations spread rapidly after the American Revolution. This was out of a desire to develop a national history and founding myth with fewer ties to Britain. His name was the basis for the female national personification of the United States, Columbia, in use since the 1730s with reference to the original Thirteen Colonies, and also a historical name applied to the Americas and to the New World. Columbia, South Carolina and Columbia Rediviva, the ship for which the Columbia River was named, are named for Columbus. Columbus's name was given to the newly born Republic of Colombia in the early 19th century, inspired by the political project of "Colombeia" developed by revolutionary Francisco de Miranda, which was put at the service of the emancipation of continental Hispanic America. To commemorate the 400th anniversary of the landing of Columbus, the 1893 World's Fair in Chicago was named the World's Columbian Exposition. The U.S. Postal Service issued the first U.S. commemorative stamps, the Columbian Issue, depicting Columbus, Queen Isabella and others in various stages of his several voyages. The policies related to the celebration of the Spanish colonial empire as the vehicle of a nationalist project undertaken in Spain during the Restoration in the late 19th century took form with the commemoration of the 4th centenary on 12 October 1892 (in which the figure of Columbus was extolled by the Conservative government), eventually becoming the very same national day. Several monuments commemorating the "discovery" were erected in cities such as Palos, Barcelona, Granada, Madrid, Salamanca, Valladolid and Seville in the years around the 400th anniversary. For the Columbus Quincentenary in 1992, a second Columbian issue was released jointly with Italy, Portugal, and Spain. Columbus was celebrated at Seville Expo '92, and Genoa Expo '92. The Boal Mansion Museum, founded in 1951, contains a collection of materials concerning later descendants of Columbus and collateral branches of the family. It features a 16th-century chapel from a Spanish castle reputedly owned by Diego Colón which became the residence of Columbus's descendants. 
The chapel interior was dismantled and moved from Spain in 1909 and re-erected on the Boal estate at Boalsburg, Pennsylvania. Inside it are numerous religious paintings and other objects, including a reliquary with fragments of wood supposedly from the True Cross. The museum also holds a collection of documents mostly relating to Columbus descendants of the late 18th and early 19th centuries. In many countries of the Americas, as well as Spain and Italy, Columbus Day celebrates the anniversary of Columbus's arrival in the Americas on 12 October 1492. Legacy The voyages of Columbus are considered a turning point in human history, marking the beginning of globalization and accompanying demographic, commercial, economic, social, and political changes. His explorations resulted in permanent contact between the two hemispheres, and the term "pre-Columbian" is used to refer to the cultures of the Americas before the arrival of Columbus and his European successors. The ensuing Columbian exchange saw the massive transfer of animals, plants, fungi, diseases, technologies, mineral wealth and ideas. In the first century after his endeavors, Columbus's figure largely languished in the backwaters of history, and his reputation was beset by his failures as a colonial administrator. His legacy was somewhat rescued from oblivion when he began to appear as a character in Italian and Spanish plays and poems from the late 16th century onward. Columbus was subsumed into the Western narrative of colonization and empire building, which invoked notions of translatio imperii and translatio studii to underline who was considered "civilized" and who was not. The Americanization of the figure of Columbus began in the latter decades of the 18th century, after the revolutionary period of the United States, elevating his reputation to the status of a national myth, homo americanus. His landing became a powerful icon as an "image of American genesis". The Discovery of America sculpture, depicting Columbus and a cowering Indian maiden, was commissioned on 3 April 1837, when U.S. President Martin Van Buren sanctioned the engineering of Luigi Persico's design. This representation of Columbus's triumph and the Indian's recoil is a demonstration of white superiority over savage, naive Indians. As recorded during its unveiling in 1844, the sculpture extends to "represent the meeting of the two races", as Persico captures their first interaction, highlighting the "moral and intellectual inferiority" of Indians. Placed outside the U.S. Capitol building, where it remained until its removal in the mid-20th century, the sculpture reflected the contemporary view of whites in the U.S. toward the Natives; they are labeled "merciless Indian savages" in the United States Declaration of Independence. In 1836, Pennsylvania senator and future U.S. President James Buchanan, who proposed the sculpture, described it as representing "the great discoverer when he first bounded with ecstasy upon the shore, all his toils past, presenting a hemisphere to the astonished world, with the name America inscribed upon it. Whilst he is thus standing upon the shore, a female savage, with awe and wonder depicted in her countenance, is gazing upon him." The American Columbus myth was reconfigured later in the century when he was enlisted as an ethnic hero by immigrants to the United States who were not of Anglo-Saxon stock, such as Jewish, Italian, and Irish people, who claimed Columbus as a sort of ethnic founding father. 
Catholics unsuccessfully tried to promote him for canonization in the 19th century. From the 1990s onward, a narrative of Columbus being responsible for the genocide of indigenous peoples and environmental destruction began to compete with the then predominant discourse of Columbus as Christ-bearer, scientist, or father of America. This narrative features the negative effects of Columbus' conquests on native populations. Exposed to Old World diseases, the indigenous populations of the New World collapsed, and were largely replaced by Europeans and Africans, who brought with them new methods of farming, business, governance, and religious worship. Originality of discovery of America Though Christopher Columbus came to be considered the European discoverer of America in Western popular culture, his historical legacy is more nuanced. After settling Iceland, the Norse settled the uninhabited southern part of Greenland beginning in the 10th century. Norsemen are believed to have then set sail from Greenland and Iceland to become the first known Europeans to reach the North American mainland, nearly 500 years before Columbus reached the Caribbean. The 1960s discovery of a Norse settlement dating to c. 1000 AD at L'Anse aux Meadows, Newfoundland, partially corroborates accounts within the Icelandic sagas of Erik the Red's colonization of Greenland and his son Leif Erikson's subsequent exploration of a place he called Vinland. In the 19th century, amid a revival of interest in Norse culture, Carl Christian Rafn and Benjamin Franklin DeCosta wrote works establishing that the Norse had preceded Columbus in colonizing the Americas. Following this, in 1874 Rasmus Bjørn Anderson argued that Columbus must have known of the North American continent before he started his voyage of discovery. Most modern scholars doubt Columbus had knowledge of the Norse settlements in America, with his arrival to the continent being most likely an independent discovery. Europeans devised explanations for the origins of the Native Americans and their geographical distribution with narratives that often served to reinforce their own preconceptions built on ancient intellectual foundations. In modern Latin America, the non-Native populations of some countries often demonstrate an ambiguous attitude toward the perspectives of indigenous peoples regarding the so-called "discovery" by Columbus and the era of colonialism that followed. In his 1960 monograph, Mexican philosopher and historian Edmundo O'Gorman explicitly rejects the Columbus discovery myth, arguing that the idea that Columbus discovered America was a misleading legend fixed in the public mind through the works of American author Washington Irving during the 19th century. O'Gorman argues that to assert Columbus "discovered America" is to shape the facts concerning the events of 1492 to make them conform to an interpretation that arose many years later. For him, the Eurocentric view of the discovery of America sustains systems of domination in ways that favor Europeans. In a 1992 article for The UNESCO Courier, Félix Fernández-Shaw argues that the word "discovery" prioritizes European explorers as the "heroes" of the contact between the Old and New World. He suggests that the word "encounter" is more appropriate, being a more universal term which includes Native Americans in the narrative. 
America as a distinct land Historians have traditionally argued that Columbus remained convinced until his death that his journeys had been along the east coast of Asia as he originally intended (excluding arguments such as Anderson's). On his third voyage he briefly referred to South America as a "hitherto unknown" continent, while also rationalizing that it was the "Earthly Paradise" located "at the end of the Orient". Columbus continued to claim in his later writings that he had reached Asia; in a 1502 letter to Pope Alexander VI, he asserts that Cuba is the east coast of Asia. On the other hand, in a document in the Book of Privileges (1502), Columbus refers to the New World as the Indias Occidentales ('West Indies'), which he says "were unknown to all the world". Shape of the Earth Washington Irving's 1828 biography of Columbus popularized the idea that Columbus had difficulty obtaining support for his plan because many Catholic theologians insisted that the Earth was flat, but this is a popular misconception which can be traced back to 17th-century Protestants campaigning against Catholicism. In fact, the spherical shape of the Earth had been known to scholars since antiquity, and was common knowledge among sailors, including Columbus. Coincidentally, the oldest surviving globe of the Earth, the Erdapfel, was made in 1492, just before Columbus's return to Europe from his first voyage. As such it contains no sign of the Americas and yet demonstrates the common belief in a spherical Earth. Making observations with a quadrant on his third voyage, Columbus inaccurately measured the polar radius of the North Star's diurnal motion to be five degrees, which was double the value of another erroneous reading he had made from further north. This led him to describe the figure of the Earth as pear-shaped, with the "stalk" portion ascending towards Heaven. In fact, the Earth is ever so slightly pear-shaped, with its "stalk" pointing north. Criticism and defense Columbus has been criticized both for his brutality and for initiating the depopulation of the indigenous peoples of the Caribbean, whether by imported diseases or intentional violence. According to scholars of Native American history, George Tinker and Mark Freedman, Columbus was responsible for creating a cycle of "murder, violence, and slavery" to maximize exploitation of the Caribbean islands' resources, and that Native deaths on the scale at which they occurred would not have been caused by new diseases alone. Further, they describe the proposition that disease and not genocide caused these deaths as "American holocaust denial". Historian Kris Lane disputes whether it is appropriate to use the term "genocide" when the atrocities were not Columbus's intent, but resulted from his decrees, family business goals, and negligence. Other scholars defend Columbus's actions or allege that the worst accusations against him are not based in fact while others claim that "he has been blamed for events far beyond his own reach or knowledge". As a result of the protests and riots that followed the murder of George Floyd in 2020, many public monuments of Christopher Columbus have been removed. Brutality Some historians have criticized Columbus for initiating the widespread colonization of the Americas and for abusing its native population. On St. Croix, Columbus's friend Michele da Cuneo—according to his own account—kept an indigenous woman he captured, whom Columbus "gave to [him]", then brutally raped her. 
According to some historians, based on Bartolomé de las Casas's account, the punishment for an indigenous person aged 14 or older who failed to deliver a hawk's bell, or cascabela, worth of gold dust every six months was to have their hands cut off, often leaving them to bleed to death. Other historians dispute such accounts. For example, a study of Spanish archival sources showed that the cascabela quotas were imposed by Guarionex, not Columbus, and that there is no mention in the primary sources of punishment by cutting off hands for failing to pay. Columbus had an economic interest in the enslavement of the Hispaniola natives and for that reason was not eager to baptize them, which attracted criticism from some churchmen. Consuelo Varela, a Spanish historian, stated that "Columbus's government was characterized by a form of tyranny. Even those who loved him had to admit the atrocities that had taken place." Other historians have argued that some of the accounts of the brutality of Columbus and his brothers have been exaggerated as part of the Black Legend, a tendency towards anti-Spanish and anti-Catholic sentiment in historical sources dating as far back as the 16th century, which they speculate may continue to taint scholarship into the present day. According to historian Emily Berquist Soule, the immense Portuguese profits from the maritime trade in African slaves along the West African coast served as an inspiration for Columbus to create a counterpart of this apparatus in the New World using indigenous American slaves. Historian William J. Connell has argued that while Columbus "brought the entrepreneurial form of slavery to the New World", this "was a phenomenon of the times", further arguing that "we have to be very careful about applying 20th-century understandings of morality to the morality of the 15th century." In a less popular defense of colonization, Spanish ambassador María Jesús Figa López-Palop has argued, "Normally we melded with the cultures in America, we stayed there, we spread our language and culture and religion." British historian Basil Davidson has dubbed Columbus the "father of the slave trade", citing the fact that the first license to ship enslaved Africans to the Caribbean was issued by the Catholic Monarchs in 1501 to the first royal governor of Hispaniola, Nicolás de Ovando. Depopulation Around the turn of the 21st century, estimates for the pre-contact population of Hispaniola ranged between 250,000 and two million, but genetic analysis published in late 2020 suggests that smaller figures are more likely, perhaps as low as 10,000–50,000 for Hispaniola and Puerto Rico combined. Based on the previous figures of a few hundred thousand, some have estimated that a third or more of the natives in Haiti were dead within the first two years of Columbus's governorship. Contributors to depopulation included disease, warfare, and harsh enslavement. Indirect evidence suggests that some serious illness may have arrived with the 1,500 colonists who accompanied Columbus's second expedition in 1493. Charles C. Mann writes that "It was as if the suffering these diseases had caused in Eurasia over the past millennia were concentrated into the span of decades." A third of the natives forced to work in gold and silver mines died every six months. Within three to six decades, the surviving Arawak population numbered only in the hundreds. 
The indigenous population of the Americas overall is thought to have been reduced by about 90% in the century after Columbus's arrival. Among indigenous peoples, Columbus is often viewed as a key agent of genocide. Samuel Eliot Morison, a Harvard historian and author of a multivolume biography on Columbus, writes, "The cruel policy initiated by Columbus and pursued by his successors resulted in complete genocide." According to Noble David Cook, "There were too few Spaniards to have killed the millions who were reported to have died in the first century after Old and New World contact." He instead estimates that the death toll was caused by smallpox, which may have caused a pandemic only after the arrival of Hernán Cortés in 1519. According to some estimates, smallpox had an 80–90% fatality rate in Native American populations. The natives had no acquired immunity to these new diseases and suffered high fatalities. There is also evidence that they had poor diets and were overworked. Historian Andrés Reséndez of University of California, Davis, says the available evidence suggests "slavery has emerged as major killer" of the indigenous populations of the Caribbean between 1492 and 1550 more so than diseases such as smallpox, influenza and malaria. He says that indigenous populations did not experience a rebound like European populations did following the Black Death because unlike the latter, a large portion of the former were subjected to deadly forced labor in the mines. The diseases that devastated the Native Americans came in multiple waves at different times, sometimes as much as centuries apart, which would mean that survivors of one disease may have been killed by others, preventing the population from recovering. Historian David Stannard describes the depopulation of the indigenous Americans as "neither inadvertent nor inevitable", saying it was the result of both disease and intentional genocide. Navigational expertise Biographers and historians have a wide range of opinions about Columbus's expertise and experience navigating and captaining ships. One scholar lists some European works ranging from the 1890s to 1980s that support Columbus's experience and skill as among the best in Genoa, while listing some American works over a similar timeframe that portray the explorer as an untrained entrepreneur, having only minor crew or passenger experience prior to his noted journeys. According to Morison, Columbus's success in utilizing the trade winds might owe significantly to luck. Physical appearance Contemporary descriptions of Columbus, including those by his son Fernando and Bartolomé de las Casas, describe him as taller than average, with light skin (often sunburnt), blue or hazel eyes, high cheekbones and freckled face, an aquiline nose, and blond to reddish hair and beard (until about the age of 30, when it began to whiten). One Spanish commentator described his eyes using the word garzos, now usually translated as "light blue", but it seems to have indicated light grey-green or hazel eyes to Columbus's contemporaries. The word rubios can mean "blond", "fair", or "ruddy". Although an abundance of artwork depicts Columbus, no authentic contemporary portrait is known. A well-known image of Columbus is a portrait by Sebastiano del Piombo, which has been reproduced in many textbooks. It agrees with descriptions of Columbus in that it shows a large man with auburn hair, but the painting dates from 1519 so cannot have been painted from life. 
Furthermore, the inscription identifying the subject as Columbus was probably added later, and the face shown differs from that of other images. Sometime between 1531 and 1536, Alejo Fernández painted an altarpiece, The Virgin of the Navigators, that includes a depiction of Columbus. The painting was commissioned for a chapel in Seville's Casa de Contratación (House of Trade) in the Alcázar of Seville and remains there. At the World's Columbian Exposition in 1893, 71 alleged portraits of Columbus were displayed; most of them did not match contemporary descriptions. See also Christopher Columbus in fiction List of monuments and memorials to Christopher Columbus Egg of Columbus Diego Columbus Ferdinand Columbus Columbus's letter on the first voyage Christopher Columbus House History of the Americas Peopling of the Americas Lugares colombinos Notes References Sources in Crosby, A.W. (1987) The Columbian Voyages: the Columbian Exchange, and their Historians. Washington, DC: American Historical Association. Fuson, Robert H. (1992) The Log of Christopher Columbus. International Marine Publishing Further reading Wey, Gómez Nicolás (2008). The tropics of empire: Why Columbus sailed south to the Indies. Cambridge, MA: MIT Press. Wilford, John Noble (1991), The Mysterious History of Columbus: An Exploration of the Man, the Myth, the Legacy, New York: Alfred A. Knopf. External links Journals and Other Documents on the Life and Voyages of Christopher Columbus, translated and edited by Samuel Eliot Morison in PDF format Excerpts from the log of Christopher Columbus's first voyage The Letter of Columbus to Luis de Sant Angel Announcing His Discovery Columbus Monuments Pages (overview of monuments for Columbus all over the world) "But for Columbus There Would Be No America", Tiziano Thomas Dossena, Bridgepugliausa.it, 2012. 1451 births 1506 deaths 1490s in Cuba 1490s in the Caribbean 1492 in North America 15th-century apocalypticists 15th-century explorers 15th-century Genoese people 16th-century Genoese people 15th-century Roman Catholics Spanish exploration in the Age of Discovery Burials at Seville Cathedral Colonial governors of Santo Domingo Christopher Explorers of Central America Italian expatriates in Spain Italian explorers of North America Italian explorers of South America Italian people imprisoned abroad Italian Roman Catholics Explorers from the Republic of Genoa 16th-century diarists Prisoners and detainees of Spain
5636
https://en.wikipedia.org/wiki/Chemist
Chemist
A chemist (from Greek chēm(ía), 'alchemy'; replacing chymist, from Medieval Latin alchimista) is a scientist trained in the study of chemistry. Chemists study the composition of matter and its properties. Chemists carefully describe the properties they study in terms of quantities, with detail on the level of molecules and their component atoms. Chemists carefully measure substance proportions, chemical reaction rates, and other chemical properties. In Commonwealth English, pharmacists are often called chemists. Chemists use their knowledge to learn the composition and properties of unfamiliar substances, as well as to reproduce and synthesize large quantities of useful naturally occurring substances and create new artificial substances and useful processes. Chemists may specialize in any number of subdisciplines of chemistry. Materials scientists and metallurgists share much of the same education and skills with chemists. The work of chemists is often related to the work of chemical engineers, who are primarily concerned with the proper design, construction and evaluation of the most cost-effective large-scale chemical plants and work closely with industrial chemists on the development of new processes and methods for the commercial-scale manufacture of chemicals and related products. History of chemistry The roots of chemistry can be traced to the phenomenon of burning. Fire was a mystical force that transformed one substance into another and thus was of primary interest to mankind. It was fire that led to the discovery of iron and glass. After gold was discovered and became a precious metal, many people were interested in finding a method that could convert other substances into gold. This led to the protoscience called alchemy. The word chemist is derived from the Neo-Latin noun chimista, an abbreviation of alchimista (alchemist). Alchemists discovered many chemical processes that led to the development of modern chemistry. Chemistry as we know it today was invented by Antoine Lavoisier with his law of conservation of mass in 1783. The discovery of the chemical elements has a long history, culminating in the creation of the periodic table by Dmitri Mendeleev. The Nobel Prize in Chemistry, created in 1901, gives an excellent overview of chemical discovery since the start of the 20th century. Education Jobs for chemists generally require at least a bachelor's degree in chemistry, but many positions, especially those in research, require a Master of Science or a Doctor of Philosophy (Ph.D.). Most undergraduate programs emphasize mathematics and physics as well as chemistry, partly because chemistry is also known as "the central science", so chemists ought to have a well-rounded knowledge of science. At the master's level and higher, students tend to specialize in a particular field. Fields of specialization include biochemistry, nuclear chemistry, organic chemistry, inorganic chemistry, polymer chemistry, analytical chemistry, physical chemistry, theoretical chemistry, quantum chemistry, environmental chemistry, and thermochemistry. Postdoctoral experience may be required for certain positions. Workers whose work involves chemistry, but not at a level of complexity requiring a chemistry degree, are commonly referred to as chemical technicians. Such technicians, who often hold an associate degree, commonly perform simpler, routine analyses for quality control or in clinical laboratories. 
A chemical technologist has more education or experience than a chemical technician but less than a chemist, often holding a bachelor's degree in a different field of science along with an associate degree in chemistry (or many credits related to chemistry), or having the same education as a chemical technician but more experience. There are also degrees specific to becoming a chemical technologist, which are somewhat distinct from those required when a student is interested in becoming a professional chemist. A chemical technologist is more involved than a chemical technician in the management and operation of the equipment and instrumentation necessary to perform chemical analyses. They are part of the team of a chemical laboratory in which the quality of raw materials, intermediate products and finished products is analyzed. They also perform functions in the areas of environmental quality control and the operational phase of a chemical plant. In addition to all the training usually given to chemical technologists in their respective degrees (or via an associate degree), a chemist is also trained to understand chemical phenomena in more detail, so that the chemist is capable of more planning of the steps needed to achieve a distinct goal via a chemistry-related endeavor. The higher the competency level achieved in the field of chemistry (as assessed via a combination of education, experience and personal achievements), the higher the responsibility given to that chemist and the more complicated the task might be. Chemistry, as a field, has so many applications that different tasks and objectives can be given to workers or scientists with these different levels of education or experience. The specific title of each job varies from position to position, depending on factors such as the kind of industry, the routine level of the task, the current needs of a particular enterprise, the size of the enterprise or hiring firm, the philosophy and management principles of the hiring firm, the visibility of the competency and individual achievements of the job seeker, and economic factors such as recession or depression, among others. This makes it difficult to categorize the exact roles of these chemistry-related workers as standard for a given level of education. Because of these factors affecting exact job titles with distinct responsibilities, some chemists might begin doing technician tasks while other chemists might begin doing more complicated tasks than those of a technician, such as tasks that also involve formal applied research, management, or supervision within the responsibilities of that same job title. The level of supervision given to a chemist also varies in a similar manner, influenced by factors similar to those that affect the tasks demanded of a particular chemist. It is important that those interested in a chemistry degree understand the variety of roles available to them, which vary depending on education and job experience. 
Chemists who hold a bachelor's degree are most commonly involved in positions related to research assistance (working under the guidance of senior chemists in a research-oriented activity); alternatively, they may work on distinct chemistry-related aspects of a business, organization or enterprise, including quality control, quality assurance, manufacturing, production, formulation, inspection, method validation, troubleshooting visits for chemistry-related instruments, regulatory affairs, "on-demand" technical services, and chemical analysis for non-research purposes (e.g., as a legal request, for testing purposes, or for government or non-profit agencies); chemists may also work in environmental evaluation and assessment. Other jobs or roles may include sales and marketing of chemical products and chemistry-related instruments, or technical writing. The more experience they obtain, the more independence and the more leadership or management roles these chemists may take on in those organizations. Some chemists with more experience might change jobs or positions to become a manager of a chemistry-related enterprise, a supervisor, an entrepreneur or a chemistry consultant. Other chemists choose to combine their education and experience as a chemist with a distinct credential to provide different services (e.g., forensic chemists, chemistry-related software development, patent law specialists, environmental law firm staff, scientific news reporting staff, engineering design staff, etc.). In comparison, chemists who have obtained a Master of Science (M.S.) in chemistry or a closely related discipline may find roles that allow them to enjoy more independence, leadership and responsibility earlier in their careers, with fewer years of experience than those whose highest degree is a bachelor's. Sometimes, M.S. chemists are given more complex tasks than chemists whose highest academic degree is a bachelor's and who have the same or nearly the same years of job experience. Some positions are open only to those who hold at least a master's-level degree related to chemistry. Although good chemists without a Ph.D. but with many years of experience may be admitted to some applied research positions, the general rule is that Ph.D. chemists are preferred for research positions and are typically the preferred choice for the highest administrative positions in large enterprises involved in chemistry-related work. Some positions, especially research-oriented ones, are open only to Ph.D. holders. Jobs that involve intensive research and actively seek to lead the discovery of completely new chemical compounds under specifically assigned funds and resources, or jobs that seek to develop new scientific theories, more often than not require a Ph.D. Chemists whose highest academic degree is a Ph.D. are typically found in the research-and-development department of an enterprise and can also hold university positions as professors. Professors at research universities or at large universities usually have a Ph.D., and some research-oriented institutions might require postdoctoral training. Some smaller colleges (including some smaller four-year colleges or smaller non-research universities for undergraduates) as well as community colleges usually hire chemists with an M.S. 
as professors too (and, rarely, so do some large universities that need part-time or temporary instructors or temporary staff), but when positions are scarce and applicants are many, they might prefer Ph.D. holders instead. Employment The three major employers of chemists are academic institutions, industry, especially the chemical industry and the pharmaceutical industry, and government laboratories. Chemistry is typically divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. There is a great deal of overlap between different branches of chemistry, as well as with other scientific fields such as biology, medicine, physics, radiology, and several engineering disciplines. Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry. Biochemistry is the study of the chemicals, chemical reactions and chemical interactions that take place in living organisms. Biochemistry and organic chemistry are closely related, for example, in medicinal chemistry. Inorganic chemistry is the study of the properties and reactions of inorganic compounds. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry. Inorganic chemistry also encompasses the study of atomic and molecular structure and bonding. Medicinal chemistry is the science involved with designing, synthesizing and developing pharmaceutical drugs. Medicinal chemistry involves the identification, synthesis and development of new chemical entities suitable for therapeutic use. It also includes the study of existing drugs, their biological properties, and their quantitative structure-activity relationships. Organic chemistry is the study of the structure, properties, composition, mechanisms, and chemical reactions of carbon compounds. Physical chemistry is the study of the fundamental physical basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, quantum chemistry, statistical mechanics, and spectroscopy. Physical chemistry has a large overlap with theoretical chemistry and molecular physics. Physical chemistry involves the use of calculus in deriving equations. Theoretical chemistry is the study of chemistry via theoretical reasoning (usually within mathematics or physics). In particular, the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has a large overlap with condensed matter physics and molecular physics. See reductionism. All the above major areas of chemistry employ chemists. 
Other fields where chemical degrees are useful include astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemo-informatics, electrochemistry, environmental science, forensic science, geochemistry, green chemistry, history of chemistry, materials science, medical science, molecular biology, molecular genetics, nanotechnology, nuclear chemistry, oenology, organometallic chemistry, petrochemistry, pharmacology, photochemistry, phytochemistry, polymer chemistry, supramolecular chemistry and surface chemistry. Professional societies Chemists may belong to professional societies specifically for professionals and researchers within the field of chemistry, such as the Royal Society of Chemistry in the United Kingdom, the American Chemical Society (ACS) in the United States, or the Institution of Chemists in India. Honors and awards The highest honor awarded to chemists is the Nobel Prize in Chemistry, awarded since 1901, by the Royal Swedish Academy of Sciences. See also List of chemistry topics List of chemists List of Chemistry Societies References External links American Chemical Society Chemical Abstracts Service indexes and abstracts the world's chemistry-related literature and patents Chemists and Materials Scientists from the U.S. Department of Labor's Occupational Outlook Handbook Royal Society of Chemistry History of Chemistry links for chemists Luminaries of the Chemical Sciences accomplishments, biography, and publications from 44 of the most influential chemists Selected Classic Papers from the History of Chemistry Links for Chemists guide to web sites related to chemistry ChemistryViews.org Science occupations
5637
https://en.wikipedia.org/wiki/Cypress%20Hill
Cypress Hill
Cypress Hill is an American hip hop group from South Gate, California, formed in 1988. They have sold over 20 million albums worldwide, and they have obtained multi-platinum and platinum certifications. The group has been critically acclaimed for their first five albums. They are considered to be among the main progenitors of West Coast hip hop and 1990s hip hop. All of the group members advocate for medical and recreational use of cannabis in the United States. In 2019, Cypress Hill became the first hip hop group to have a star on the Hollywood Walk of Fame. History Formation (1988) Senen Reyes (also known as Sen Dog) and Ulpiano Sergio Reyes (also known as Mellow Man Ace) are brothers born in Pinar del Río, Cuba. In 1971, their family immigrated to the United States and initially lived in South Gate, California. In 1988, the two brothers teamed up with New York City native Lawrence Muggerud (also known as DJ Muggs, previously in a rap group named 7A3) and Louis Freese (also known as B-Real) to form a hip-hop group named DVX (Devastating Vocal Excellence). The band soon lost Mellow Man Ace to a solo career, and changed their name to Cypress Hill, after a street in South Gate. Mainstream success with Cypress Hill and Black Sunday, addition of Eric Bobo, and III: Temples of Boom (1989–1996) After recording a demo in 1989, Cypress Hill signed a record deal with Ruffhouse Records. Their self-titled first album was released in August 1991. The lead single was the double A-side "The Phuncky Feel One"/"How I Could Just Kill a Man" which received heavy airplay on urban and college radio, most notably peaking at No. 1 on Billboard Hot Rap Tracks chart and at No. 77 on the Billboard Hot 100. The other two singles released from the album were "Hand on the Pump" and "Latin Lingo", the latter of which combined English and Spanish lyrics, a trait that was continued throughout their career. The success of these singles led Cypress Hill to sell two million copies in the U.S. alone, and it peaked at No. 31 on the Billboard 200 and was certified double platinum by the RIAA. In 1992, Cypress Hill's first contribution to a soundtrack was the song "Shoot 'Em Up" for the movie Juice. The group made their first appearance at Lollapalooza on the side stage in 1992. It was the festival's second year of touring, and featured a diverse lineup of acts such as Red Hot Chili Peppers, Ice Cube, Lush, Tool, Stone Temple Pilots, among others. The trio also supported the Cypress Hill album by touring with the Beastie Boys, who were touring behind their third album Check Your Head. Black Sunday, the group's second album, debuted at No. 1 on the Billboard 200 in 1993, recording the highest Soundscan for a rap group up until that time. "Insane in the Brain" became a crossover hit, peaking at No. 19 on the Billboard Hot 100, at No. 16 on the Dance Club Songs chart, and at No. 1 on the Hot Rap Tracks chart. "Insane in the Brain" also garnered the group their first Grammy nomination. Black Sunday went triple platinum in the U.S. and sold about 3.26 million copies. Cypress Hill headlined the Soul Assassins tour with House of Pain and Funkdoobiest as support, then performed on a college tour with Rage Against the Machine and Seven Year Bitch. Also in 1993, Cypress Hill had two tracks on the Judgment Night soundtrack, teaming up with Pearl Jam (without vocalist Eddie Vedder) on the track "Real Thing" and Sonic Youth on "I Love You Mary Jane". 
The soundtrack was notable for intentionally creating collaborations between the rap/hip-hop and rock/metal genres, and as a result the soundtrack peaked at No. 17 on the Billboard 200 and was certified gold by the RIAA. On October 2, 1993, Cypress Hill performed on the comedy show Saturday Night Live, broadcast by NBC. Prior to their performances, studio executives, label representatives, and the group's own associates constantly asked the trio to not smoke marijuana on-stage. DJ Muggs became irritated due to the constant inquisitions, and he subsequently lit a joint during the group's second song. Up until that point, it was extremely uncommon to see marijuana usage on a live televised broadcast. The incident prompted NBC to ban the group from returning on the show, a distinction shared only by six other artists. The group later played at Woodstock 94, officially making percussionist Eric Bobo a member of the group during the performance. Eric Bobo was known as the son of Willie Bobo and as a touring member of the Beastie Boys, who Cypress Hill previously toured with in 1992. That same year, Rolling Stone named the group as the Best Rap Group in their music awards voted by critics and readers. Cypress Hill then played at Lollapalooza for two successive years, topping the bill in 1995. They also appeared on the "Homerpalooza" episode of The Simpsons. The group received their second Grammy nomination in 1995 for "I Ain't Goin' Out Like That". Cypress Hill's third album III: Temples of Boom was released in 1995 as it peaked at No. 3 on the Billboard 200 and at No. 3 on the Canadian Albums Chart. The album was certified platinum by the RIAA. "Throw Your Set in the Air" was the most successful single off the album, peaking at No. 45 on the Billboard Hot 100 and No. 11 on the Hot Rap Tracks charts. The single also earned Cypress Hill's third Grammy nomination. Shortly after the release of III: Temples of Boom, Sen Dog became frustrated due to the rigorous touring schedule. Just prior to an overseas tour, he departed from the group unexpectedly. Cypress Hill continued their tours throughout 1995 and 1996, with Eric Bobo and also various guest vocalists covering Sen Dog's verses. Sen Dog later formed the rock band SX-10 to explore other musical genres. Later on in 1996, Cypress Hill appeared on the first Smokin' Grooves tour, featuring Ziggy Marley, The Fugees, Busta Rhymes, and A Tribe Called Quest. The group also released a nine track EP Unreleased and Revamped with rare mixes. Focus on solo projects, IV, crossover appeal with Skull & Bones, and Stoned Raiders (1997–2002) In 1997, the members focused on their solo careers. DJ Muggs released Soul Assassins: Chapter 1, with features from Dr. Dre, KRS-One, Wyclef Jean, and Mobb Deep. B-Real appeared with Busta Rhymes, Coolio, LL Cool J, and Method Man on "Hit 'Em High" from the multi-platinum Space Jam Soundtrack. He also appeared with RBX, Nas, and KRS-One on "East Coast Killer, West Coast Killer" from Dr. Dre's Dr. Dre Presents the Aftermath album, and contributed to an album entitled The Psycho Realm with the group of the same name. Sen Dog also released the Get Wood sampler as part of SX-10 on the label Flip Records. In addition, Eric Bobo contributed drums to various rock bands on their albums, such as 311 and Soulfly. In early 1998, Sen Dog returned to Cypress Hill. He cited his therapist and also his creative collaborations with the band SX-10 as catalysts for his rejoining. 
The quartet then embarked on the third annual Smokin' Grooves tour with Public Enemy, Wyclef Jean, Busta Rhymes, and Gang Starr. Cypress Hill released IV in October 1998 which went gold in the U.S. and peaked at No. 11 on the Billboard 200. The lead single off the album was "Dr. Greenthumb", as it peaked at No. 11 on the Hot Rap Tracks chart. It also peaked at No. 70 on the Billboard Hot 100, their last appearance on the chart to date. In 1999, Cypress Hill helped with the PC first-person shooter video game Kingpin: Life of Crime. Three of the band's songs from the 1998 IV album were in the game; "16 Men Till There's No Men Left", "Checkmate", and "Lightning Strikes". The group also did voice work for some of the game's characters. Also in 1999, the band released a greatest hits album in Spanish, Los Grandes Éxitos en Español. In 2000, Cypress Hill fused genres with their fifth album, Skull & Bones, which consisted of two discs. The first disc Skull was composed of rap tracks while Bones explored further the group's forays into rock. The album peaked at No. 5 on the Billboard 200 and at No. 3 on the Canadian Albums Chart, and the album was eventually certified platinum by the RIAA. The first two singles were "(Rock) Superstar" for rock radio and "(Rap) Superstar" for urban radio. Both singles received heavy airplay on both rock and urban radio, enabling Cypress Hill to crossover again. "(Rock) Superstar" peaked at No. 18 on the Modern Rock Tracks chart and "(Rap) Superstar" peaked at No. 43 on the Hot Rap Tracks chart. Due to the rock genre's prominent appearance on Skull & Bones, Cypress Hill employed the members of Sen Dog's band SX-10 as backing musicians for the live shows. Cypress Hill supported Skull & Bones by initially playing a summer tour with Limp Bizkit and Cold called the Back 2 Basics Tour. The tour was controversial as it was sponsored by the file sharing service Napster. In addition, Napster enabled each show of the tour to be free to the fans, and no security guards were employed during the performances. After the tour's conclusion, the acts had not reported any disturbances. Towards the end of 2000, Cypress Hill and MxPx landed a slot opening for The Offspring on the Conspiracy of One Tour. The group also released Live at the Fillmore, a concert disc recorded at San Francisco's The Fillmore in 2000. Cypress Hill continued their experimentation with rock on the Stoned Raiders album in 2001; however, its sales were a disappointment. The album peaked at No. 64 on the Billboard 200, the group's lowest position to that point. Also in 2001, the group made a cameo appearance as themselves in the film How High. Cypress Hill then recorded the track "Just Another Victim" for WWF as a theme song for Tazz, borrowing elements from the 2000 single "(Rock) Superstar". The song would later be featured on the compilation WWF Forceable Entry in March 2002, which peaked at No. 3 on the Billboard 200 and was certified gold by the RIAA. Till Death Do Us Part, DJ Muggs' hiatus, and extensive collaborations on Rise Up (2003–2012) Cypress Hill released Till Death Do Us Part in March 2004 as it peaked at No. 21 on the Billboard 200. It featured appearances by Bob Marley's son Damian Marley, Prodigy of Mobb Deep, and producers The Alchemist and Fredwreck. The album represented a further departure from the group's signature sound. Reggae was a strong influence on its sound, especially on the lead single "What's Your Number?". The track featured Tim Armstrong of Rancid on guitar and backup vocals. 
It was based on the classic song "The Guns of Brixton" from The Clash's album London Calling. "What's Your Number?" saw Cypress Hill crossover into the rock charts again, as the single peaked at No. 23 on the Modern Rock Tracks chart. Afterwards, DJ Muggs took a hiatus from the group to focus on other projects, such as Soul Assassins and his DJ Muggs vs. collaboration albums. In December 2005 another compilation album titled Greatest Hits From the Bong was released. It included nine hits from previous albums and two new tracks. In the summer of 2006, B-Real appeared on Snoop Dogg's single "Vato", which was produced by Pharrell Williams. The group's next album was tentatively scheduled for an early 2007 release, but it was pushed back numerous times. In 2007 Cypress Hill toured as a part of the Rock the Bells tour. They headlined with Public Enemy, Wu-Tang Clan, Nas, and a reunited Rage Against the Machine. On July 25, 2008, Cypress Hill performed at a benefit concert at the House of Blues Chicago, where a majority of the proceeds went to the Chicago Alliance to End Homelessness. In August 2009, a new song by Cypress Hill titled "Get 'Em Up" was made available on iTunes. The song was also featured in the Madden NFL 2010 video game. It was the first sampling of the group's then-upcoming album. Cypress Hill's eighth studio album Rise Up featured contributions from Everlast, Tom Morello, Daron Malakian, Pitbull, Marc Anthony, and Mike Shinoda. Previously, the vast majority of the group's albums were produced by DJ Muggs; however, Rise Up instead featured a large array of guest features and producers, with DJ Muggs only appearing on two tracks. The album was released on Priority Records/EMI Entertainment, as the group was signed to the label by new creative chairman Snoop Dogg. Rise Up was released on April 20, 2010 and it peaked at No. 19 on the Billboard 200. The single "Rise Up" was featured at WWE's pay-per-view Elimination Chamber as the official theme song for the event. It also appeared in the trailer for the movie The Green Hornet. "Rise Up" managed to peak at No. 20 on both the Modern Rock Tracks and Mainstream Rock Tracks charts. "Armada Latina", which featured Pitbull and Marc Anthony, was Cypress Hill's last song to chart in the U.S. to date, peaking at No. 25 on the Hot Rap Tracks chart. Cypress Hill commenced its Rise Up tour in Philadelphia on April 10, 2010. In one particular instance, the group was supposed to stop in Tucson, Arizona but canceled the show in protest of the recent immigration legislation. At the Rock en Seine festival in Paris on August 27, 2010, they had said in an interview that they would anticipate the outcome of the legislation before returning. Also in 2010, Cypress Hill performed at the Reading and Leeds Festivals on August 28 at Leeds and August 29 at Reading. On June 5, 2012, Cypress Hill and dubstep artist Rusko released a collaborative EP entitled Cypress X Rusko. DJ Muggs, who was still on a hiatus, and Eric Bobo were absent on the release. Also in 2012, Cypress Hill collaborated with Deadmau5 on his sixth studio album Album Title Goes Here, lending vocals on "Failbait". Elephants on Acid, Hollywood Walk of Fame, and Back in Black (2013–2022) During the interval between Cypress Hill albums, the four members commenced work on various projects. B-Real formed the band Prophets of Rage alongside three members of Rage Against the Machine and two members of Public Enemy. He also released The Prescription EP under his Dr. Greenthumb persona. 
Sen Dog formed the band Powerflo alongside members of Fear Factory, downset., and Biohazard. DJ Muggs revived his Soul Assassins project as its main producer. Eric Bobo formed a duo named Ritmo Machine. He also contributed to an unreleased album by his father Willie Bobo. On September 28, 2018, Cypress Hill released the album Elephants on Acid, which saw the return of DJ Muggs as main composer and producer. It peaked at No. 120 on the Billboard 200 and at No. 6 on the Top Independent Albums chart. Overall, four different singles were released to promote the album. In April 2019 Cypress Hill received a star on the Hollywood Walk of Fame. Although various solo hip hop artists had received stars, Cypress Hill became the first collective hip hop group to receive a star. The entire lineup of B-Real, Sen Dog, Eric Bobo, and DJ Muggs had all attended the ceremony. In January 2022, the group announced their 10th studio album entitled Back in Black. In addition, Cypress Hill planned to support the album by joining Slipknot alongside Ho99o9 for the second half of the 2022 Knotfest Roadshow. They had previously invited Slipknot to join their Great Smoke-Out festival back in 2009. Back in Black was released on March 18, 2022. It was the group's first album to not feature DJ Muggs on any of the tracks, as producing duties were handled by Black Milk. Back in Black was the lowest charting album of the group's career, and the first to not reach the Billboard 200 chart; however, it peaked at No. 69 on the Top Current Album Sales chart. A documentary about the group, entitled Cypress Hill: Insane in the Brain, was released on the Showtime service in April 2022. Estevan Oriol, Cypress Hill's former tour manager and close associate, directed the film. It had mainly chronicled the group's formation and their first decade of existence. In relation to the Cypress Hill: Insane in the Brain documentary, Cypress Hill digitally released the single "Crossroads" in September 2022. The single featured the return of DJ Muggs on production. Future plans and tentative final album (2023–present) In an interview, Sen Dog claimed that the group will fully reunite with DJ Muggs for an 11th album; however, he stated that it will be the group's final album of their career. Style Rapping One of the band's most striking aspects is B-Real's exaggeratedly high-pitched nasal vocals. In the book Check the Technique, B-Real described his nasal style, saying his rapping voice is "high and annoying...the nasal style I have was just something that I developed...my more natural style wasn't so pleasing to DJ Muggs and Sen Dog's ears" and talking about the nasal style in the book How to Rap, B-Real said "you want to stand out from the others and just be distinct...when you got something that can separate you from everybody else, you gotta use it to your advantage." In the film Art of Rap, B-Real credited the Beastie Boys as an influence when developing his rapping style. Sen Dog's voice is deeper, more violent, and often shouted alongside the rapping; his vocals are often emphasized by adding another background/choir voice to say them. Sen Dog's style is in contrast to B-Real's, who said "Sen's voice is so strong" and "it all blends together" when they are both on the same track. Both B-Real and Sen Dog started writing lyrics in both Spanish and English. 
Initially, B-Real was inspired to start writing raps from watching Sen Dog and Mellow Man Ace writing their lyrics, and originally B-Real was going to just be the writer for the group rather than a rapper. Their lyrics are noted for bringing a "cartoonish" approach to violence by Peter Shapiro and Allmusic. Production The sound and groove of their music, mostly produced by DJ Muggs, has spooky sounds and a stoned aesthetic; with its bass-heavy rhythms and odd sample loops ("Insane in the Brain" has a blues guitar pitched looped in its chorus), it carries a psychedelic value, which is lessened in their rock-oriented albums. The double album Skull & Bones consists of a pure rap disc (Skull) and a separate rock disc (Bones). In the live album Live at The Fillmore, some of the old classics were played in a rock/metal version, with Eric Bobo playing the drums and Sen Dog's band SX-10 as the other instrumentalists. 2010's Rise Up was the most radically different album in regards to production. DJ Muggs had produced the majority of each prior Cypress Hill album, but he only appeared on Rise Up twice. The remaining songs were handled by various other guests. 2018's Elephants on Acid marked the return of DJ Muggs, and the album featured a more psychedelic and hip-hop approach. Legacy Cypress Hill are often credited for being one of the few Latin American hip hop groups to break through with their own stylistic impact on rap music. Cypress Hill have been cited as an influence by artists such as Eminem, Baby Bash, Paul Wall ,Post Malone, Luniz, and Fat Joe. Cypress Hill have also been cited as a strong influence on nu metal bands such as Deftones, Limp Bizkit, System of a Down, Linkin Park, and Korn. Famously, the bassline during the outro of Korn's 1994 single "Blind" was a direct tribute to Cypress Hill's 1993 track "Lick a Shot". Discography Studio albums Cypress Hill (1991) Black Sunday (1993) III: Temples of Boom (1995) IV (1998) Skull & Bones (2000) Stoned Raiders (2001) Till Death Do Us Part (2004) Rise Up (2010) Elephants on Acid (2018) Back in Black (2022) Awards and nominations Billboard Music Awards Grammy Awards MTV Video Music Awards Hollywood Walk of Fame |- |2019 |Cypress Hill |Star | |} Members Current Louis "B-Real" Freese – vocals (1988–present) Senen "Sen Dog" Reyes – vocals (1988–1995, 1998–present) Eric "Eric Bobo" Correa – drums, percussion (1993–present) Current touring Lord "DJ Lord" Asword – turntables, samples, vocals (2019–present) Former Ulpiano "Mellow Man Ace" Reyes – vocals (1988) Lawrence "DJ Muggs" Muggerud – turntables, samples (1988–2004, 2014–2018) Former touring Panchito "Ponch" Gomez – drums, percussion (1993–1994) Frank Mercurio – bass (2000–2002) Jeremy Fleener – guitar (2000–2002) Andy Zambrano – guitar (2000–2002) Julio "Julio G" González – turntables, samples (2004–2014) Michael "Mix Master Mike" Schwartz – turntables, samples (2018–2019) Timeline References External links 1988 establishments in California American cannabis activists American rap rock groups Bloods Cannabis music Columbia Records artists Gangsta rap groups West Coast hip hop groups Hispanic and Latino American rappers Musical groups established in 1988 Musical groups from California People from South Gate, California Priority Records artists Psychedelic rap groups Rappers from Los Angeles Hip hop groups from California
5638
https://en.wikipedia.org/wiki/Combustion
Combustion
Combustion, or burning, is a high-temperature exothermic redox chemical reaction between a fuel (the reductant) and an oxidant, usually atmospheric oxygen, that produces oxidized, often gaseous products, in a mixture termed smoke. Combustion does not always result in fire, because a flame is only visible when substances undergoing combustion vaporize, but when it does, a flame is a characteristic indicator of the reaction. While activation energy must be supplied to initiate combustion (e.g., using a lit match to light a fire), the heat from a flame may provide enough energy to make the reaction self-sustaining. Combustion is often a complicated sequence of elementary radical reactions. Solid fuels, such as wood and coal, first undergo endothermic pyrolysis to produce gaseous fuels whose combustion then supplies the heat required to produce more of them. Combustion is often hot enough that incandescent light in the form of either glowing or a flame is produced. A simple example can be seen in the combustion of hydrogen and oxygen into water vapor, a reaction which is commonly used to fuel rocket engines. This reaction releases 242 kJ/mol of heat and reduces the enthalpy accordingly (at constant temperature and pressure): 2H_2(g) + O_2(g) \rightarrow 2H_2O(g). Uncatalyzed combustion in air requires relatively high temperatures. Complete combustion is stoichiometric with respect to the fuel: there is no remaining fuel and, ideally, no residual oxidant. Thermodynamically, the chemical equilibrium of combustion in air is overwhelmingly on the side of the products. However, complete combustion is almost impossible to achieve, since the chemical equilibrium is not necessarily reached, or the products may contain unburnt compounds such as carbon monoxide, hydrogen and even carbon (soot or ash). Thus, the produced smoke is usually toxic and contains unburned or partially oxidized products. Any combustion at high temperatures in atmospheric air, which is 78 percent nitrogen, will also create small amounts of several nitrogen oxides, commonly referred to as NOx, since the combustion of nitrogen is thermodynamically favored at high, but not low, temperatures. Since burning is rarely clean, fuel gas cleaning or catalytic converters may be required by law. Fires occur naturally, ignited by lightning strikes or by volcanic products. Combustion (fire) was the first controlled chemical reaction discovered by humans, in the form of campfires and bonfires, and continues to be the main method for producing energy for humanity. Usually, the fuel is carbon, hydrocarbons, or more complicated mixtures such as wood that contain partially oxidized hydrocarbons. The thermal energy produced from the combustion of either fossil fuels such as coal or oil, or of renewable fuels such as firewood, is harvested for diverse uses such as cooking, production of electricity, or industrial or domestic heating. Combustion is also currently the only reaction used to power rockets. Combustion is also used to destroy (incinerate) waste, both nonhazardous and hazardous. Oxidants for combustion have high oxidation potential and include atmospheric or pure oxygen, chlorine, fluorine, chlorine trifluoride, nitrous oxide and nitric acid. For instance, hydrogen burns in chlorine to form hydrogen chloride with the liberation of heat and light characteristic of combustion. Although usually not catalyzed, combustion can be catalyzed by platinum or vanadium, as in the contact process. 
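As an illustrative sketch (a worked example added here for clarity, not drawn from the article's sources), the overall stoichiometry of complete combustion can be written for a simple fuel such as methane burning in air, approximating air as 3.76 moles of nitrogen per mole of oxygen: CH_4 + 2(O_2 + 3.76N_2) \rightarrow CO_2 + 2H_2O + 7.52N_2. On this basis, one mole of methane requires 2 moles of oxygen, or about 9.52 moles of air; supplying 10% "excess air" (a term discussed below) would mean providing roughly 1.1 × 9.52 ≈ 10.5 moles of air per mole of fuel. 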
Types Complete and incomplete Complete In complete combustion, the reactant burns in oxygen and produces a limited number of products. When a hydrocarbon burns in oxygen, the reaction will primarily yield carbon dioxide and water. When elements are burned, the products are primarily the most common oxides. Carbon will yield carbon dioxide, sulfur will yield sulfur dioxide, and iron will yield iron(III) oxide. Nitrogen is not considered to be a combustible substance when oxygen is the oxidant. Still, small amounts of various nitrogen oxides (commonly designated NOx species) form when air is the oxidant. Combustion is not necessarily favorable to the maximum degree of oxidation, and it can be temperature-dependent. For example, sulfur trioxide is not produced quantitatively by the combustion of sulfur. species appear in significant amounts above about , and more is produced at higher temperatures. The amount of is also a function of oxygen excess. In most industrial applications and in fires, air is the source of oxygen. In air, each mole of oxygen is mixed with approximately of nitrogen. Nitrogen does not take part in combustion, but at high temperatures, some nitrogen will be converted to (mostly , with much smaller amounts of ). On the other hand, when there is insufficient oxygen to combust the fuel completely, some fuel carbon is converted to carbon monoxide, and some of the hydrogen remains unreacted. A complete set of equations for the combustion of a hydrocarbon in air, therefore, requires an additional calculation for the distribution of oxygen between the carbon and hydrogen in the fuel. The amount of air required for complete combustion is known as the "theoretical air" or "stoichiometric air". The amount of air above this value that is actually needed for optimal combustion is known as the "excess air", and it can vary from 5% for a natural gas boiler, to 40% for anthracite coal, to 300% for a gas turbine (a short numerical sketch of these quantities appears below). Incomplete Incomplete combustion will occur when there is not enough oxygen to allow the fuel to react completely to produce carbon dioxide and water. It also happens when the combustion is quenched by a heat sink, such as a solid surface or flame trap. As is the case with complete combustion, water is produced by incomplete combustion; however, carbon and carbon monoxide are produced instead of carbon dioxide. For most fuels, such as diesel oil, coal, or wood, pyrolysis occurs before combustion. In incomplete combustion, products of pyrolysis remain unburnt and contaminate the smoke with noxious particulate matter and gases. Partially oxidized compounds are also a concern; partial oxidation of ethanol can produce harmful acetaldehyde, and carbon can produce toxic carbon monoxide. The design of combustion devices, such as burners and internal combustion engines, can improve the quality of combustion. Further improvements are achievable by catalytic after-burning devices (such as catalytic converters) or by the simple partial return of the exhaust gases into the combustion process. Such devices are required by environmental legislation for cars in most countries. They may be necessary to enable large combustion devices, such as thermal power stations, to reach legal emission standards. The degree of combustion can be measured and analyzed with test equipment. HVAC contractors, firefighters and engineers use combustion analyzers to test the efficiency of a burner during the combustion process.
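To make the "theoretical air" and "excess air" bookkeeping concrete, here is a minimal Python sketch. It is an illustration rather than part of the article's sources: it assumes a pure hydrocarbon fuel CxHy, the usual oxygen balance z = x + y/4, and dry air approximated as 3.77 mol of nitrogen per mole of O2 (the same ratio used later in this article). It also reports the dry flue-gas O2 fraction, which is what a combustion analyzer of the kind just mentioned reads.

# Rough stoichiometric-air calculator for a pure hydrocarbon fuel CxHy.
# Assumptions (mine, not the article's): dry air is 1 mol O2 + 3.77 mol "atmospheric
# nitrogen"; the fuel contains no oxygen, nitrogen or sulfur; the fuel itself burns
# completely (all carbon to CO2, all hydrogen to H2O).

N2_PER_O2 = 3.77

def stoichiometric_o2(x, y):
    """Moles of O2 needed to burn one mole of CxHy completely (z = x + y/4)."""
    return x + y / 4.0

def air_required(x, y, excess_air=0.0):
    """Moles of air per mole of fuel, with excess_air given as a fraction (0.05 = 5%)."""
    return stoichiometric_o2(x, y) * (1.0 + excess_air) * (1.0 + N2_PER_O2)

def dry_flue_gas_o2(x, y, excess_air):
    """Mole fraction of O2 in the dry flue gas, i.e. what a combustion analyzer reads."""
    z = stoichiometric_o2(x, y)
    co2 = x                                  # all fuel carbon ends up as CO2
    o2_left = z * excess_air                 # unreacted oxygen
    n2 = z * (1.0 + excess_air) * N2_PER_O2  # nitrogen passes through unchanged
    return o2_left / (co2 + o2_left + n2)    # water is excluded from a dry sample

# Methane (natural gas): stoichiometric air, then the analyzer reading at the 5%
# excess air quoted above for a natural gas boiler.
print(air_required(1, 4))            # ~9.5 mol air per mol CH4 (stoichiometric)
print(dry_flue_gas_o2(1, 4, 0.05))   # ~0.011, i.e. roughly 1% O2 in the dry flue gas
# Propane: 5 mol O2 and ~23.9 mol air per mole of fuel, close to the figures quoted later.
print(stoichiometric_o2(3, 8), air_required(3, 8))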
Also, the efficiency of an internal combustion engine can be measured in this way, and some U.S. states and local municipalities use combustion analysis to define and rate the efficiency of vehicles on the road today. Carbon monoxide is one of the products of incomplete combustion. The formation of carbon monoxide releases less heat than the formation of carbon dioxide, so complete combustion is greatly preferred, especially as carbon monoxide is a poisonous gas. When breathed, carbon monoxide takes the place of oxygen and combines with some of the hemoglobin in the blood, rendering it unable to transport oxygen. Problems associated with incomplete combustion Environmental problems Nitrogen oxides and sulfur oxides produced during combustion combine with water and oxygen in the atmosphere, creating nitric and sulfuric acids, which return to Earth's surface as acid deposition, or "acid rain." Acid deposition harms aquatic organisms and kills trees. By making certain nutrients, such as calcium and phosphorus, less available to plants, it also reduces the productivity of ecosystems and farms. An additional problem associated with nitrogen oxides is that they, along with hydrocarbon pollutants, contribute to the formation of ground-level ozone, a major component of smog. Human health problems Breathing carbon monoxide causes headache, dizziness, vomiting, and nausea. If carbon monoxide levels are high enough, humans become unconscious or die. Exposure to moderate and high levels of carbon monoxide over long periods is positively correlated with the risk of heart disease. People who survive severe carbon monoxide poisoning may suffer long-term health problems. Carbon monoxide from the air is absorbed in the lungs and then binds with hemoglobin in the red blood cells. This reduces the capacity of the red blood cells to carry oxygen throughout the body. Smoldering Smoldering is the slow, low-temperature, flameless form of combustion, sustained by the heat evolved when oxygen directly attacks the surface of a condensed-phase fuel. It is a typically incomplete combustion reaction. Solid materials that can sustain a smoldering reaction include coal, cellulose, wood, cotton, tobacco, peat, duff, humus, synthetic foams, charring polymers (including polyurethane foam) and dust. Common examples of smoldering phenomena are the initiation of residential fires on upholstered furniture by weak heat sources (e.g., a cigarette, a short-circuited wire) and the persistent combustion of biomass behind the flaming fronts of wildfires. Spontaneous Spontaneous combustion is a type of combustion that occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self-heating which rapidly accelerates to high temperatures) and, finally, ignition. For example, phosphorus self-ignites at room temperature without the application of heat. Organic materials undergoing bacterial composting can generate enough heat to reach the point of combustion. Turbulent Combustion resulting in a turbulent flame is the type most used for industrial applications (e.g. gas turbines, gasoline engines, etc.) because the turbulence helps the mixing process between the fuel and oxidizer. Micro-gravity The term 'micro' gravity refers to a gravitational state that is 'low' (i.e., 'micro' in the sense of 'small' and not necessarily a millionth of Earth's normal gravity) such that the influence of buoyancy on physical processes may be considered small relative to other flow processes that would be present at normal gravity.
In such an environment, the thermal and flow transport dynamics can behave quite differently than in normal gravity conditions (e.g., a candle's flame takes the shape of a sphere.). Microgravity combustion research contributes to the understanding of a wide variety of aspects that are relevant to both the environment of a spacecraft (e.g., fire dynamics relevant to crew safety on the International Space Station) and terrestrial (Earth-based) conditions (e.g., droplet combustion dynamics to assist developing new fuel blends for improved combustion, materials fabrication processes, thermal management of electronic systems, multiphase flow boiling dynamics, and many others). Micro-combustion Combustion processes that happen in very small volumes are considered micro-combustion. The high surface-to-volume ratio increases specific heat loss. Quenching distance plays a vital role in stabilizing the flame in such combustion chambers. Chemical equations Stoichiometric combustion of a hydrocarbon in oxygen Generally, the chemical equation for stoichiometric combustion of a hydrocarbon in oxygen is: C_\mathit{x}H_\mathit{y}{} + \mathit{z}O2 -> \mathit{x}CO2{} + \frac{\mathit{y}}{2}H2O where . For example, the stoichiometric burning of propane in oxygen is: \underset{propane\atop (fuel)}{C3H8} + \underset{oxygen}{5O2} -> \underset{carbon\ dioxide}{3CO2} + \underset{water}{4H2O} Stoichiometric combustion of a hydrocarbon in air If the stoichiometric combustion takes place using air as the oxygen source, the nitrogen present in the air (Atmosphere of Earth) can be added to the equation (although it does not react) to show the stoichiometric composition of the fuel in air and the composition of the resultant flue gas. Treating all non-oxygen components in air as nitrogen gives a 'nitrogen' to oxygen ratio of 3.77, i.e. (100% - O2%) / O2% where O2% is 20.95% vol: where . For example, the stoichiometric combustion of propane (C3H8) in air is: The stoichiometric composition of propane in air is 1 / (1 + 5 + 18.87) = 4.02% vol. The stoichiometric combustion reaction for CHO in air: The stoichiometric combustion reaction for CHOS: The stoichiometric combustion reaction for CHONS: The stoichiometric combustion reaction for CHOF: Trace combustion products Various other substances begin to appear in significant amounts in combustion products when the flame temperature is above about . When excess air is used, nitrogen may oxidize to and, to a much lesser extent, to . forms by disproportionation of , and and form by disproportionation of . For example, when of propane is burned with of air (120% of the stoichiometric amount), the combustion products contain 3.3% . At , the equilibrium combustion products contain 0.03% and 0.002% . At , the combustion products contain 0.17% , 0.05% , 0.01% , and 0.004% . Diesel engines are run with an excess of oxygen to combust small particles that tend to form with only a stoichiometric amount of oxygen, necessarily producing nitrogen oxide emissions. Both the United States and European Union enforce limits to vehicle nitrogen oxide emissions, which necessitate the use of special catalytic converters or treatment of the exhaust with urea (see Diesel exhaust fluid). Incomplete combustion of a hydrocarbon in oxygen The incomplete (partial) combustion of a hydrocarbon with oxygen produces a gas mixture containing mainly , , , and . Such gas mixtures are commonly prepared for use as protective atmospheres for the heat-treatment of metals and for gas carburizing. 
The general reaction equation for incomplete combustion of one mole of a hydrocarbon in oxygen is: \underset{fuel}{C_\mathit{x} H_\mathit{y}} + \underset{oxygen}{\mathit{z} O2} -> \underset{carbon \ dioxide}{\mathit{a}CO2} + \underset{carbon\ monoxide}{\mathit{b}CO} + \underset{water}{\mathit{c}H2O} + \underset{hydrogen}{\mathit{d}H2} When z falls below roughly 50% of the stoichiometric value, can become an important combustion product; when z falls below roughly 35% of the stoichiometric value, elemental carbon may become stable. The products of incomplete combustion can be calculated with the aid of a material balance, together with the assumption that the combustion products reach equilibrium. For example, in the combustion of one mole of propane with four moles of O2, seven moles of combustion gas are formed, and z is 80% of the stoichiometric value. The three elemental balance equations are: Carbon: a + b = 3 Hydrogen: 2c + 2d = 8 Oxygen: 2a + b + c = 8 These three equations are insufficient in themselves to calculate the combustion gas composition. However, at the equilibrium position, the water-gas shift reaction gives another equation: CO + H2O -> CO2 + H2; K = \frac{[CO2][H2]}{[CO][H2O]}. For example, at , the value of K is 0.728. Solving, the combustion gas consists of 42.4% H2O, 29.0% CO2, 14.7% H2, and 13.9% CO (a numerical sketch reproducing this calculation is given at the end of this section). Carbon becomes a stable phase at and pressure when z is less than 30% of the stoichiometric value, at which point the combustion products contain more than 98% and and about 0.5% . Fuels Substances or materials which undergo combustion are called fuels. The most common examples are natural gas, propane, kerosene, diesel, petrol, charcoal, coal, wood, etc. Liquid fuels Combustion of a liquid fuel in an oxidizing atmosphere actually happens in the gas phase. It is the vapor that burns, not the liquid. Therefore, a liquid will normally catch fire only above a certain temperature: its flash point. The flash point of a liquid fuel is the lowest temperature at which it can form an ignitable mix with air. It is the minimum temperature at which there is enough evaporated fuel in the air to start combustion. Gaseous fuels Combustion of gaseous fuels may occur through one of four distinctive types of burning: diffusion flame, premixed flame, autoignitive reaction front, or as a detonation. The type of burning that actually occurs depends on the degree to which the fuel and oxidizer are mixed prior to heating: for example, a diffusion flame is formed if the fuel and oxidizer are separated initially, whereas a premixed flame is formed otherwise. Similarly, the type of burning also depends on the pressure: a detonation, for example, is an autoignitive reaction front coupled to a strong shock wave giving it its characteristic high-pressure peak and high detonation velocity. Solid fuels The act of combustion consists of three relatively distinct but overlapping phases: Preheating phase, when the unburned fuel is heated up to its flash point and then fire point. Flammable gases start being evolved in a process similar to dry distillation. Distillation phase or gaseous phase, when the mix of evolved flammable gases with oxygen is ignited. Energy is produced in the form of heat and light. Flames are often visible. Heat transfer from the combustion to the solid maintains the evolution of flammable vapours. Charcoal phase or solid phase, when the output of flammable gases from the material is too low for the persistent presence of flame and the charred fuel does not burn rapidly and just glows and later only smoulders.
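Returning to the incomplete-combustion example above (one mole of propane burned with four moles of O2, with the water-gas shift reaction at equilibrium), the Python sketch below, which is illustrative and not part of the article's sources, solves the three element balances together with the quoted K = 0.728 and closely reproduces the percentages given in the text.

import math

# Incomplete combustion of 1 mol propane with 4 mol O2 (80% of the stoichiometric 5 mol),
# assuming the only products are CO2 (a), CO (b), H2O (c) and H2 (d), as in the text above.
# Element balances: C: a + b = 3, H: 2c + 2d = 8, O: 2a + b + c = 8.
# Water-gas shift equilibrium (CO + H2O -> CO2 + H2): K = (a*d) / (b*c), with K = 0.728.
K = 0.728

# Writing b, c, d in terms of a (b = 3 - a, c = 5 - a, d = a - 1) and substituting into
# the equilibrium expression gives a quadratic: (1 - K)*a^2 + (8*K - 1)*a - 15*K = 0.
A, B, C = 1.0 - K, 8.0 * K - 1.0, -15.0 * K
a = (-B + math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)   # the physically meaningful root

b, c, d = 3.0 - a, 5.0 - a, a - 1.0
total = a + b + c + d                                   # 7 mol of combustion gas in all
for name, moles in (("CO2", a), ("CO", b), ("H2O", c), ("H2", d)):
    print(f"{name}: {100.0 * moles / total:.1f}%")
# Prints approximately CO2 29.0%, CO 13.8%, H2O 42.4%, H2 14.7%.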
Combustion management Efficient process heating requires recovery of the largest possible part of a fuel's heat of combustion into the material being processed. There are many avenues of loss in the operation of a heating process. Typically, the dominant loss is sensible heat leaving with the offgas (i.e., the flue gas). The temperature and quantity of offgas indicates its heat content (enthalpy), so keeping its quantity low minimizes heat loss. In a perfect furnace, the combustion air flow would be matched to the fuel flow to give each fuel molecule the exact amount of oxygen needed to cause complete combustion. However, in the real world, combustion does not proceed in a perfect manner. Unburned fuel (usually and ) discharged from the system represents a heating value loss (as well as a safety hazard). Since combustibles are undesirable in the offgas, while the presence of unreacted oxygen there presents minimal safety and environmental concerns, the first principle of combustion management is to provide more oxygen than is theoretically needed to ensure that all the fuel burns. For methane () combustion, for example, slightly more than two molecules of oxygen are required. The second principle of combustion management, however, is to not use too much oxygen. The correct amount of oxygen requires three types of measurement: first, active control of air and fuel flow; second, offgas oxygen measurement; and third, measurement of offgas combustibles. For each heating process, there exists an optimum condition of minimal offgas heat loss with acceptable levels of combustibles concentration. Minimizing excess oxygen pays an additional benefit: for a given offgas temperature, the NOx level is lowest when excess oxygen is kept lowest. Adherence to these two principles is furthered by making material and heat balances on the combustion process. The material balance directly relates the air/fuel ratio to the percentage of in the combustion gas. The heat balance relates the heat available for the charge to the overall net heat produced by fuel combustion. Additional material and heat balances can be made to quantify the thermal advantage from preheating the combustion air, or enriching it in oxygen. Reaction mechanism Combustion in oxygen is a chain reaction in which many distinct radical intermediates participate. The high energy required for initiation is explained by the unusual structure of the dioxygen molecule. The lowest-energy configuration of the dioxygen molecule is a stable, relatively unreactive diradical in a triplet spin state. Bonding can be described with three bonding electron pairs and two antibonding electrons, with spins aligned, such that the molecule has nonzero total angular momentum. Most fuels, on the other hand, are in a singlet state, with paired spins and zero total angular momentum. Interaction between the two is quantum mechanically a "forbidden transition", i.e. possible with a very low probability. To initiate combustion, energy is required to force dioxygen into a spin-paired state, or singlet oxygen. This intermediate is extremely reactive. The energy is supplied as heat, and the reaction then produces additional heat, which allows it to continue. Combustion of hydrocarbons is thought to be initiated by hydrogen atom abstraction (not proton abstraction) from the fuel to oxygen, to give a hydroperoxide radical (HOO). This reacts further to give hydroperoxides, which break up to give hydroxyl radicals. 
There is a great variety of these processes that produce fuel radicals and oxidizing radicals. Oxidizing species include singlet oxygen, hydroxyl, monatomic oxygen, and hydroperoxyl. Such intermediates are short-lived and cannot be isolated. However, non-radical intermediates are stable and are produced in incomplete combustion. An example is acetaldehyde produced in the combustion of ethanol. An intermediate in the combustion of carbon and hydrocarbons, carbon monoxide, is of special importance because it is a poisonous gas, but also economically useful for the production of syngas. Solid and heavy liquid fuels also undergo a great number of pyrolysis reactions that give more easily oxidized, gaseous fuels. These reactions are endothermic and require constant energy input from the ongoing combustion reactions. A lack of oxygen or other improperly designed conditions result in these noxious and carcinogenic pyrolysis products being emitted as thick, black smoke. The rate of combustion is the amount of a material that undergoes combustion over a period of time. It can be expressed in grams per second (g/s) or kilograms per second (kg/s). Detailed descriptions of combustion processes, from the chemical kinetics perspective, require the formulation of large and intricate webs of elementary reactions. For instance, combustion of hydrocarbon fuels typically involves hundreds of chemical species reacting according to thousands of reactions. The inclusion of such mechanisms within computational flow solvers still represents a challenging task, mainly in two respects. First, the number of degrees of freedom (proportional to the number of chemical species) can be dramatically large; second, the source term due to reactions introduces a wide spread of disparate time scales, which makes the whole dynamical system stiff. As a result, the direct numerical simulation of turbulent reactive flows with heavy fuels soon becomes intractable even for modern supercomputers. Therefore, a plethora of methodologies has been devised for reducing the complexity of combustion mechanisms without resorting to the full level of detail. Examples are provided by: The Relaxation Redistribution Method (RRM) The Intrinsic Low-Dimensional Manifold (ILDM) approach and further developments The invariant-constrained equilibrium edge preimage curve method. A few variational approaches The Computational Singular Perturbation (CSP) method and further developments. The Rate Controlled Constrained Equilibrium (RCCE) and Quasi Equilibrium Manifold (QEM) approach. The G-Scheme. The Method of Invariant Grids (MIG). Kinetic modelling Kinetic modelling may be explored for insight into the reaction mechanisms of thermal decomposition in the combustion of different materials, using, for instance, thermogravimetric analysis. Temperature Assuming perfect combustion conditions, such as complete combustion under adiabatic conditions (i.e., no heat loss or gain), the adiabatic combustion temperature can be determined. The formula that yields this temperature is based on the first law of thermodynamics and takes note of the fact that the heat of combustion is used entirely for heating the fuel, the combustion air or oxygen, and the combustion product gases (commonly referred to as the flue gas). In the case of fossil fuels burnt in air, the combustion temperature depends on all of the following: the heating value; the stoichiometric air-to-fuel ratio; the specific heat capacity of fuel and air; the air and fuel inlet temperatures. A rough numerical illustration of this energy balance is sketched below.
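The sketch below is such a rough illustration (not from the article): it estimates an adiabatic flame temperature for methane in air using a single constant mean heat capacity for the product gases. The heating value and air-to-fuel ratio are standard round numbers, the mean heat capacity is an assumed value, and because real heat capacities rise with temperature and dissociation is ignored, the estimate comes out higher than the commonly cited value of roughly 2,200 K (about 1,950 °C) for methane.

# Constant-heat-capacity estimate of the adiabatic flame temperature for methane in air.
# All property values below are assumed round numbers, not taken from the article:
#   LHV  - lower heating value of methane, ~50 MJ per kg of fuel
#   AFR  - stoichiometric air-to-fuel ratio for methane, ~17.2 kg air per kg fuel
#   CP   - assumed mean specific heat of the product gases, kJ/(kg*K)
# Energy balance: all of the heat of combustion heats (1 + AFR) kg of products per kg
# of fuel, starting from the inlet temperature T_IN.

LHV = 50000.0    # kJ/kg
AFR = 17.2       # kg air per kg fuel (stoichiometric)
CP = 1.3         # kJ/(kg*K), assumed mean value for the flue gas
T_IN = 298.0     # K, air and fuel inlet temperature

def adiabatic_flame_temperature(lhv, afr, cp, t_in):
    """First-law estimate: T_ad = T_in + LHV / ((1 + AFR) * cp)."""
    return t_in + lhv / ((1.0 + afr) * cp)

print(adiabatic_flame_temperature(LHV, AFR, CP, T_IN))         # ~2410 K with this crude model
print(adiabatic_flame_temperature(LHV, AFR * 1.15, CP, T_IN))  # ~2150 K with 15% excess air:
# dilution by the extra air lowers the flame temperature, as the next paragraph notes.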
The adiabatic combustion temperature (also known as the adiabatic flame temperature) increases for higher heating values and inlet air and fuel temperatures and for stoichiometric air ratios approaching one. Most commonly, the adiabatic combustion temperatures for coals are around (for inlet air and fuel at ambient temperatures and for ), around for oil and for natural gas. In industrial fired heaters, power station steam generators, and large gas-fired turbines, the more common way of expressing the usage of more than the stoichiometric combustion air is percent excess combustion air. For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air is being used. Instabilities Combustion instabilities are typically violent pressure oscillations in a combustion chamber. These pressure oscillations can be as high as 180 dB, and long-term exposure to these cyclic pressure and thermal loads reduces the life of engine components. In rockets, such as the F-1 engine used in the Saturn V program, instabilities led to massive damage to the combustion chamber and surrounding components. This problem was solved by re-designing the fuel injector. In liquid jet engines, the droplet size and distribution can be used to attenuate the instabilities. Combustion instabilities are a major concern in ground-based gas turbine engines because of emissions. The tendency is to run lean, with an equivalence ratio less than 1, to reduce the combustion temperature and thus reduce the emissions; however, running the combustion lean makes it very susceptible to combustion instability. The Rayleigh Criterion is the basis for analysis of thermoacoustic combustion instability and is evaluated using the Rayleigh Index over one cycle of instability, G(x) = \frac{1}{T}\int_{T} q'(x,t)\, p'(x,t)\, dt, where q' is the heat release rate perturbation and p' is the pressure fluctuation. When the heat release oscillations are in phase with the pressure oscillations, the Rayleigh Index is positive and the magnitude of the thermoacoustic instability is maximised. On the other hand, if the Rayleigh Index is negative, then thermoacoustic damping occurs. The Rayleigh Criterion implies that thermoacoustic instability can be optimally controlled by having heat release oscillations 180 degrees out of phase with pressure oscillations at the same frequency. This minimizes the Rayleigh Index. See also Related concepts Air–fuel ratio Autoignition temperature Chemical looping combustion Deflagration Detonation Explosion Fire Flame Heterogeneous combustion Markstein number Phlogiston theory (historical) Spontaneous combustion Machines and equipment Boiler Bunsen burner External combustion engine Furnace Gas turbine Internal combustion engine Rocket engine Scientific and engineering societies International Flame Research Foundation The Combustion Institute Other List of light sources References Further reading Chemical reactions
5639
https://en.wikipedia.org/wiki/Cyrillic%20script
Cyrillic script
The Cyrillic script, Slavonic script or the Slavic script is a writing system used for various languages across Eurasia. It is the designated national script in various Slavic, Turkic, Mongolic, Uralic, Caucasian and Iranic-speaking countries in Southeastern Europe, Eastern Europe, the Caucasus, Central Asia, North Asia, and East Asia, and is used by many other minority languages. Around 250 million people in Eurasia use Cyrillic as the official script for their national languages, with Russia accounting for about half of them. With the accession of Bulgaria to the European Union on 1 January 2007, Cyrillic became the third official script of the European Union, following the Latin and Greek alphabets. The Early Cyrillic alphabet was developed during the 9th century AD at the Preslav Literary School in the First Bulgarian Empire during the reign of Tsar Simeon I the Great, probably by the disciples of the two Byzantine brothers Cyril and Methodius, who had previously created the Glagolitic script. Among them were Clement of Ohrid, Naum of Preslav, Angelar, Sava and other scholars. The script is named in honor of Saint Cyril. Etymology Since the script was conceived and popularised by the Slavic followers of Cyril and Methodius, rather than by Cyril and Methodius themselves, its name denotes homage rather than authorship. The name "Cyrillic" often confuses people who are not familiar with the script's history, because it does not identify the country of origin – Bulgaria (in contrast to the "Greek alphabet"). Among the general public, it is often called "the Russian alphabet", because the Russian alphabet is the most widespread and influential alphabet based on the script. In Bulgarian, Macedonian, Russian, Serbian, Czech and Slovak, the Cyrillic alphabet is also known as azbuka, derived from the old names of the first two letters of most Cyrillic alphabets (just as the term alphabet came from the first two Greek letters alpha and beta). In Czech and Slovak, which have never used Cyrillic, "azbuka" refers to Cyrillic and contrasts with "abeceda", which refers to the local Latin script and is composed of the names of the first letters (A, B, C, and D). In Russian, syllabaries, especially the Japanese kana, are commonly referred to as 'syllabic azbukas' rather than 'syllabic scripts'. History The Cyrillic script was created during the First Bulgarian Empire. Modern scholars believe that the Early Cyrillic alphabet was created at the Preslav Literary School, the most important early literary and cultural center of the First Bulgarian Empire and of all Slavs: Unlike the Churchmen in Ohrid, Preslav scholars were much more dependent upon Greek models and quickly abandoned the Glagolitic script in favor of an adaptation of the Greek uncial to the needs of Slavic, which is now known as the Cyrillic alphabet. A number of prominent Bulgarian writers and scholars worked at the school, including Naum of Preslav until 893; Constantine of Preslav; Joan Ekzarh (also transcr. John the Exarch); and Chernorizets Hrabar, among others. The school was also a center of translation, mostly of Byzantine authors. The Cyrillic script is derived from the Greek uncial script letters, augmented by ligatures and consonants from the older Glagolitic alphabet for sounds not found in Greek. Glagolitic and Cyrillic were formalized by the Byzantine Saints Cyril and Methodius and their disciples, such as Saints Naum, Clement, Angelar, and Sava. They spread and taught Christianity in the whole of Bulgaria.
Paul Cubberley posits that although Cyril may have codified and expanded Glagolitic, it was his students in the First Bulgarian Empire under Tsar Simeon the Great that developed Cyrillic from the Greek letters in the 890s as a more suitable script for church books. Cyrillic spread among other Slavic peoples, as well as among non-Slavic Vlachs. The earliest datable Cyrillic inscriptions have been found in the area of Preslav, in the medieval city itself and at nearby Patleina Monastery, both in present-day Shumen Province, as well as in the Ravna Monastery and in the Varna Monastery. The new script became the basis of alphabets used for various languages in Orthodox Church-dominated Eastern Europe, both Slavic and non-Slavic (such as Romanian, until the 1860s). For centuries, Cyrillic was also used by Catholic and Muslim Slavs (see Bosnian Cyrillic). Cyrillic and Glagolitic were used for the Church Slavonic language, especially the Old Church Slavonic variant. Hence expressions such as "И is the tenth Cyrillic letter" typically refer to the order of the Church Slavonic alphabet; not every Cyrillic alphabet uses every letter available in the script. The Cyrillic script came to dominate Glagolitic in the 12th century. The literature produced in Old Church Slavonic soon spread north from Bulgaria and became the lingua franca of the Balkans and Eastern Europe. Bosnian Cyrillic, widely known as Bosančica, is an extinct variant of the Cyrillic alphabet that originated in medieval Bosnia. Paleographers consider that the earliest features of the Bosnian Cyrillic script likely began to appear between the 10th and 11th centuries, and that the Humac tablet (a tablet written in Bosnian Cyrillic) is the first document in this type of script, believed to date from this period. Bosnian Cyrillic was used continuously until the 18th century, with sporadic usage even taking place in the 20th century. With the orthographic reform of Saint Evtimiy of Tarnovo and other prominent representatives of the Tarnovo Literary School of the 14th and 15th centuries, such as Gregory Tsamblak and Constantine of Kostenets, the school influenced Russian, Serbian, Wallachian and Moldavian medieval culture. This is known in Russia as the second South-Slavic influence. In the early 18th century, the Cyrillic script used in Russia was heavily reformed by Peter the Great, who had recently returned from his Grand Embassy in Western Europe. The new letterforms, called the Civil script, became closer to those of the Latin alphabet; several archaic letters were abolished and several new letters designed by Peter himself were introduced. Letters became distinguished as upper and lower case. West European typography culture was also adopted. The pre-reform letterforms, called 'Полуустав', were notably retained in Church Slavonic and are sometimes used in Russian even today, especially if one wants to give a text a 'Slavic' or 'archaic' feel. The alphabet used for the modern Church Slavonic language in Eastern Orthodox and Eastern Catholic rites still resembles early Cyrillic. However, over the course of the following millennium, Cyrillic adapted to changes in spoken language, developed regional variations to suit the features of national languages, and was subjected to academic reform and political decrees.
A notable example of such linguistic reform can be attributed to Vuk Stefanović Karadžić, who updated the Serbian Cyrillic alphabet by removing certain graphemes no longer represented in the vernacular and introducing graphemes specific to Serbian (i.e. Љ Њ Ђ Ћ Џ Ј), distancing it from the Church Slavonic alphabet in use prior to the reform. Today, many languages in the Balkans, Eastern Europe, and northern Eurasia are written in Cyrillic alphabets. Letters Cyrillic script spread throughout the East Slavic and some South Slavic territories, being adopted for writing local languages, such as Old East Slavic. Its adaptation to local languages produced a number of Cyrillic alphabets, discussed below. Capital and lowercase letters were not distinguished in old manuscripts. Yeri () was originally a ligature of Yer and I ( + = ). Iotation was indicated by ligatures formed with the letter І: (not an ancestor of modern Ya, Я, which is derived from ), , (ligature of and ), , . Sometimes different letters were used interchangeably, for example = = , as were typographical variants like = . There were also commonly used ligatures like = . The letters also had numeric values, based not on Cyrillic alphabetical order, but inherited from the letters' Greek ancestors. The early Cyrillic alphabet is difficult to represent on computers. Many of the letterforms differed from those of modern Cyrillic, varied a great deal in manuscripts, and changed over time. Few fonts include glyphs sufficient to reproduce the alphabet. In accordance with Unicode policy, the standard does not include letterform variations or ligatures found in manuscript sources unless they can be shown to conform to the Unicode definition of a character. The Unicode 5.1 standard, released on 4 April 2008, greatly improved computer support for the early Cyrillic and the modern Church Slavonic language. In Microsoft Windows, the Segoe UI user interface font is notable for having complete support for the archaic Cyrillic letters since Windows 8. Currency signs Some currency signs have derived from Cyrillic letters: The Ukrainian hryvnia sign (₴) is from the cursive minuscule Ukrainian Cyrillic letter He (г). The Russian ruble sign (₽) from the majuscule Р. The Kyrgyzstani som sign (⃀) from the majuscule С (es) The Kazakhstani tenge sign (₸) from Т The Mongolian tögrög sign (₮) from Т Letterforms and typography The development of Cyrillic typography passed directly from the medieval stage to the late Baroque, without a Renaissance phase as in Western Europe. Late Medieval Cyrillic letters (categorized as vyaz' and still found on many icon inscriptions today) show a marked tendency to be very tall and narrow, with strokes often shared between adjacent letters. Peter the Great, Tsar of Russia, mandated the use of westernized letter forms (ru) in the early 18th century. Over time, these were largely adopted in the other languages that use the script. Thus, unlike the majority of modern Greek fonts that retained their own set of design principles for lower-case letters (such as the placement of serifs, the shapes of stroke ends, and stroke-thickness rules, although Greek capital letters do use Latin design principles), modern Cyrillic fonts are much the same as modern Latin fonts of the same font family. The development of some Cyrillic computer typefaces from Latin ones has also contributed to the visual Latinization of Cyrillic type. Lowercase forms Cyrillic uppercase and lowercase letter forms are not as differentiated as in Latin typography. 
Upright Cyrillic lowercase letters are essentially small capitals (with exceptions: Cyrillic , , , , , and adopted Western lowercase shapes, lowercase is typically designed under the influence of Latin , lowercase , and are traditional handwritten forms), although a good-quality Cyrillic typeface will still include separate small-caps glyphs. Cyrillic fonts, as well as Latin ones, have roman and italic types (practically all popular modern fonts include parallel sets of Latin and Cyrillic letters, where many glyphs, uppercase as well as lowercase, are shared by both). However, the native font terminology in most Slavic languages (for example, in Russian) does not use the words "roman" and "italic" in this sense. Instead, the nomenclature follows German naming patterns: Roman type is called ("upright type")compare with ("regular type") in German Italic type is called ("cursive") or ("cursive type")from the German word , meaning italic typefaces and not cursive writing Cursive handwriting is ("handwritten type")in German: or , both meaning literally 'running type' A (mechanically) sloped oblique type of sans-serif faces is ("sloped" or "slanted type"). A boldfaced type is called ("semi-bold type"), because there existed fully boldfaced shapes that have been out of use since the beginning of the 20th century. Italic and cursive forms Similarly to Latin fonts, italic and cursive types of many Cyrillic letters (typically lowercase; uppercase only for handwritten or stylish types) are very different from their upright roman types. In certain cases, the correspondence between uppercase and lowercase glyphs does not coincide in Latin and Cyrillic fonts: for example, italic Cyrillic is the lowercase counterpart of not of . Note: in some fonts or styles, , i.e. the lowercase italic Cyrillic , may look like Latin , and , i.e. lowercase italic Cyrillic , may look like small-capital italic . In Standard Serbian, as well as in Macedonian, some italic and cursive letters are allowed to be different, to more closely resemble the handwritten letters. The regular (upright) shapes are generally standardized in small caps form. Notes: Depending on fonts available, the Serbian row may appear identical to the Russian row. Unicode approximations are used in the faux row to ensure it can be rendered properly across all systems. In Bulgarian typography, many lowercase letterforms may more closely resemble the cursive forms on the one hand and Latin glyphs on the other hand, e.g. by having an ascender or descender or by using rounded arcs instead of sharp corners. Sometimes, uppercase letters may have a different shape as well, e.g. more triangular, Д and Л, like Greek delta Δ and lambda Λ. Notes: Depending on fonts available, the Bulgarian row may appear identical to the Russian row. Unicode approximations are used in the faux row to ensure it can be rendered properly across all systems; in some cases, such as ж with k-like ascender, no such approximation exists. Accessing variant forms Computer fonts typically default to the Central/Eastern, Russian letterforms, and require the use of OpenType Layout (OTL) features to display the Western, Bulgarian or Southern, Serbian/Macedonian forms. Depending on the choices of the font manufacturer, they may either be automatically activated by the local variant locl feature for text tagged with an appropriate language code, or the author needs to opt-in by activating a stylistic set ss## or character variant cv## feature. 
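One way to check whether a particular font actually ships those substitution features is to inspect its OpenType layout tables. The sketch below is purely illustrative (the font path is a placeholder, not a real file) and uses the fontTools library to list the GSUB feature tags, so the presence of locl and any ss##/cv## sets can be confirmed before relying on them.

# List OpenType GSUB feature tags in a font with fontTools (pip install fonttools).
# "SomeCyrillicFont.ttf" is a placeholder path used only for illustration.
from fontTools.ttLib import TTFont

font = TTFont("SomeCyrillicFont.ttf")
tags = set()
if "GSUB" in font and font["GSUB"].table.FeatureList is not None:
    # Each FeatureRecord carries a four-letter feature tag such as 'locl'
    # (localized forms), 'ss01'-'ss20' (stylistic sets) or 'cv01'... (character variants).
    for record in font["GSUB"].table.FeatureList.FeatureRecord:
        tags.add(record.FeatureTag)

print("has locl:", "locl" in tags)
print("stylistic sets / character variants:",
      sorted(t for t in tags if t.startswith(("ss", "cv"))))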
These solutions only enjoy partial support and may render with default glyphs in certain software configurations. Cyrillic alphabets Among others, Cyrillic is the standard script for writing the following languages: Slavic languages: Belarusian, Bulgarian, Macedonian, Russian, Rusyn, Serbo-Croatian (Standard Serbian, Bosnian, and Montenegrin), Ukrainian Non-Slavic languages of Russia: Abaza, Adyghe, Avar, Azerbaijani (in Dagestan), Bashkir, Buryat, Chechen, Chuvash, Erzya, Ingush, Kabardian, Kalmyk, Karachay-Balkar, Kildin Sami, Komi, Mari, Moksha, Nogai, Ossetian (in North Ossetia–Alania), Romani, Sakha/Yakut, Tatar, Tuvan, Udmurt, Yuit (Yupik) Non-Slavic languages in other countries: Abkhaz, Aleut (now mostly in church texts), Dungan, Kazakh (to be replaced by Latin script by 2025), Kyrgyz, Mongolian (to also be written with traditional Mongolian script by 2025), Tajik, Tlingit (now only in church texts), Turkmen (officially replaced by Latin script), Uzbek (also officially replaced by Latin script, but still in wide use), Yupik (in Alaska) The Cyrillic script has also been used for languages of Alaska, Slavic Europe (except for Western Slavic and some Southern Slavic), the Caucasus, the languages of Idel-Ural, Siberia, and the Russian Far East. The first alphabet derived from Cyrillic was Abur, used for the Komi language. Other Cyrillic alphabets include the Molodtsov alphabet for the Komi language and various alphabets for Caucasian languages. Usage of Cyrillic versus other scripts Latin script A number of languages written in a Cyrillic alphabet have also been written in a Latin alphabet, such as Azerbaijani, Uzbek, Serbian, and Romanian (in the Republic of Moldova until 1989 and in the Danubian Principalities throughout the 19th century). After the disintegration of the Soviet Union in 1991, some of the former republics officially shifted from Cyrillic to Latin. The transition is complete in most of Moldova (except the breakaway region of Transnistria, where Moldovan Cyrillic is official), Turkmenistan, and Azerbaijan. Uzbekistan still uses both systems, and Kazakhstan has officially begun a transition from Cyrillic to Latin (scheduled to be complete by 2025). The Russian government has mandated that Cyrillic must be used for all public communications in all federal subjects of Russia, to promote closer ties across the federation. This act was controversial for speakers of many Slavic languages; for others, such as Chechen and Ingush speakers, the law had political ramifications. For example, the separatist Chechen government mandated a Latin script which is still used by many Chechens. Standard Serbian uses both the Cyrillic and Latin scripts. Cyrillic is nominally the official script of Serbia's administration according to the Serbian constitution; however, the law does not regulate scripts in standard language, or standard language itself by any means. In practice the scripts are equal, with Latin being used more often in a less official capacity. The Zhuang alphabet, used between the 1950s and 1980s in portions of the People's Republic of China, used a mixture of Latin, phonetic, numeral-based, and Cyrillic letters. The non-Latin letters, including Cyrillic, were removed from the alphabet in 1982 and replaced with Latin letters that closely resembled the letters they replaced. Romanization There are various systems for romanization of Cyrillic text, including transliteration to convey Cyrillic spelling in Latin letters, and transcription to convey pronunciation. 
Standard Cyrillic-to-Latin transliteration systems include: Scientific transliteration, used in linguistics, is based on the Serbo-Croatian Latin alphabet. The Working Group on Romanization Systems of the United Nations recommends different systems for specific languages. These are the most commonly used around the world. ISO 9:1995, from the International Organization for Standardization. American Library Association and Library of Congress Romanization tables for Slavic alphabets (ALA-LC Romanization), used in North American libraries. BGN/PCGN Romanization (1947), United States Board on Geographic Names & Permanent Committee on Geographical Names for British Official Use. GOST 16876, a now defunct Soviet transliteration standard. Replaced by GOST 7.79-2000, which is based on ISO 9. Various informal romanizations of Cyrillic, which adapt the Cyrillic script to Latin and sometimes Greek glyphs for compatibility with small character sets. See also Romanization of Belarusian, Bulgarian, Kyrgyz, Russian, Macedonian and Ukrainian. Cyrillization Representing other writing systems with Cyrillic letters is called Cyrillization. Summary table Ё in Russian is usually spelled as Е; Ё is typically printed in texts for learners and in dictionaries, and in word pairs which are differentiated only by that letter (все – всё). Computer encoding Unicode As of Unicode version , Cyrillic letters, including national and historical alphabets, are encoded across several blocks: Cyrillic: U+0400–U+04FF Cyrillic Supplement: U+0500–U+052F Cyrillic Extended-A: U+2DE0–U+2DFF Cyrillic Extended-B: U+A640–U+A69F Cyrillic Extended-C: U+1C80–U+1C8F Cyrillic Extended-D: U+1E030–U+1E08F Phonetic Extensions: U+1D2B, U+1D78 Combining Half Marks: U+FE2E–U+FE2F The characters in the range U+0400 to U+045F are essentially the characters from ISO 8859-5 moved upward by 864 positions. The characters in the range U+0460 to U+0489 are historic letters, not used now. The characters in the range U+048A to U+052F are additional letters for various languages that are written with Cyrillic script. Unicode as a general rule does not include accented Cyrillic letters. A few exceptions include: combinations that are considered as separate letters of respective alphabets, like Й, Ў, Ё, Ї, Ѓ, Ќ (as well as many letters of non-Slavic alphabets); two most frequent combinations orthographically required to distinguish homonyms in Bulgarian and Macedonian: Ѐ, Ѝ; a few Old and New Church Slavonic combinations: Ѷ, Ѿ, Ѽ. To indicate stressed or long vowels, combining diacritical marks can be used after the respective letter (for example, е́ у́ э́ etc.). Some languages, including Church Slavonic, are still not fully supported. Unicode 5.1, released on 4 April 2008, introduces major changes to the Cyrillic blocks. Revisions to the existing Cyrillic blocks, and the addition of Cyrillic Extended-A (2DE0–2DFF) and Cyrillic Extended-B (A640–A69F), significantly improve support for the early Cyrillic alphabet, Abkhaz, Aleut, Chuvash, Kurdish, and Moksha. Other Other character encoding systems for Cyrillic: CP866: 8-bit Cyrillic character encoding established by Microsoft for use in MS-DOS, also known as GOST-alternative. Cyrillic characters go in their native order, with a "window" for pseudographic characters. ISO/IEC 8859-5: 8-bit Cyrillic character encoding established by the International Organization for Standardization. KOI8-R: 8-bit native Russian character encoding. Invented in the USSR for use on Soviet clones of American IBM and DEC computers.
The Cyrillic characters go in the order of their Latin counterparts, which allowed the text to remain readable after transmission via a 7-bit line that removed the most significant bit from each byte; the result became a very rough, but readable, Latin transliteration of Cyrillic. Standard encoding of the early 1990s for Unix systems and the first Russian Internet encoding. KOI8-U: KOI8-R with the addition of Ukrainian letters. MIK: 8-bit native Bulgarian character encoding for use in Microsoft DOS. Windows-1251: 8-bit Cyrillic character encoding established by Microsoft for use in Microsoft Windows. The simplest 8-bit Cyrillic encoding: 32 capital chars in native order at 0xc0–0xdf, 32 usual chars at 0xe0–0xff, with rarely used "YO" characters somewhere else. No pseudographics. Former standard encoding in some Linux distributions for Belarusian and Bulgarian, but currently displaced by UTF-8. GOST-main. GB 2312: Principally simplified Chinese encodings, but there are also the basic 33 Russian Cyrillic letters (in upper- and lower-case). JIS and Shift JIS: Principally Japanese encodings, but there are also the basic 33 Russian Cyrillic letters (in upper- and lower-case). Keyboard layouts Each language has its own standard keyboard layout, adopted from typewriters. With the flexibility of computer input methods, there are also transliterating or phonetic/homophonic keyboard layouts made for typists who are more familiar with other layouts, like the common English QWERTY keyboard. When practical Cyrillic keyboard layouts or fonts are unavailable, computer users sometimes use transliteration or look-alike "volapuk" encoding to type in languages that are normally written with the Cyrillic alphabet. See also Cyrillic Alphabet Day Cyrillic digraphs Cyrillic script in Unicode Faux Cyrillic, real or fake Cyrillic letters used to give Latin-alphabet text a Soviet or Russian feel List of Cyrillic digraphs and trigraphs Russian Braille Russian cursive Russian manual alphabet Bulgarian Braille Vladislav the Grammarian Yugoslav Braille Yugoslav manual alphabet Internet top-level domains in Cyrillic gTLDs .мон .бг .қаз .рф .срб .укр .мкд .бел Notes Footnotes References Bringhurst, Robert (2002). The Elements of Typographic Style (version 2.5), pp. 262–264. Vancouver, Hartley & Marks. . Nezirović, M. (1992). Jevrejsko-španjolska književnost. Sarajevo: Svjetlost. [cited in Šmid, 2002] Prostov, Eugene Victor. 1931. "Origins of Russian Printing". Library Quarterly 1 (January): 255–77. Šmid, Katja (2002). " ", in Verba Hispanica, vol X. Liubliana: Facultad de Filosofía y Letras de la Universidad de Liubliana. . 'The Lives of St. Tsurho and St. Strahota', Bohemia, 1495, Vatican Library Philipp Ammon: Tractatus slavonicus. in: Sjani (Thoughts) Georgian Scientific Journal of Literary Theory and Comparative Literature, N 17, 2016, pp. 248–256 External links The Cyrillic Charset Soup overview and history of Cyrillic charsets. Transliteration of Non-Roman Scripts, a collection of writing systems and transliteration tables History and development of the Cyrillic alphabet Cyrillic Alphabets of Slavic Languages review of Cyrillic charsets in Slavic Languages. data entry in Old Cyrillic / Стара Кирилица (archived 22 February 2014) Cyrillic and its Long Journey East – NamepediA Blog, article about the Cyrillic script Unicode collation charts—including Cyrillic letters, sorted by shape Bulgarian inventions Eastern Europe North Asia Central Asia
5641
https://en.wikipedia.org/wiki/Consonant
Consonant
In articulatory phonetics, a consonant is a speech sound that is articulated with complete or partial closure of the vocal tract. Examples are and [b], pronounced with the lips; and [d], pronounced with the front of the tongue; and [g], pronounced with the back of the tongue; , pronounced in the throat; , [v], and , pronounced by forcing air through a narrow channel (fricatives); and and , which have air flowing through the nose (nasals). Contrasting with consonants are vowels. Since the number of speech sounds in the world's languages is much greater than the number of letters in any one alphabet, linguists have devised systems such as the International Phonetic Alphabet (IPA) to assign a unique and unambiguous symbol to each attested consonant. The English alphabet has fewer consonant letters than the English language has consonant sounds, so digraphs like , , , and are used to extend the alphabet, though some letters and digraphs represent more than one consonant. For example, the sound spelled in "this" is a different consonant from the sound in "thin". (In the IPA, these are and , respectively.) Etymology The word consonant comes from Latin oblique stem , from 'sounding-together', a calque of Greek (plural , ). Dionysius Thrax calls consonants ( 'sounded with') because in Greek they can only be pronounced with a vowel. He divides them into two subcategories: ( 'half-sounded'), which are the continuants, and ( 'unsounded'), which correspond to plosives. This description does not apply to some languages, such as the Salishan languages, in which plosives may occur without vowels (see Nuxalk), and the modern concept of 'consonant' does not require co-occurrence with a vowel. Consonant sounds and consonant letters The word consonant may be used ambiguously for both speech sounds and the letters of the alphabet used to write them. In English, these letters are B, C, D, F, G, J, K, L, M, N, P, Q, S, T, V, X, Z and often H, R, W, Y. In English orthography, the letters H, R, W, Y and the digraph GH are used for both consonants and vowels. For instance, the letter Y stands for the consonant/semi-vowel in yoke, the vowel in myth, the vowel in funny, the diphthong in sky, and forms several digraphs for other diphthongs, such as say, boy, key. Similarly, R commonly indicates or modifies a vowel in non-rhotic accents. This article is concerned with consonant sounds, however they are written. Consonants versus vowels Consonants and vowels correspond to distinct parts of a syllable: The most sonorous part of the syllable (that is, the part that is easiest to sing), called the syllabic peak or nucleus, is typically a vowel, while the less sonorous margins (called the onset and coda) are typically consonants. Such syllables may be abbreviated CV, V, and CVC, where C stands for consonant and V stands for vowel. This can be argued to be the only pattern found in most of the world's languages, and perhaps the primary pattern in all of them. However, the distinction between consonant and vowel is not always clear cut: there are syllabic consonants and non-syllabic vowels in many of the world's languages. One blurry area is in segments variously called semivowels, semiconsonants, or glides. On one side, there are vowel-like segments that are not in themselves syllabic, but form diphthongs as part of the syllable nucleus, as the i in English boil . On the other, there are approximants that behave like consonants in forming onsets, but are articulated very much like vowels, as the y in English yes . 
Some phonologists model these as both being the underlying vowel , so that the English word bit would phonemically be , beet would be , and yield would be phonemically . Likewise, foot would be , food would be , wood would be , and wooed would be . However, there is a (perhaps allophonic) difference in articulation between these segments, with the in yes and yield and the of wooed having more constriction and a more definite place of articulation than the in boil or bit or the of foot. The other problematic area is that of syllabic consonants, segments articulated as consonants but occupying the nucleus of a syllable. This may be the case for words such as church in rhotic dialects of English, although phoneticians differ in whether they consider this to be a syllabic consonant, , or a rhotic vowel, : Some distinguish an approximant that corresponds to a vowel , for rural as or ; others see these as a single phoneme, . Other languages use fricative and often trilled segments as syllabic nuclei, as in Czech and several languages in Democratic Republic of the Congo, and China, including Mandarin Chinese. In Mandarin, they are historically allophones of , and spelled that way in Pinyin. Ladefoged and Maddieson call these "fricative vowels" and say that "they can usually be thought of as syllabic fricatives that are allophones of vowels". That is, phonetically they are consonants, but phonemically they behave as vowels. Many Slavic languages allow the trill and the lateral as syllabic nuclei (see Words without vowels). In languages like Nuxalk, it is difficult to know what the nucleus of a syllable is, or if all syllables even have nuclei. If the concept of 'syllable' applies in Nuxalk, there are syllabic consonants in words like (?) 'seal fat'. Miyako in Japan is similar, with 'to build' and 'to pull'. Each spoken consonant can be distinguished by several phonetic features: The manner of articulation is how air escapes from the vocal tract when the consonant or approximant (vowel-like) sound is made. Manners include stops, fricatives, and nasals. The place of articulation is where in the vocal tract the obstruction of the consonant occurs, and which speech organs are involved. Places include bilabial (both lips), alveolar (tongue against the gum ridge), and velar (tongue against soft palate). In addition, there may be a simultaneous narrowing at another place of articulation, such as palatalisation or pharyngealisation. Consonants with two simultaneous places of articulation are said to be coarticulated. The phonation of a consonant is how the vocal cords vibrate during the articulation. When the vocal cords vibrate fully, the consonant is called voiced; when they do not vibrate at all, it is voiceless. The voice onset time (VOT) indicates the timing of the phonation. Aspiration is a feature of VOT. The airstream mechanism is how the air moving through the vocal tract is powered. Most languages have exclusively pulmonic egressive consonants, which use the lungs and diaphragm, but ejectives, clicks, and implosives use different mechanisms. The length is how long the obstruction of a consonant lasts. This feature is borderline distinctive in English, as in "wholly" vs. "holy" , but cases are limited to morpheme boundaries. Unrelated roots are differentiated in various languages such as Italian, Japanese, and Finnish, with two length levels, "single" and "geminate". 
Estonian and some Sami languages have three phonemic lengths: short, geminate, and long geminate, although the distinction between the geminate and overlong geminate includes suprasegmental features. The articulatory force is how much muscular energy is involved. This has been proposed many times, but no distinction relying exclusively on force has ever been demonstrated. All English consonants can be classified by a combination of these features, such as "voiceless alveolar stop" . In this case, the airstream mechanism is omitted. Some pairs of consonants like p::b, t::d are sometimes called fortis and lenis, but this is a phonological rather than phonetic distinction. Consonants are scheduled by their features in a number of IPA charts: Examples The recently extinct Ubykh language had only 2 or 3 vowels but 84 consonants; the Taa language has 87 consonants under one analysis, 164 under another, plus some 30 vowels and tone. The types of consonants used in various languages are by no means universal. For instance, nearly all Australian languages lack fricatives; a large percentage of the world's languages lack voiced stops such as , , as phonemes, though they may appear phonetically. Most languages, however, do include one or more fricatives, with being the most common, and a liquid consonant or two, with the most common. The approximant is also widespread, and virtually all languages have one or more nasals, though a very few, such as the Central dialect of Rotokas, lack even these. This last language has the smallest number of consonants in the world, with just six. Most common In rhotic American English, the consonants spoken most frequently are . ( is less common in non-rhotic accents.) The most frequent consonant in many other languages is . The most universal consonants around the world (that is, the ones appearing in nearly all languages) are the three voiceless stops , , , and the two nasals , . However, even these common five are not completely universal. Several languages in the vicinity of the Sahara Desert, including Arabic, lack . Several languages of North America, such as Mohawk, lack both of the labials and . The Wichita language of Oklahoma and some West African languages, such as Ijo, lack the consonant on a phonemic level, but do use it phonetically, as an allophone of another consonant (of in the case of Ijo, and of in Wichita). A few languages on Bougainville Island and around Puget Sound, such as Makah, lack both of the nasals and altogether, except in special speech registers such as baby-talk. The 'click language' Nǁng lacks , and colloquial Samoan lacks both alveolars, and . Despite the 80-odd consonants of Ubykh, it lacks the plain velar in native words, as do the related Adyghe and Kabardian languages. But with a few striking exceptions, such as Xavante and Tahitian—which have no dorsal consonants whatsoever—nearly all other languages have at least one velar consonant: most of the few languages that do not have a simple (that is, a sound that is generally pronounced ) have a consonant that is very similar. For instance, an areal feature of the Pacific Northwest coast is that historical *k has become palatalized in many languages, so that Saanich for example has and but no plain ; similarly, historical *k in the Northwest Caucasian languages became palatalized to in extinct Ubykh and to in most Circassian dialects. 
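The feature-based classification described above lends itself to a simple data representation. The following toy sketch (an illustration only; the feature inventory is deliberately simplified and limited to a few English consonants) stores each consonant as a (voicing, place, manner) triple and looks symbols up by their description:

# Toy model of the feature classification described above: each consonant is a
# (voicing, place, manner) triple. The inventory is deliberately tiny and simplified.
CONSONANTS = {
    "p": ("voiceless", "bilabial", "stop"),
    "b": ("voiced", "bilabial", "stop"),
    "t": ("voiceless", "alveolar", "stop"),
    "d": ("voiced", "alveolar", "stop"),
    "k": ("voiceless", "velar", "stop"),
    "g": ("voiced", "velar", "stop"),
    "s": ("voiceless", "alveolar", "fricative"),
    "z": ("voiced", "alveolar", "fricative"),
    "m": ("voiced", "bilabial", "nasal"),
    "n": ("voiced", "alveolar", "nasal"),
}

def find(voicing=None, place=None, manner=None):
    """Return the symbols whose features match every constraint that is given."""
    wanted = (voicing, place, manner)
    return [sym for sym, feats in CONSONANTS.items()
            if all(w is None or w == f for w, f in zip(wanted, feats))]

print(find(voicing="voiceless", place="alveolar", manner="stop"))  # ['t']
print(find(manner="nasal"))                                        # ['m', 'n']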
See also IPA consonant chart with audio Articulatory phonetics List of consonants List of phonetics topics Words without vowels Notes References Sources Ian Maddieson, Patterns of Sounds, Cambridge University Press, 1984. External links Interactive manner and place of articulation Consonants (Journal of West African Languages)
5642
https://en.wikipedia.org/wiki/Costume%20jewelry
Costume jewelry
Costume or fashion jewelry includes a range of decorative items worn for personal adornment that are manufactured as less expensive ornamentation to complement a particular fashionable outfit or garment as opposed to "real" (fine) jewelry, which is more costly and which may be regarded primarily as collectibles, keepsakes, or investments. From the outset, costume jewelry — also known as fashion jewelry — paralleled the styles of its more precious fine counterparts. Terminology It is also known as artificial jewellery, imitation jewellery, imitated jewelry, trinkets, fashion jewelry, junk jewelry, fake jewelry, or fallalery. Etymology The term costume jewelry dates back to the early 20th century. It reflects the use of the word "costume" to refer to what is now called an "outfit". Components Originally, costume or fashion jewelry was made of inexpensive simulated gemstones, such as rhinestones or lucite, set in pewter, silver, nickel, or brass. During the Depression years, rhinestones were even downgraded by some manufacturers to meet the cost of production. During the World War II era, sterling silver was often incorporated into costume jewelry designs, primarily because the components used for base metal were needed for wartime production (i.e., military applications) and a ban was placed on their use in the private sector. Base metal had originally been popular because it could approximate platinum's color; sterling silver fulfilled the same function. This resulted in a number of years during which sterling silver costume jewelry was produced, and some pieces can still be found in today's vintage jewelry marketplace. Modern costume jewelry incorporates a wide range of materials. High-end crystals, cubic zirconia simulated diamonds, and some semi-precious stones are used in place of precious stones. Metals include gold- or silver-plated brass, and sometimes vermeil or sterling silver. Lower-priced jewelry may still use gold plating over pewter, nickel, or other metals; items made in countries outside the United States may contain lead. Some pieces incorporate plastic, acrylic, leather, or wood. Historical expression Costume jewelry can be characterized by the period in history in which it was made. Art Deco period (1920s–1930s) The Art Deco movement was an attempt to combine the harshness of mass production with the sensitivity of art and design. It was during this period that Coco Chanel introduced costume jewelry to complete the costume. The Art Deco movement died with the onset of the Great Depression and the outbreak of World War II. According to Schiffer, some of the characteristics of the costume jewelry in the Art Deco period were: free-flowing curves replaced with a harshly geometric and symmetrical theme; and long pendants, bangle bracelets, cocktail rings, and elaborate accessory items such as cigarette cases and holders. Retro period (1935 to 1950) In the Retro period, designers struggled with the art versus mass production dilemma. Natural materials merged with plastics. The Retro period primarily included American-made jewelry, which had a distinctly American look. With the war in Europe, many European jewelry firms were forced to shut down. Many European designers emigrated to the U.S., where the economy was recovering. 
According to Schiffer, some of the characteristics of costume jewelry in the Retro period were: Glamour, elegance, and sophistication Flowers, bows, and sunburst designs with a Hollywood flair Moonstones, horse motifs, military influence, and ballerinas Bakelite and other plastic jewelry Art Modern period (1945 to 1960) In the Art Modern period following World War II, jewelry designs became more traditional and understated. The big, bold styles of the Retro period went out of style and were replaced by the more tailored styles of the 1950s and 1960s. According to Schiffer, some of the characteristics of costume jewelry in the Art Modern period were: Bold, lavish jewelry Large, chunky bracelets, charm bracelets, Jade/opal, citrine and topaz Poodle pins, Christmas tree pins, and other Christmas jewelry Rhinestones With the advent of the Mod period came "Body Jewelry". Carl Schimel of Kim Craftsmen Jewelry was at the forefront of this style. While Kim Craftsmen closed in the early 1990s, many collectors still forage for their items at antique shows and flea markets. General history Costume jewelry has been part of the culture for almost 300 years. During the 18th century, jewelers began making pieces with inexpensive glass. In the 19th century, costume jewelry made of semi-precious material came into the market. Jewels made of semi-precious material were more affordable, and this affordability gave common people the chance to own costume jewelry. But the real golden era for costume jewelry began in the middle of the 20th century. The new middle class wanted beautiful, but affordable jewelry. The demand for jewelry of this type coincided with the machine age and the industrial revolution. The revolution made the production of carefully executed replicas of admired heirloom pieces possible. As the class structure in America changed, so did measures of real wealth. Women in all social stations, even the working-class woman, could own a small piece of costume jewelry. The average town and countrywoman could acquire and wear a considerable amount of this mass-produced jewelry that was both affordable and stylish. Costume jewelry was also made popular by various designers in the mid-20th century. Some of the most remembered names in costume jewelry include both the high and low priced brands: Crown Trifari, Dior, Chanel, Miriam Haskell, Monet, Napier, Corocraft, Coventry, and Kim Craftsmen. A significant factor in the popularization of costume jewelry was Hollywood movies. The leading female stars of the 1940s and 1950s often wore and then endorsed the pieces produced by a range of designers. If you admired a necklace worn by Bette Davis in The Private Lives of Elizabeth and Essex, you could buy a copy from Joseff of Hollywood, who made the original. Stars such as Vivien Leigh, Elizabeth Taylor, and Jane Russell appeared in adverts for the pieces and the availability of the collections in shops such as Woolworth made it possible for ordinary women to own and wear such jewelry. Coco Chanel greatly popularized the use of faux jewelry in her years as a fashion designer, bringing costume jewelry to life with gold and faux pearls. Kenneth Jay Lane has since the 1960s been known for creating unique pieces for Jackie Onassis, Elizabeth Taylor, Diana Vreeland, and Audrey Hepburn. He is probably best known for his three-strand faux pearl necklace worn by Barbara Bush to her husband's inaugural ball. 
In many instances, high-end fashion jewelry has achieved a "collectible" status and increased value over time. Today, there is a substantial secondary market for vintage fashion jewelry. The main collecting market is for 'signed pieces', that is, pieces that have the maker's mark, usually stamped on the reverse. Amongst the most sought after are Miriam Haskell, Coro, Butler and Wilson, Crown Trifari, and Sphinx. However, there is also demand for good quality 'unsigned' pieces, especially if they are of an unusual design. Business and industry Costume jewelry is considered a discrete category of fashion accessory and displays many characteristics of a self-contained industry. Costume jewelry manufacturers are located throughout the world, with a particular concentration in parts of China and India, where entire citywide and region-wide economies are dominated by the trade of these goods. There has been considerable controversy in the United States and elsewhere about the lack of regulations in the manufacture of such jewelry—these range from human rights issues surrounding the treatment of labor, to the use of manufacturing processes in which small, but potentially harmful, amounts of toxic metals are added during production. In 2010, the Associated Press released the story that toxic levels of the metal cadmium were found in children's jewelry. An Associated Press investigation found that some pieces contained more than 80 percent cadmium. The wider issues surrounding imports, exports, trade laws, and globalization also apply to the costume jewelry trade. As part of the supply chain, wholesalers in the United States and other nations purchase costume jewelry from manufacturers and typically import or export it to wholesale distributors and suppliers who deal directly with retailers. Wholesale costume jewelry merchants traditionally seek out new suppliers at trade shows. As the Internet has become increasingly important in global trade, the trade-show model has changed. Retailers can now select from a large number of wholesalers with sites on the World Wide Web. The wholesalers purchase from international suppliers, also available on the Web, in different parts of the world, such as Chinese, Korean, Indonesian, Thai, and Indian jewelry companies that offer a wide range of products in bulk quantities. Some of these sites also market directly to consumers, who can purchase costume jewelry at greatly reduced prices. Some of these websites categorize fashion jewelry separately, while others use this term in place of costume jewelry. The trend of jewelry-making at home by hobbyists for personal enjoyment or for sale on sites like Etsy has resulted in the common practice of buying wholesale costume jewelry in bulk and using it for parts. According to a 2011 report, demand for artificial or imitation jewelry rose by 85% due to the increase in gold prices. See also Marcasite jewelry References Jewellery components
5643
https://en.wikipedia.org/wiki/Channel%20Islands
Channel Islands
The Channel Islands are an archipelago in the English Channel, off the French coast of Normandy. They are divided into two Crown Dependencies: the Bailiwick of Jersey, which is the largest of the islands; and the Bailiwick of Guernsey, consisting of Guernsey, Alderney, Sark, Herm and some smaller islands. Historically, they are the remnants of the Duchy of Normandy. Although they are not part of the United Kingdom, the UK is currently responsible for the defence and international relations of the islands. The Crown Dependencies are neither members of the Commonwealth of Nations, nor part of the European Union. They have a total population of about , and the bailiwicks' capitals, Saint Helier and Saint Peter Port, have populations of 33,500 and 18,207 respectively. "Channel Islands" is a geographical term, not a political unit. The two bailiwicks have been administered separately since the late 13th century. Each has its own independent laws, elections, and representative bodies (although in modern times, politicians from the islands' legislatures are in regular contact). Any institution common to both is the exception rather than the rule. The Bailiwick of Guernsey is divided into three jurisdictions – Guernsey, Alderney and Sark – each with its own legislature. Although there are a few pan-island institutions (such as the Channel Islands Brussels Office, the Director of Civil Aviation and the Channel Islands Financial Ombudsman, which are actually joint ventures between the bailiwicks), these tend to be established structurally as equal projects between Guernsey and Jersey. Otherwise, entities whose names imply membership of both Guernsey and Jersey might in fact be from one bailiwick only. For instance, The International Stock Exchange is in Saint Peter Port and therefore is in Guernsey. The term "Channel Islands" began to be used around 1830, possibly first by the Royal Navy as a collective name for the islands. The term refers only to the archipelago to the west of the Cotentin Peninsula. Other populated islands located in the English Channel, and close to the coast of Britain, such as the Isle of Wight, Hayling Island and Portsea Island, are not regarded as "Channel Islands". Geography The two major islands are Jersey and Guernsey. They make up 99% of the population and 92% of the area. List of islands Names The names of the larger islands in the archipelago in general have the -ey suffix, whilst those of the smaller ones have the -hou suffix. These are believed to be from the Old Norse ey (island) and holmr (islet). The Chausey Islands The Chausey Islands south of Jersey are not generally included in the geographical definition of the Channel Islands but are occasionally described in English as 'French Channel Islands' in view of their French jurisdiction. They were historically linked to the Duchy of Normandy, but they are part of the French territory along with continental Normandy, and not part of the British Isles or of the Channel Islands in a political sense. They are an incorporated part of the commune of Granville (Manche). While they are popular with visitors from France, Channel Islanders can only visit them by private or charter boats as there are no direct transport links from the other islands. In official Jersey Standard French, the Channel Islands are called 'Îles de la Manche', while in France, the term 'Îles Anglo-normandes' (Anglo-Norman Isles) is used to refer to the British 'Channel Islands' in contrast to other islands in the Channel. 
Chausey is referred to as an 'Île normande' (as opposed to anglo-normande). 'Îles Normandes' and 'Archipel Normand' have also, historically, been used in Channel Island French to refer to the islands as a whole. Waters The very large tidal variation provides an environmentally rich inter-tidal zone around the islands, and some islands such as Burhou, the Écréhous, and the Minquiers have been designated Ramsar sites. The waters around the islands include the following: The Swinge (between Alderney and Burhou) The Little Swinge (between Burhou and Les Nannels) La Déroute (between Jersey and Sark, and Jersey and the Cotentin) Le Raz Blanchard, or Race of Alderney (between Alderney and the Cotentin) The Great Russel (between Sark, Jéthou and Herm) The Little Russel (between Guernsey, Herm and Jéthou) Souachehouais (between Le Rigdon and L'Étacq, Jersey) Le Gouliot (between Sark and Brecqhou) La Percée (between Herm and Jéthou) Highest point The highest point in the islands is Les Platons in Jersey at 143 metres (469 ft) above sea level. The lowest point is the English Channel (sea level). Climate History Prehistory The earliest evidence of human occupation of the Channel Islands has been dated to 250,000 years ago when they were attached to the landmass of continental Europe. The islands became detached by rising sea levels in the Mesolithic period. The numerous dolmens and other archaeological sites extant and recorded in history demonstrate the existence of a population large enough and organised enough to undertake constructions of considerable size and sophistication, such as the burial mound at La Hougue Bie in Jersey or the statue menhirs of Guernsey. From the Iron Age Hoards of Armorican coins have been excavated, providing evidence of trade and contact in the Iron Age period. Evidence for Roman settlement is sparse, although evidently the islands were visited by Roman officials and traders. The Roman name for the Channel Islands was I. Lenuri (Lenur Islands) and is included in the Peutinger Table. The traditional Latin names used for the islands (Caesarea for Jersey, Sarnia for Guernsey, Riduna for Alderney) derive (possibly mistakenly) from the Antonine Itinerary. Gallo-Roman culture was adopted to an unknown extent in the islands. In the sixth century, Christian missionaries visited the islands. Samson of Dol, Helier, Marculf and Magloire are among saints associated with the islands. In the sixth century, they were already included in the diocese of Coutances where they remained until the Reformation. There were probably some Celtic Britons who settled on the islands in the 5th and 6th centuries AD (the indigenous Celts of Great Britain, and the ancestors of the modern Welsh, Cornish, and Bretons) who had emigrated from Great Britain in the face of invading Anglo-Saxons. But there were not enough of them to leave any trace, and the islands continued to be ruled by the king of the Franks and their church remained part of the diocese of Coutances. From the beginning of the ninth century, Norse raiders appeared on the coasts. Norse settlement eventually succeeded initial attacks, and it is from this period that many place names of Norse origin appear, including the modern names of the islands. From the Duchy of Normandy In 933, the islands were granted to William I Longsword by Raoul, the King of Western Francia, and annexed to the Duchy of Normandy. In 1066, William II of Normandy invaded and conquered England, becoming William I of England, also known as William the Conqueror. 
In the period 1204–1214, King John lost the Angevin lands in northern France, including mainland Normandy, to King Philip II of France, but managed to retain control of the Channel Islands. In 1259, his successor, Henry III of England, by the Treaty of Paris, officially surrendered his claim and title to the Duchy of Normandy, while retaining the Channel Islands, as a peer of France and feudal vassal of the King of France. Since then, the Channel Islands have been governed as two separate bailiwicks and were never absorbed into the Kingdom of England nor its successor kingdoms of Great Britain and the United Kingdom. During the Hundred Years' War, the Channel Islands were part of the French territory recognizing the claims of the English kings to the French throne. The islands were invaded by the French in 1338, who held some territory until 1345. Edward III of England granted a Charter in July 1341 to Jersey, Guernsey, Sark and Alderney, confirming their customs and laws to secure allegiance to the English Crown. Owain Lawgoch, a mercenary leader of a Free Company in the service of the French Crown, attacked Jersey and Guernsey in 1372, and in 1373 Bertrand du Guesclin besieged Mont Orgueil. The young King Richard II of England reconfirmed in 1378 the Charter rights granted by his grandfather, followed in 1394 with a second Charter granting, because of great loyalty shown to the Crown, exemption forever from English tolls, customs and duties. Jersey was occupied by the French in 1461 in exchange for helping the Lancastrians fight against the Yorkists during the Wars of the Roses. It was retaken by the Yorkists in 1468. In 1483, a papal bull decreed that the islands would be neutral during time of war. This privilege of neutrality enabled islanders to trade with both France and England and was respected until 1689 when it was abolished by Order in Council following the Glorious Revolution in Great Britain. Various attempts to transfer the islands from the diocese of Coutances (to Nantes (1400), Salisbury (1496), and Winchester (1499)) had little effect until an Order in Council of 1569 brought the islands formally into the diocese of Winchester. Control by the bishop of Winchester was ineffectual as the islands had turned overwhelmingly Calvinist and the episcopacy was not restored until 1620 in Jersey and 1663 in Guernsey. After the loss of Calais in 1558, the Channel Islands were the last remaining English holdings in France and the only French territory that was controlled by the English kings as Kings of France. This situation lasted until the English kings dropped their title and claims to the French throne in 1801, confirming the Channel Islands' position as a crown dependency under the sovereignty of neither Great Britain nor France but of the British crown directly. Sark in the 16th century was uninhabited until colonised from Jersey in the 1560s. The grant of seigneurship from Elizabeth I of England in 1565 forms the basis of Sark's constitution today. From the 17th century During the Wars of the Three Kingdoms, Jersey held out strongly for the Royalist cause, providing refuge for Charles, Prince of Wales in 1646 and 1649–1650, while the more strongly Presbyterian Guernsey more generally favoured the parliamentary cause (although Castle Cornet was held by Royalists and did not surrender until October 1651). The islands acquired commercial and political interests in the North American colonies. 
Islanders became involved with the Newfoundland fisheries in the 17th century. In recognition for all the help given to him during his exile in Jersey in the 1640s, Charles II gave George Carteret, Bailiff and governor, a large grant of land in the American colonies, which he promptly named New Jersey, now part of the United States of America. Sir Edmund Andros, bailiff of Guernsey, was an early colonial governor in North America, and head of the short-lived Dominion of New England. In the late 18th century, the islands were dubbed "the French Isles". Wealthy French émigrés fleeing the French Revolution sought residency in the islands. Many of the town domiciles existing today were built in that time. In Saint Peter Port, a large part of the harbour had been built by 1865. 20th century World War II The islands were occupied by the German Army during World War II. The British Government demilitarised the islands in June 1940, and the lieutenant-governors were withdrawn on 21 June, leaving the insular administrations to continue government as best they could under impending military occupation. Before German troops landed, between 30 June and 4 July 1940, evacuation took place. Many young men had already left to join the Allied armed forces, as volunteers. 6,600 out of 50,000 left Jersey while 17,000 out of 42,000 left Guernsey. Thousands of children were evacuated with their schools to England and Scotland. The population of Sark largely remained where they were; but in Alderney, all but six people left. In Alderney, the occupying Germans built four prison camps which housed approximately 6,000 people, of whom over 700 died. Due to the destruction of documents, it is impossible to state how many forced workers died in the other islands. Alderney had the only Nazi concentration camps on British soil. The Royal Navy blockaded the islands from time to time, particularly following the Invasion of Normandy in June 1944. There was considerable hunger and privation during the five years of German occupation, particularly in the final months when the population was close to starvation. Intense negotiations resulted in some humanitarian aid being sent via the Red Cross, leading to the arrival of Red Cross parcels in the supply ship SS Vega in December 1944. The German occupation of 1940–45 was harsh: over 2,000 islanders were deported by the Germans, and some Jews were sent to concentration camps; partisan resistance and retribution, accusations of collaboration, and slave labour also occurred. Many Spaniards, initially refugees from the Spanish Civil War, were brought to the islands to build fortifications. Later, Russians and Central Europeans continued the work. Many land mines were laid, with 65,718 land mines laid in Jersey alone. There was no resistance movement in the Channel Islands on the scale of that in mainland France. This has been ascribed to a range of factors including the physical separation of the islands, the density of troops (up to one German for every two Islanders), the small size of the islands precluding any hiding places for resistance groups, and the absence of the Gestapo from the occupying forces. Moreover, much of the population of military age had already joined the British Army. The end of the occupation came after VE-Day on 8 May 1945, with Jersey and Guernsey being liberated on 9 May. The German garrison in Alderney was left until 16 May, and it was one of the last of the Nazi German remnants to surrender. 
The first evacuees returned on the first sailing from Great Britain on 23 June, but the people of Alderney were unable to start returning until December 1945. Many of the evacuees who returned home had difficulty reconnecting with their families after five years of separation. After 1945 Following the liberation of 1945, reconstruction led to a transformation of the economies of the islands, attracting immigration and developing tourism. The legislatures were reformed and non-party governments embarked on social programmes, aided by the incomes from offshore finance, which grew rapidly from the 1960s. The islands decided not to join the European Economic Community when the UK joined. Since the 1990s, declining profitability of agriculture and tourism has challenged the governments of the islands. Flag gallery Governance The Channel Islands fall into two separate self-governing bailiwicks, the Bailiwick of Guernsey and the Bailiwick of Jersey. Each of these is a British Crown Dependency, and neither is a part of the United Kingdom. They have been parts of the Duchy of Normandy since the 10th century, and Queen Elizabeth II was often referred to by her traditional and conventional title of Duke of Normandy. However, pursuant to the Treaty of Paris (1259), she governed in her right as The Queen (the "Crown in right of Jersey", and the "Crown in right of the république of the Bailiwick of Guernsey"), and not as the Duke. This notwithstanding, it is a matter of local pride for monarchists to treat the situation otherwise: the Loyal toast at formal dinners was to 'The Queen, our Duke', rather than to 'Her Majesty, The Queen' as in the UK. The Queen died in 2022 and her son Charles III became the King. A bailiwick is a territory administered by a bailiff. Although the words derive from a common root ('bail' = 'to give charge of') there is a vast difference between the meanings of the word 'bailiff' in Great Britain and in the Channel Islands; a bailiff in Britain is a court-appointed private debt-collector authorised to collect judgment debts, in the Channel Islands, the Bailiff in each bailiwick is the civil head, presiding officer of the States, and also head of the judiciary, and thus the most important citizen in the bailiwick. In the early 21st century, the existence of governmental offices such as the bailiffs' with multiple roles straddling the different branches of government came under increased scrutiny for their apparent contravention of the doctrine of separation of powers—most notably in the Guernsey case of McGonnell -v- United Kingdom (2000) 30 EHRR 289. That case, following final judgement at the European Court of Human Rights, became part of the impetus for much recent constitutional change, particularly the Constitutional Reform Act 2005 (2005 c.4) in the UK, including the separation of the roles of the Lord Chancellor, the abolition of the House of Lords' judicial role, and its replacement by the UK Supreme Court. The islands' bailiffs, however, still retain their historic roles. The systems of government in the islands date from Norman times, which accounts for the names of the legislatures, the States, derived from the Norman 'États' or 'estates' (i.e. the Crown, the Church, and the people). The States have evolved over the centuries into democratic parliaments. The UK Parliament has power to legislate for the islands, but Acts of Parliament do not extend to the islands automatically. 
Usually, an Act gives power to extend its application to the islands by an Order in Council, after consultation. For the most part the islands legislate for themselves. Each island has its own primary legislature, known as the States of Guernsey and the States of Jersey, with Chief Pleas in Sark and the States of Alderney. The Channel Islands are not represented in the UK Parliament. Laws passed by the States are given royal assent by The King in Council, to whom the islands' governments are responsible. The islands have never been part of the European Union, and thus were not a party to the 2016 referendum on the EU membership, but were part of the Customs Territory of the European Community by virtue of Protocol Three to the Treaty on European Union. In September 2010, a Channel Islands Brussels Office was set up jointly by the two Bailiwicks to develop the Channel Islands' influence with the EU, to advise the Channel Islands' governments on European matters, and to promote economic links with the EU. Both bailiwicks are members of the British–Irish Council, and Jèrriais and Guernésiais are recognised regional languages of the islands. The legal courts are separate; separate courts of appeal have been in place since 1961. Among the legal heritage from Norman law is the Clameur de haro. The basis of the legal systems of both Bailiwicks is Norman customary law (Coutume) rather than the English Common Law, although elements of the latter have become established over time. Islanders are full British citizens, but were not classed as European citizens unless by descent from a UK national. Any British citizen who applies for a passport in Jersey or Guernsey receives a passport bearing the words "British Islands, Bailiwick of Jersey" or "British Islands, Bailiwick of Guernsey". Under the provisions of Protocol Three, Channel Islanders who do not have a close connection with the UK (no parent or grandparent from the UK, and have never been resident in the UK for a five-year period) did not automatically benefit from the EU provisions on free movement within the EU, and their passports received an endorsement to that effect. This affected only a minority of islanders. Under the UK Interpretation Act 1978, the Channel Islands are deemed to be part of the British Islands, not to be confused with the British Isles. For the purposes of the British Nationality Act 1981, the "British Islands" include the United Kingdom (Great Britain and Northern Ireland), the Channel Islands and the Isle of Man, taken together, unless the context otherwise requires. Economy Tourism is still important. However, Jersey and Guernsey have, since the 1960s, become major offshore financial centres. Historically Guernsey's horticultural and greenhouse activities have been more significant than in Jersey, and Guernsey has maintained light industry as a higher proportion of its economy than Jersey. In Jersey, potatoes are an important export crop, shipped mostly to the UK. Jersey is heavily reliant on financial services, with 39.4% of Gross Value Added (GVA) in 2018 contributed by the sector. Rental income comes second at 15.1% with other business activities at 11.2%. Tourism 4.5% with agriculture contributing just 1.2% and manufacturing even lower at 1.1%. GVA has fluctuated between £4.5 and £5 billion for 20 years. 
Jersey has had a steadily rising population, increasing from below 90,000 in 2000 to over 105,000 in 2018 which combined with a flat GVA has resulted in GVA per head of population falling from £57,000 to £44,000 per person. Guernsey had a GDP of £3.2 billion in 2018 and with a stable population of around 66,000 has had a steadily rising GDP, and a GVA per head of population which in 2018 surpassed £52,000. Both bailiwicks issue their own banknotes and coins, which circulate freely in all the islands alongside UK coinage and Bank of England and Scottish banknotes. Transport and communications Post Since 1969, Jersey and Guernsey have operated postal administrations independently of the UK's Royal Mail, with their own postage stamps, which can be used for postage only in their respective Bailiwicks. UK stamps are no longer valid, but mail to the islands, and to the Isle of Man, is charged at UK inland rates. It was not until the early 1990s that the islands joined the UK's postcode system, Jersey postcodes using the initials JE and Guernsey GY. Transport Road Each of the three largest islands has a distinct vehicle registration scheme: Guernsey (GBG): a number of up to five digits; Jersey (GBJ): J followed by up to six digits (JSY vanity plates are also issued); Alderney (GBA): AY followed by up to five digits (four digits are the most that have been used, as redundant numbers are re-issued). In Sark, where most motor traffic is prohibited, the few vehicles – nearly all tractors – do not display plates. Bicycles display tax discs. Sea In the 1960s, names used for the cross-Channel ferries plying the mail route between the islands and Weymouth, Dorset, were taken from the popular Latin names for the islands: Caesarea (Jersey), Sarnia (Guernsey) and Riduna (Alderney). Fifty years later, the ferry route between the Channel Islands and the UK is operated by Condor Ferries from both St Helier, Jersey and St Peter Port, Guernsey, using high-speed catamaran fast craft to Poole in the UK. A regular passenger ferry service on the Commodore Clipper goes from both Channel Island ports to Portsmouth daily, and carries both passengers and freight. Ferry services to Normandy are operated by Manche Îles Express, and services between Jersey and Saint-Malo are operated by Compagnie Corsaire and Condor Ferries. The Isle of Sark Shipping Company operates small ferries to Sark. Normandy Trader operates an ex military tank landing craft for transporting freight between the islands and France. On 20 August 2013, Huelin-Renouf, which had operated a "lift-on lift-off" container service for 80 years between the Port of Southampton and the Port of Jersey, ceased trading. Senator Alan Maclean, a Jersey politician, had previously tried to save the 90-odd jobs furnished by the company to no avail. On 20 September, it was announced that Channel Island Lines would continue this service, and would purchase the MV Huelin Dispatch from Associated British Ports who in turn had purchased them from the receiver in the bankruptcy. The new operator was to be funded by Rockayne Limited, a closely held association of Jersey businesspeople. Air There are three airports in the Channel Islands: Alderney Airport, Guernsey Airport and Jersey Airport. They are directly connected to each other by services operated by Blue Islands and Aurigny. Rail Historically, there have been railway networks on Jersey, Guernsey, and Alderney, but all of the lines on Jersey and Guernsey have been closed and dismantled. 
Today, there are three working railways in the Channel Islands, of which the Alderney Railway is the only one providing a regular timetabled passenger service. The other two are a gauge miniature railway, also on Alderney, and the heritage steam railway operated on Jersey as part of the Pallot Heritage Steam Museum. Media The Channel Islands are served by a number of local radio services – BBC Radio Jersey and BBC Radio Guernsey, Channel 103 and Island FM – as well as regional television news opt-outs from BBC Channel Islands and ITV Channel Television. On 1 August 2021, DAB+ digital radio became available for the first time, introducing new stations like the local Bailiwick Radio and Soleil Radio, and UK-wide services like Capital, Heart, and Times Radio. There are two broadcast transmitters serving Jersey – at Frémont Point and Les Platons – as well as one at Les Touillets in Guernsey and a relay in Alderney. There are several local newspapers, including the Guernsey Press and the Jersey Evening Post, and magazines. Telephone Jersey has always operated its own telephone services independently of Britain's national system; Guernsey established its own telephone service in 1968. Both islands still form part of the British telephone numbering plan, but Ofcom on the mainland does not have responsibility for telecommunications regulatory and licensing issues on the islands. It is responsible for wireless telegraphy licensing throughout the islands, and by agreement, for broadcasting regulation in the two large islands only. Submarine cables connect the various islands and provide connectivity with England and France. Internet Modern broadband speeds are available in all the islands, including full-fibre (FTTH) in Jersey (offering speeds of up to 1 Gbps on all broadband connections) and VDSL in Guernsey, where some businesses and homes also have fibre connectivity. Providers include Sure and JT. The two Bailiwicks each have their own internet domain, .GG (Guernsey, Alderney, Sark) and .JE (Jersey), which are managed by channelisles.net. Culture The Norman language predominated in the islands until the nineteenth century, when increasing influence from English-speaking settlers and easier transport links led to Anglicisation. There are four main dialects/languages of Norman in the islands: Auregnais (Alderney, extinct in the late twentieth century), Dgèrnésiais (Guernsey), Jèrriais (Jersey) and Sercquiais (Sark, an offshoot of Jèrriais). Victor Hugo spent many years in exile, first in Jersey and then in Guernsey, where he finished Les Misérables. Guernsey is the setting of Hugo's later novel Les Travailleurs de la Mer (Toilers of the Sea). A "Guernsey-man" also makes an appearance in chapter 91 of Herman Melville's Moby-Dick. The annual "Muratti", the inter-island football match, is considered the sporting event of the year, although, due to broadcast coverage, it no longer attracts the crowds of spectators travelling between the islands that it did during the twentieth century. Cricket is popular in the Channel Islands. The Jersey cricket team and the Guernsey cricket team are both associate members of the International Cricket Council. The teams have played each other in the inter-insular match since 1957. In 2001 and 2002, the Channel Islands entered a team into the MCCA Knockout Trophy, the one-day tournament of the minor counties of English and Welsh cricket. 
Channel Island sportsmen and women compete in the Commonwealth Games for their respective islands, and the islands have also been enthusiastic supporters of the Island Games. Shooting is a popular sport, in which islanders have won Commonwealth medals. Guernsey's traditional colour for sporting and other purposes is green, and Jersey's is red. The inhabitants of the main islands have traditional animal nicknames: Guernsey: les ânes ("donkeys" in French and Norman): the steepness of St Peter Port streets required beasts of burden, but Guernsey people also claim it is a symbol of their strength of character, which Jersey people traditionally interpret as stubbornness. Jersey: les crapauds ("toads" in French and Jèrriais): Jersey has toads and snakes, which Guernsey lacks. Sark: les corbins ("crows" in Sercquiais, Dgèrnésiais and Jèrriais, les corbeaux in French): crows could be seen from the sea on the island's coast. Alderney: les lapins ("rabbits" in French and Auregnais): the island is noted for its warrens. Religion Christianity was brought to the islands around the sixth century; according to tradition, Jersey was evangelised by St Helier, Guernsey by St Samson of Dol, and the smaller islands were occupied at various times by monastic communities representing strands of Celtic Christianity. At the Reformation, the previously Catholic islands converted to Calvinism under the influence of an influx of French-language pamphlets published in Geneva. Anglicanism was imposed in the seventeenth century, but the Non-Conformist local tendency returned with a strong adoption of Methodism. In the late twentieth century, a strong Catholic presence re-emerged with the arrival of numerous Portuguese workers (both from mainland Portugal and the island of Madeira). Their numbers have been reinforced by recent migrants from Poland and elsewhere in Eastern Europe. Today, Evangelical churches have been established. Services are held in a number of languages. According to 2015 statistics, 39% of the population was non-religious. Other islands in the English Channel A number of islands in the English Channel are part of France. Among these are Bréhat, Île de Batz, Chausey, Tatihou and the Îles Saint-Marcouf. The Isle of Wight, which is part of England, lies just off the coast of Great Britain, between the Channel and the Solent. Hayling and Portsea islands are also part of England, which is part of the United Kingdom. See also German occupation of the Channel Islands List of churches, chapels and meeting halls in the Channel Islands Places named after the Channel Islands Notes References Bibliography Encyclopædia Britannica Vol. 5 (1951), Encyclopædia Britannica, Inc., Chicago – London – Toronto – Republished Hamlin, John F. "No 'Safe Haven': Military Aviation in the Channel Islands 1939–1945" Air Enthusiast, No. 83, September/October 1999, pp. 6–15 External links States of Alderney States of Guernsey States of Jersey Government of Sark British Isles Northwestern Europe Geography of Europe English-speaking countries and territories Special territories of the European Union
5644
https://en.wikipedia.org/wiki/Comedy%20film
Comedy film
A comedy film is a category of film that emphasizes humor. These films are designed to make the audience laugh. Films in this style traditionally have a happy ending (dark comedy being an exception to this rule). Comedy is one of the oldest genres in film, and it is derived from classical comedy in theatre. Some of the earliest silent films were comedies, such as slapstick comedy, which often relies on visual depictions such as sight gags and pratfalls and so can be enjoyed without sound. To provide drama and excitement to silent movies, live music was played in sync with the action on the screen by pianos, organs, and other instruments. When sound films became more prevalent during the 1920s, comedy films grew in popularity, as laughter could now result not only from burlesque situations but also from humorous dialogue. Comedy, compared with other film genres, places more focus on individual star actors, with many former stand-up comics transitioning to the film industry due to their popularity. In The Screenwriters Taxonomy (2017), Eric R. Williams contends that film genres are fundamentally based upon a film's atmosphere, character, and story, and therefore the labels "drama" and "comedy" are too broad to be considered genres. Instead, his comedy taxonomy argues that comedy is a type of film that contains at least a dozen different sub-types. A number of hybrid genres use comedy, such as action comedy and romantic comedy. Comedy is a genre of entertainment that is designed to make audiences laugh. It can take many forms, including stand-up comedy, sketch comedy, sitcoms, and comedic films. Comedy often uses humor and satire to comment on social and political issues, as well as everyday life. Many comedians use observational humor, in which they draw on their own experiences and the world around them to create comedic material. Physical comedy, which uses gestures, facial expressions and body language to create humour, is also a popular form of comedy. The genre of comedy is known for its ability to make people laugh but also to make them think, and it can be a reflection of society and its issues. History Silent film era The first comedy film was L'Arroseur Arrosé (1895), directed and produced by film pioneer Louis Lumière. Less than 60 seconds long, it shows a boy playing a prank on a gardener. The most noted comedy actors of the silent film era (1895–1927) were Charlie Chaplin, Harold Lloyd, and Buster Keaton. Present era In a 2023 article in Collider, Lisa Laman states that "modern-day [film] comedies tend to suffer from so many visual problems" and use "frustratingly inert images" and "overly-lit" sets, making them "look like sitcoms, not movies." She says modern comedy movies are filmed with "little imagination in…staging", poor production values, "awkward editing and flat camerawork", and few "visual gags". Sub-types Anarchic comedy The anarchic comedy film, as the name suggests, uses a random or stream-of-consciousness type of humor that often lampoons a form of authority. The genre dates from the silent era. Notable examples of this type of film are those produced by Monty Python. Other examples include Duck Soup (1933) and Caddyshack (1980). Bathroom comedy (or gross-out comedy) Gross-out films are aimed at the young adult market (age 18–24) and rely heavily on vulgar, sexual, or "toilet" humor. They often contain a large amount of profanity and nudity. Examples include Animal House (1978) and Freddy Got Fingered (2001). 
Comedy of ideas This sub-type uses comedy to explore serious ideas such as religion, sex, or politics. Often, the characters represent particular divergent world views and are forced to interact for comedic effect and social commentary. Examples include Ferris Bueller's Day Off (1986) and Swing Vote (2008). Comedy of manners A comedy of manners satirizes the mores and affectations of a social class. The plot of a comedy of manners is often concerned with an illicit love affair or other scandals. Generally, the plot is less important for its comedic effect than its witty dialogue. This form of comedy has a long ancestry that dates back at least as far as Much Ado About Nothing by William Shakespeare, published in 1623. Examples of comedy of manners films include Breakfast at Tiffany's (1961) and Under the Tuscan Sun (2003). Black comedy The black comedy film deals with taboo subjects, including death, murder, crime, suicide, and war, in a satirical manner. An example is Dr. Strangelove (1964). Farce Farcical films exaggerate situations beyond the realm of possibility, thereby making them entertaining. Film examples include Sleeper (1973). Mockumentary Mockumentary comedies are fictional but use a documentary style that includes interviews and "documentary" footage alongside regular scenes. Examples include This Is Spinal Tap (1984) and Reboot Camp (2020). Musical comedy Musical comedy as a film genre has its roots in the 1920s, with Disney's Steamboat Willie (1928) being the most recognized of these early films. The subgenre resurged in popularity in the 1970s, with movies such as Bugsy Malone (1976) and Grease (1978) gaining status as cult classics. Observational humor Observational humor films find humor in the common practices of everyday life. Some film examples of observational humor include Knocked Up (2007) and The Intern (2015). Parody (or spoof) A parody or spoof film satirizes other film genres or classic films. Such films employ sarcasm, stereotyping, mockery of scenes from other films, and the obviousness of meaning in a character's actions. Examples of this form include Blazing Saddles (1974) and Spaceballs (1987). Sex comedy The humor in sex comedy is primarily derived from sexual situations and desire, as in Bachelor Party (1984) and The Inbetweeners Movie (2011). Situational comedy Situational comedy films' humor comes from knowing a stock group of characters (or character types) and then exposing them to different situations to create humorous and ironic juxtaposition. Examples include Planes, Trains and Automobiles (1987) and The Hangover (2009). Straight comedy This broad sub-type applies to films that do not attempt a specific approach to comedy but, rather, use comedy for comedy's sake. Chasing Amy (1997) and The Shaggy Dog (2006) are examples of straight comedy films. Slapstick films Slapstick films involve exaggerated, boisterous physical action to create impossible and humorous situations. Because slapstick relies predominantly on visual depictions of events, it does not require sound. Accordingly, the subgenre was ideal for silent movies and was prevalent during that era. Popular stars of the slapstick genre include Harold Lloyd, Roscoe Arbuckle, Charlie Chaplin, Peter Sellers and Norman Wisdom. Some of these stars, as well as acts such as Laurel and Hardy and the Three Stooges, also found success incorporating slapstick comedy into sound films. Modern examples of slapstick comedy include Mr. Bean's Holiday (2007) and Get Smart (2008). 
Surreal comedy Although not specifically linked to the history of surrealism, surreal comedies include behavior and storytelling techniques that are illogical, including bizarre juxtapositions, absurd situations, and unpredictable reactions to normal situations. Some examples are It's a Mad, Mad, Mad, Mad World (1963) and Everything Everywhere All at Once (2022). Hybrid subgenres According to Williams' taxonomy, all film descriptions should contain their type (comedy or drama) combined with one (or more) subgenres. This combination does not create a separate genre, but rather, provides a better understanding of the film. Action comedy Films of this type blend comic antics and action, where the stars combine one-liners with a thrilling plot and daring stunts. The genre became a specific draw in North America in the 1980s when comedians such as Eddie Murphy started taking more action-oriented roles, such as in 48 Hrs. (1982) and Beverly Hills Cop (1984). Sub-genres of the action comedy (labeled macro-genres by Williams) include: Martial arts films Slapstick martial arts films became a mainstay of Hong Kong action cinema through the work of Jackie Chan among others, such as Who Am I? (1998). Kung Fu Panda is an action comedy that focuses on the martial art of kung fu. Superhero films Some action films focus on superheroes; for example, The Incredibles, Hancock, Kick-Ass, and Mystery Men. Other categories of the action comedy include: Buddy films Films starring mismatched partners for comedic effect, such as in Midnight Run, Rush Hour, 21 Jump Street, Bad Boys, Starsky and Hutch, Booksmart, The Odd Couple, and Ted. Comedy thriller Comedy thriller is a type that combines elements of humor and suspense. Examples include Silver Streak, Charade, Kiss Kiss Bang Bang, In Bruges, Mr. and Mrs. Smith, Grosse Pointe Blank, The Thin Man, The Big Fix, and The Lady Vanishes. Comedy mystery Comedy mystery is a film genre combining elements of comedy and mystery fiction. Though the genre arguably peaked in the 1930s and 1940s, comedy-mystery films have been continually produced since. Examples include the Pink Panther series, Scooby-Doo films, Clue (1985) and Knives Out (2019). Crime comedy A hybrid of crime and comedy films; examples include Inspector Palmu's Mistake (1960), O Brother, Where Art Thou? (2000), Take the Money and Run (1969) and Who Framed Roger Rabbit (1988). Fantasy comedy Fantasy comedy films use magic, supernatural or mythological figures for comedic purposes. Some fantasy comedy includes an element of parody, or satire, turning fantasy conventions on their head, such as the hero becoming a cowardly fool or the princess being a klutz. Examples of these films include Big, Being John Malkovich, Ernest Saves Christmas, Ernest Scared Stupid, Night at the Museum, Groundhog Day, Click, and Shrek. Comedy horror Comedy horror is a genre/type in which the usual dark themes and "scare tactics" attributed to horror films are treated with a humorous approach. These films often parody goofy horror clichés, as in Scream, Young Frankenstein, The Rocky Horror Picture Show, Little Shop of Horrors, The Haunted Mansion, and Scary Movie, where campy styles are favored. Others are much more subtle and do not parody horror, such as An American Werewolf in London. 
Another style of comedy horror relies on over-the-top violence and gore, as in The Evil Dead (1981), The Return of the Living Dead (1985), Braindead (1992), and Club Dread (2004) – such films are sometimes known as splatstick, a portmanteau of the words splatter and slapstick. It would be reasonable to put Ghostbusters in this category. Day-in-the-life comedy Day-in-the-life films take small events in a person's life and raise their level of importance. The "small things in life" feel as important to the protagonist (and the audience) as the climactic battle in an action film or the final shootout in a western. Often, the protagonists deal with multiple, overlapping issues in the course of the film. The day-in-the-life comedy often finds humor in commenting upon the absurdity or irony of daily life; for example, The Terminal (2004) or Waitress (2007). Character humor is also used extensively in day-in-the-life comedies, as can be seen in American Splendor (2003). Romantic comedy Romantic comedies are humorous films with central themes that reinforce societal beliefs about love (e.g., themes such as "love at first sight", "love conquers all", or "there is someone out there for everyone"); the story typically revolves around characters falling into (and out of, and back into) love. Amélie (2001), Annie Hall (1977), Charade (1963), City Lights (1931), Four Weddings and a Funeral (1994), It (1927), The Lobster (2015), My Wife, the Director General (1966), My Favorite Wife (1940), Pretty Woman (1990), Some Like It Hot (1959), There's Something About Mary (1998) and When Harry Met Sally... (1989) are examples of romantic comedies. Screwball comedy A subgenre of the romantic comedy, screwball comedies appear to focus on the story of a central male character until a strong female character takes center stage; at this point, the man's story becomes secondary to a new issue typically introduced by the woman; this story grows in significance and, as it does, the man's masculinity is challenged by the sharp-witted woman, who is often his love interest. A screwball comedy typically includes a romantic element, an interplay between people of different economic strata, quick and witty repartee, some form of role reversal, and a happy ending. Some examples of screwball comedy during its heyday include It Happened One Night (1934), Bringing Up Baby (1938), The Philadelphia Story (1940), His Girl Friday (1940), and Mr. & Mrs. Smith (1941); more recent examples include What's Up, Doc? (1972), Rat Race (2001), and Our Idiot Brother (2011). Science fiction comedy Science fiction comedy films often exaggerate the elements of traditional science fiction films to comic effect. Examples include Spaceballs, Ghostbusters, Galaxy Quest, Mars Attacks!, Men in Black, and many more. Sports comedy Sports comedy combines the comedy genre with the sports film genre. Thematically, the story is often one of "Our Team" versus "Their Team"; their team will always try to win, and our team will show the world that they deserve recognition or redemption; the story does not always have to involve a team. The story could also be about an individual athlete, or it could focus on an individual playing on a team. The comedic aspect of this super-genre often comes from physical humor (Happy Gilmore, 1996), character humor (Caddyshack, 1980), or the juxtaposition of bad athletes succeeding against the odds (The Bad News Bears, 1976). 
War comedy War films typically tell the story of a small group of isolated individuals who – one by one – get killed (literally or metaphorically) by an outside force until there is a final fight to the death; the idea of the protagonists facing death is a central expectation in a war film. War comedies infuse this idea of confronting death with a morbid sense of humor. In a war film even though the enemy may out-number, or out-power, the hero, we assume that the enemy can be defeated if only the hero can figure out how. Often, this strategic sensibility provides humorous opportunities in a war comedy. Examples include Good Morning, Vietnam; M*A*S*H; the Francis the Talking Mule series; and others. Western comedy Films in the western super-genre often take place in the American Southwest or in Mexico, with a large number of scenes occurring outside so we can soak in nature's rugged beauty. Visceral expectations for the audience include fistfights, gunplay, and chase scenes. There is also the expectation of spectacular panoramic images of the countryside including sunsets, wide open landscapes, and endless deserts and sky. Western comedies often find their humor in specific characters (Three Amigos, 1986), in interpersonal relationships (Lone Ranger, 2013) or in creating a parody of the western (Rango, 2011). By country See also AFI's 100 Years...100 Laughs (1924–1998, list made in 2000) References Bibliography Thomas W. Bohn and Richard L. Stromgren, Light and Shadows: A History of Motion Pictures, 1975, Mayfield Publishing. Williams, Eric R. (2017) The Screenwriters Taxonomy: A Roadmap to Creative Storytelling. New York, NY: Routledge Press, Studies in Media Theory and Practice. External links Comedy films at IMDB Top 100 Comedy movies from Rottentomatoes Film genres
5645
https://en.wikipedia.org/wiki/Cult%20film
Cult film
A cult film or cult movie, also commonly referred to as a cult classic, is a film that has acquired a cult following. Cult films are known for their dedicated, passionate fanbase which forms an elaborate subculture, members of which engage in repeated viewings, dialogue-quoting, and audience participation. Inclusive definitions allow for major studio productions, especially box-office bombs, while exclusive definitions focus more on obscure, transgressive films shunned by the mainstream. The difficulty in defining the term and subjectivity of what qualifies as a cult film mirror classificatory disputes about art. The term cult film itself was first used in the 1970s to describe the culture that surrounded underground films and midnight movies, though cult was in common use in film analysis for decades prior to that. Cult films trace their origin back to controversial and suppressed films kept alive by dedicated fans. In some cases, reclaimed or rediscovered films have acquired cult followings decades after their original release, occasionally for their camp value. Other cult films have since become well-respected or reassessed as classics; there is debate as to whether these popular and accepted films are still cult films. After failing at the cinema, some cult films have become regular fixtures on cable television or profitable sellers on home video. Others have inspired their own film festivals. Cult films can both appeal to specific subcultures and form their own subcultures. Other media that reference cult films can easily identify which demographics they desire to attract and offer savvy fans an opportunity to demonstrate their knowledge. Cult films frequently break cultural taboos, and many feature excessive displays of violence, gore, sexuality, profanity, or combinations thereof. This can lead to controversy, censorship, and outright bans; less transgressive films may attract similar amounts of controversy when critics call them frivolous or incompetent. Films that fail to attract requisite amounts of controversy may face resistance when labeled as cult films. Mainstream films and big budget blockbusters have attracted cult followings similar to more underground and lesser known films; fans of these films often emphasize the films' niche appeal and reject the more popular aspects. Fans who like the films for the wrong reasons, such as perceived elements that represent mainstream appeal and marketing, will often be ostracized or ridiculed. Likewise, fans who stray from accepted subcultural scripts may experience similar rejection. Since the late 1970s, cult films have become increasingly popular. Films that once would have been limited to obscure cult followings are now capable of breaking into the mainstream, and showings of cult films have proved to be a profitable business venture. Overbroad usage of the term has resulted in controversy, as purists state it has become a meaningless descriptor applied to any film that is the slightest bit weird or unconventional; others accuse Hollywood studios of trying to artificially create cult films or use the term as a marketing tactic. Films are frequently stated to be an "instant cult classic" now, occasionally before they are released. Fickle fans on the Internet have latched on to unreleased films only to abandon them later on release. At the same time, other films have acquired massive, quick cult followings, owing to spreading virally through social media. 
Easy access to cult films via video on demand and peer-to-peer file sharing has led some critics to pronounce the death of cult films. Definition A cult film is any film that has a cult following, although the term is not easily defined and can be applied to a wide variety of films. Some definitions exclude films that have been released by major studios or have big budgets, that try specifically to become cult films, or that become accepted by mainstream audiences and critics. Cult films are defined by audience reaction as much as by their content. This may take the form of elaborate and ritualized audience participation, film festivals, or cosplay. Over time, the definition has become more vague and inclusive as it drifts away from earlier, stricter views. Increasing use of the term by mainstream publications has resulted in controversy, as cinephiles argue that the term has become meaningless or "elastic, a catchall for anything slightly maverick or strange". Academic Mark Shiel has criticized the term itself as being a weak concept, reliant on subjectivity; different groups can interpret films in their own terms. According to feminist scholar Joanne Hollows, this subjectivity causes films with large female cult followings to be perceived as too mainstream and not transgressive enough to qualify as a cult film. Academic Mike Chopra-Gant says that cult films become decontextualized when studied as a group, and Shiel criticizes this recontextualization as cultural commodification. In 2008, Cineaste asked a range of academics for their definition of a cult film. Several people defined cult films primarily in terms of their opposition to mainstream films and conformism, explicitly requiring a transgressive element, though others disputed the transgressive potential, given their demographic appeal to conventional moviegoers and the mainstreaming of cult films. Jeffrey Andrew Weinstock instead called them mainstream films with transgressive elements. Most definitions also required a strong community aspect, such as obsessed fans or ritualistic behavior. Citing misuse of the term, Mikel J. Koven took a self-described hard-line stance that rejected definitions that use any other criteria. Matt Hills instead stressed the need for an open-ended definition rooted in structuration, where the film and the audience reaction are interrelated and neither is prioritized. Ernest Mathijs focused on the accidental nature of cult followings, arguing that cult film fans consider themselves too savvy to be marketed to, while Jonathan Rosenbaum rejected the continued existence of cult films and called the term a marketing buzzword. Mathijs suggests that cult films help viewers understand ambiguity and incompleteness in life, given the difficulty of even defining the term. That cult films can have opposing qualities – such as good and bad, failure and success, innovative and retro – helps to illustrate that art is subjective and never self-evident. This ambiguity leads critics of postmodernism to accuse cult films of being beyond criticism, as the emphasis is now on personal interpretation rather than critical analysis or metanarratives. These inherent dichotomies can lead audiences to be split between ironic and earnest fans. Writing in Defining Cult Movies, Jancovich et al. quote academic Jeffrey Sconce, who defines cult films in terms of paracinema, marginal films that exist outside critical and cultural acceptance: everything from exploitation to beach party musicals to softcore pornography.
However, they reject cult films as having a single unifying feature; instead, they state that cult films are united in their "subcultural ideology" and opposition to mainstream tastes, itself a vague and undefinable term. Cult followings themselves can range from adoration to contempt, and they have little in common except for their celebration of nonconformity – even the bad films ridiculed by fans are artistically nonconformist, albeit unintentionally. At the same time, they state that bourgeois, masculine tastes are frequently reinforced, which makes cult films more of an internal conflict within the bourgeoisie than a rebellion against it. This results in an anti-academic bias despite the use of formal methodologies, such as defamiliarization. This contradiction exists in many subcultures, especially those dependent on defining themselves in terms of opposition to the mainstream. This nonconformity is eventually co-opted by the dominant forces, such as Hollywood, and marketed to the mainstream. Academic Xavier Mendik also defines cult films as opposing the mainstream and further proposes that films can become cult by virtue of their genre or content, especially if it is transgressive. Due to their rejection of mainstream appeal, Mendik says cult films can be more creative and political; times of relative political instability produce more interesting films. General overview Cult films have existed since the early days of cinema. Film critic Harry Allan Potamkin traces them back to 1910s France and the reception of Pearl White, William S. Hart, and Charlie Chaplin, which he described as "a dissent from the popular ritual". Nosferatu (1922) was an unauthorized adaptation of Bram Stoker's Dracula. Stoker's widow sued the production company and drove it to bankruptcy. All known copies of the film were ordered destroyed, but Nosferatu became an early cult film, kept alive by a cult following that circulated illegal bootlegs. Academic Chuck Kleinhans identifies the films of the Marx Brothers as other early cult films. On their original release, some highly regarded classics from the Golden Age of Hollywood were panned by critics and audiences and relegated to cult status. The Night of the Hunter (1955) was a cult film for years, quoted often and championed by fans, before it was reassessed as an important and influential classic. During this time, American exploitation films and imported European art films were marketed similarly. Although critics Pauline Kael and Arthur Knight argued against arbitrary divisions into high and low culture, American films settled into rigid genres; European art films continued to push the boundaries of simple definitions, and these exploitative art films and artistic exploitation films would go on to influence American cult films. Much like later cult films, these early exploitation films encouraged audience participation, influenced by live theater and vaudeville. Modern cult films grew from 1960s counterculture and underground films, popular among those who rejected mainstream Hollywood films. Underground film festivals led to the creation of midnight movies, which attracted cult followings. The term cult film itself was an outgrowth of this movement and was first used in the 1970s, though cult had been in use for decades in film analysis with both positive and negative connotations. These films were more concerned with cultural significance than the social justice sought by earlier avant-garde films.
Midnight movies became more popular and mainstream, peaking with the release of The Rocky Horror Picture Show (1975), which finally found its audience several years after its release. Eventually, the rise of home video would marginalize midnight movies once again, after which many directors joined the burgeoning independent film scene or went back underground. Home video would give a second life to box-office flops, as positive word-of-mouth or excessive replay on cable television led these films to develop an appreciative audience, as well as obsessive replay and study. For example, The Beastmaster (1982), despite its failure at the box office, became one of the most played movies on American cable television and developed into a cult film. Home video and television broadcasts of cult films were initially greeted with hostility. Joanne Hollows states that they were seen as turning cult films mainstream – in effect, feminizing them by opening them to distracted, passive audiences. Releases from major studios – such as The Big Lebowski (1998), which was distributed by Universal Studios – can become cult films when they fail at the box office and develop a cult following through reissues, such as midnight movies, festivals, and home video. Hollywood films, due to their nature, are more likely to attract this kind of attention, which leads to a mainstreaming effect on cult culture. With major studios behind them, even financially unsuccessful films can be re-released multiple times, which plays into a trend to capture audiences through repetitious reissues. The constant use of profanity and drugs in otherwise mainstream Hollywood films, such as The Big Lebowski, can alienate critics and audiences yet lead to a large cult following among more open-minded demographics not often associated with cult films, such as Wall Street bankers and professional soldiers. Thus, even comparatively mainstream films can satisfy the traditional demands of a cult film, perceived by fans as transgressive, niche, and uncommercial. Discussing his reputation for making cult films, Bollywood director Anurag Kashyap said, "I didn't set out to make cult films. I wanted to make box-office hits." Writing in Cult Cinema, academics Ernest Mathijs and Jamie Sexton state that this acceptance of mainstream culture and commercialism is not out of character, as cult audiences have a more complex relationship to these concepts: they are more opposed to mainstream values and excessive commercialism than they are to anything else. In a global context, popularity can vary widely by territory, especially with regard to limited releases. Mad Max (1979) was an international hit – except in America, where it became an obscure cult favorite, ignored by critics and available for years only in a dubbed version, though it earned over $100 million internationally. Foreign cinema can put a different spin on popular genres, such as Japanese horror, which was initially a cult favorite in America. Asian imports to the West are often marketed as exotic cult films and of interchangeable national identity, which academic Chi-Yun Shin criticizes as reductive. Foreign influence can affect fan response, especially on genres tied to a national identity; when they become more global in scope, questions of authenticity may arise. Filmmakers and films ignored in their own country can become the objects of cult adoration in another, producing perplexed reactions in their native country.
Cult films can also establish an early viability for more mainstream work, both for individual filmmakers and for a national cinema. The early cult horror films of Peter Jackson were so strongly associated with his homeland that they affected the international reputation of New Zealand and its cinema. As more artistic films emerged, New Zealand was perceived as a legitimate competitor to Hollywood, which mirrored Jackson's career trajectory. Heavenly Creatures (1994) acquired its own cult following, became a part of New Zealand's national identity, and paved the way for big-budget, Hollywood-style epics, such as Jackson's The Lord of the Rings trilogy. Mathijs states that cult films and fandom frequently involve nontraditional elements of time and time management. Fans will often watch films obsessively, an activity that is viewed by the mainstream as wasting time yet can be seen as resisting the commodification of leisure time. They may also watch films idiosyncratically: sped up, slowed down, frequently paused, or at odd hours. Cult films themselves subvert traditional views of time – time travel, non-linear narratives, and ambiguous establishments of time are all popular. Mathijs also identifies specific cult film viewing habits, such as viewing horror films on Halloween, sentimental melodrama on Christmas, and romantic films on Valentine's Day. These films are often viewed as marathons where fans can gorge themselves on their favorites. Mathijs states that cult films broadcast on Christmas have a nostalgic factor. These films, ritually watched every season, give a sense of community and shared nostalgia to viewers. New films often have trouble making inroads against the institutions of It's a Wonderful Life (1946) and Miracle on 34th Street (1947). These films provide mild criticism of consumerism while encouraging family values. Halloween, on the other hand, allows viewers to flout society's taboos and test their fears. Horror films have appropriated the holiday, and many horror films debut on Halloween. Mathijs criticizes the over-cultified, commercialized nature of Halloween and horror films, which feed into each other so much that Halloween has turned into an image or product with no real community. Mathijs states that Halloween horror conventions can provide the missing community aspect. Despite their oppositional nature, cult films can produce celebrities. As with cult films themselves, authenticity is an important aspect of these celebrities' popularity. Actors can become typecast as they become strongly associated with such iconic roles. Tim Curry, despite his acknowledged range as an actor, struggled to be cast in other roles after he achieved fame in The Rocky Horror Picture Show. Even when discussing unrelated projects, interviewers frequently bring up the role, which causes him to tire of discussing it. Mary Woronov, known for her transgressive roles in cult films, eventually transitioned to mainstream films. She was expected to recreate the transgressive elements of her cult films within the confines of mainstream cinema. Instead of the complex gender deconstructions of her Andy Warhol films, she became typecast as a lesbian or domineering woman. Sylvia Kristel, after starring in Emmanuelle (1974), found herself highly associated with the film and the sexual liberation of the 1970s. Caught between the transgressive elements of her cult film and the mainstream appeal of soft-core pornography, she was unable to work in anything but exploitation films and Emmanuelle sequels.
Despite her immense popularity and cult following, she would rate only a footnote in most histories of European cinema, if she were mentioned at all. Similarly, Chloë Sevigny has struggled with her reputation as a cult independent film star famous for her daring roles in transgressive films. Cult films can also trap directors. Leonard Kastle, who directed The Honeymoon Killers (1969), never directed another film. Despite his cult following, which included François Truffaut, he was unable to find financing for any of his other screenplays. The qualities that bring cult films to prominence – such as an uncompromising, unorthodox vision – caused Alejandro Jodorowsky to languish in obscurity for years. Transgression and censorship Transgressive films as a distinct artistic movement began in the 1970s. Unconcerned with genre distinctions, they drew inspiration equally from the nonconformity of European art cinema and experimental film, the gritty subject matter of Italian neorealism, and the shocking images of 1960s exploitation. Some used hardcore pornography and horror, occasionally at the same time. In the 1980s, filmmaker Nick Zedd identified this movement as the Cinema of Transgression and later wrote a manifesto. Popular in midnight showings, they were mainly limited to large urban areas, which led academic Joan Hawkins to label them as "downtown culture". These films acquired a legendary reputation as they were discussed and debated in alternative weeklies, such as The Village Voice. Home video would finally allow general audiences to see them, which gave many people their first taste of underground film. Ernest Mathijs says that cult films often disrupt viewer expectations, such as giving characters transgressive motivations or focusing attention on elements outside the film. Cult films can also transgress national stereotypes and genre conventions, such as Battle Royale (2000), which broke many rules of teenage slasher films. The reverse – when films based on cult properties lose their transgressive edge – can result in derision and rejection by fans. Audience participation itself can be transgressive, such as breaking long-standing taboos against talking during films and throwing things at the screen. According to Mathijs, critical reception is important to a film's perception as cult, through topicality and controversy. Topicality, which can be regional (such as objection to government funding of the film) or critical (such as philosophical objections to the themes), enables attention and a contextual response. Cultural topics make the film relevant and can lead to controversy, such as a moral panic, which provides opposition. Cultural values transgressed in the film, such as sexual promiscuity, can be attacked by proxy, through attacks on the film. These concerns can vary from culture to culture, and they need not be at all similar. However, Mathijs says the film must invoke metacommentary for it to be more than simply culturally important. While referencing previous arguments, critics may attack its choice of genre or its very right to exist. Taking stances on these varied issues, critics assure their own relevance while helping to elevate the film to cult status. Perceived racist and reductive remarks by critics can rally fans and raise the profile of cult films, an example of which would be Rex Reed's comments about Korean culture in his review of Oldboy (2003).
Critics can also polarize audiences and spark debates, such as how Joe Bob Briggs and Roger Ebert dueled over I Spit On Your Grave (1978). Briggs would later contribute a commentary track to the DVD release in which he describes it as a feminist film. Films which do not attract enough controversy may be ridiculed and rejected when suggested as cult films. Academic Peter Hutchings, noting the many definitions of a cult film that require transgressive elements, states that cult films are known in part for their excesses. Both the subject matter and its depiction are pushed to extremes that break taboos of good taste and aesthetic norms. Violence, gore, sexual perversity, and even the music can be pushed to stylistic excess far beyond that allowed by mainstream cinema. Film censorship can make these films obscure and difficult to find, common criteria used to define cult films. Despite this, these films remain well-known and prized among collectors. Fans will occasionally express frustration with dismissive critics and conventional analysis, which they believe marginalizes and misinterprets paracinema. The marketing of these films predominantly targets young men. Horror films in particular can draw fans who seek the most extreme films. Audiences can also ironically latch on to offensive themes, such as misogyny, using these films as catharsis for the things that they hate most in life. Exploitative, transgressive elements can be pushed to excessive extremes for both humor and satire. Frank Henenlotter faced censorship and ridicule, but he found acceptance among audiences receptive to themes that Hollywood was reluctant to touch, such as violence, drug addiction, and misogyny. Lloyd Kaufman sees his films' political statements as more populist and authentic than the hypocrisy of mainstream films and celebrities. Despite featuring an abundance of fake blood, vomit, and diarrhea, Kaufman's films have attracted positive attention from critics and academics. Excess can also exist as camp, such as films that highlight the excesses of 1980s fashion and commercialism. Films that are influenced by unpopular styles or genres can become cult films. Director Jean Rollin worked within cinéma fantastique, an unpopular genre in modern France. Influenced by American films and early French fantasists, he drifted between art, exploitation, and pornography. His films were reviled by critics, but he retained a cult following drawn by the nudity and eroticism. Similarly, Jess Franco chafed under fascist censorship in Spain but became influential in Spain's horror boom of the 1960s. These transgressive films that straddle the line between art and horror may have overlapping cult followings, each with its own interpretations and reasons for appreciating them. The films that followed Jess Franco were unique in their rejection of mainstream art. Popular among fans of European horror for their subversiveness and obscurity, these later Spanish films allowed political dissidents to criticize the fascist regime within the cloak of exploitation and horror. Unlike most exploitation directors, these filmmakers were not trying to establish a reputation; they were already established in the art-house world and intentionally chose to work within paracinema as a reaction against the New Spanish Cinema, an artistic revival supported by the fascists. As late as the 1980s, critics still cited Pedro Almodóvar's anti-macho iconoclasm as a rebellion against fascist mores, as he grew from countercultural rebel to mainstream respectability.
Transgressive elements that limit a director's appeal in one country can be celebrated or highlighted in another. Takashi Miike has been marketed in the West as a shocking and avant-garde filmmaker despite his many family-friendly comedies, which have not been imported. The transgressive nature of cult films can lead to their censorship. During the 1970s and early 1980s, a wave of explicit, graphic exploitation films caused controversy. Called "video nasties" within the UK, they ignited calls for censorship and stricter laws on home video releases, which were largely unregulated. Consequently, the British Board of Film Classification banned many popular cult films due to issues of sex, violence, and incitement to crime. Released during the cannibal boom, Cannibal Holocaust (1980) was banned in dozens of countries and caused the director to be briefly jailed over fears that it was a real snuff film. Although opposed to censorship, director Ruggero Deodato would later agree with cuts made by the BBFC that removed unsimulated animal killings, which had limited the film's distribution. Frequently banned films may introduce questions of authenticity as fans question whether they have seen a truly uncensored cut. Cult films have been falsely claimed to have been banned to increase their transgressive reputation and explain their lack of mainstream penetration. Marketing campaigns have also used such claims to raise interest among curious audiences. Home video has allowed cult film fans to import rare or banned films, finally giving them a chance to complete their collections with imports and bootlegs. Previously banned cult films are sometimes released with much fanfare, with fans assumed to be already familiar with the controversy. Personal responsibility is often highlighted, and a strong anti-censorship message may be present. Previously lost scenes cut by studios can be re-added to restore a director's original vision, which draws similar fanfare and acclaim from fans. Imports are sometimes censored to remove elements that would be controversial, such as references to Islamic spirituality in Indonesian cult films. Academics have written of how transgressive themes in cult films can be regressive. David Church and Chuck Kleinhans describe an uncritical celebration of transgressive themes in cult films, including misogyny and racism. Church has also criticized gendered descriptions of transgressive content that celebrate masculinity. Joanne Hollows further identifies a gendered component to the celebration of transgressive themes in cult films, where male terms are used to describe films outside the mainstream while female terms are used to describe mainstream, conformist cinema. Jacinda Read, expanding on this argument, states that cult films, despite their potential for empowerment of the marginalized, are more often used by politically incorrect males. Knowledgeable about feminism and multiculturalism, they seek a refuge from the academic acceptance of these progressive ideals. Their playful and ironic acceptance of regressive lad culture invites, and even dares, condemnation from academics and the uncool. Thus, cult films become a tool to reinforce mainstream values through transgressive content; Rebecca Feasy states that cultural hierarchies can also be reaffirmed through mockery of films perceived to be lacking masculinity. However, the sexploitation films of Doris Wishman took a feminist approach that avoided and subverted the male gaze and traditional goal-oriented methods.
Wishman's subject matter, though exploitative and transgressive, was always framed in terms of female empowerment and the feminine spectator. She repurposed common cult film motifs – female nudity and ambiguous gender – to comment on feminist topics. Similarly, the films of Russ Meyer were a complicated combination of transgressive, mainstream, progressive, and regressive elements. They attracted both acclaim and denunciation from critics and progressives. Transgressive films imported from cultures that are recognizably different yet still relatable can be used to progressively examine issues in another culture. Subcultural appeal and fandom Cult films can be used to help define or create groups as a form of subcultural capital; knowledge of cult films proves that one is "authentic" or "non-mainstream". They can be used to provoke an outraged response from the mainstream, which further defines the subculture, as only members could possibly tolerate such deviant entertainment. More accessible films have less subcultural capital; among extremists, banned films will have the most. By referencing cult films, media can identify desired demographics, strengthen bonds with specific subcultures, and stand out among those who understand the intertextuality. Popular films from previous eras may be reclaimed by genre fans long after they have been forgotten by the original audiences. This can be done for authenticity, such as horror fans who seek out now-obscure titles from the 1950s instead of the modern, well-known remakes. Authenticity may also drive fans to deny genre categorization to films perceived as too mainstream or accessible. Authenticity in performance and expertise can drive fan acclaim. Authenticity can also drive fans to decry the mainstream in the form of hostile critics and censors. Especially when promoted by enthusiastic and knowledgeable programmers, choice of venue can be an important part of expressing individuality. Besides creating new communities, cult films can link formerly disparate groups, such as fans and critics. As these groups intermix, they can influence each other, though this may be resisted by older fans, unfamiliar with these new references. In extreme cases, cult films can lead to the creation of religions, such as Dudeism. For their avoidance of mainstream culture and audiences, enjoyment of irony, and celebration of obscure subcultures, academic Martin Roberts compares cult film fans to hipsters. A film can become the object of a cult following within a particular region or culture if it has unusual significance. For example, Norman Wisdom's films, friendly to Marxist interpretation, amassed a cult following in Albania, as they were among the few Western films allowed by the country's Communist rulers. The Wizard of Oz (1939) and its star, Judy Garland, hold special significance to American and British gay culture, although it is a widely viewed and historically important film in greater American culture. Similarly, James Dean and his brief film career have become icons of alienated youth. Cult films can have such niche appeal that they are only popular within certain subcultures, such as Reefer Madness (1936) and Hemp for Victory (1942) among the stoner subculture. Beach party musicals, popular among American surfers, failed to find an equivalent audience when imported to the United Kingdom. When films target subcultures like this, they may seem unintelligible without the proper cultural capital.
Films which appeal to teenagers may offer subcultural identities that are easily recognized and differentiate various subcultural groups. Films which appeal to stereotypical male activities, such as sports, can easily gain strong male cult followings. Sports metaphors are often used in the marketing of cult films to males, such as emphasizing the "extreme" nature of the film, which increases the appeal to youth subcultures fond of extreme sports. Matt Hills' concept of the "cult blockbuster" involves cult followings inside larger, mainstream films. Although these are big-budget, mainstream films, they still attract cult followings. The cult fans differentiate themselves from ordinary fans in several ways: longstanding devotion to the film, distinctive interpretations, and fan works. Hills identifies three different cult followings for The Lord of the Rings, each with their own fandom separate from the mainstream. Academic Emma Pett identifies Back to the Future (1985) as another example of a cult blockbuster. Although the film was an instant hit when released, it has also developed a nostalgic cult following over the years. The hammy acting by Christopher Lloyd and quotable dialogue have drawn a cult following, as they mimic traditional cult films. Blockbuster science fiction films that include philosophical subtexts, such as The Matrix, allow cult film fans to enjoy them on a higher level than the mainstream. Star Wars, with its large cult following in geek subculture, has been cited as both a cult blockbuster and a cult film. Although a mainstream epic, Star Wars has provided its fans with a spirituality and culture outside of the mainstream. Fans, in response to the popularity of these blockbusters, will claim elements for themselves while rejecting others. For example, in the Star Wars film series, mainstream criticism of Jar Jar Binks focused on racial stereotyping; although cult film fans will use that to bolster their arguments, he is rejected because he represents mainstream appeal and marketing. Also, instead of valuing textual rarity, fans of cult blockbusters will value repeat viewings. They may also engage in behaviors more traditional for fans of cult television and other serial media, as cult blockbusters are often franchised, preconceived as a film series, or both. To reduce mainstream accessibility, a film series can be self-reflexive and full of in-jokes that only longtime fans can understand. Mainstream critics may ridicule commercially successful directors of cult blockbusters, such as James Cameron, Michael Bay, and Luc Besson, whose films have been called simplistic. This critical backlash may serve to enhance the filmmakers' reception as cult auteurs. In the same way, critics may ridicule fans of cult blockbusters as immature or shallow. Cult films can create their own subculture. Rocky Horror, originally made to exploit the popularity of glam subculture, became what academic Gina Marchetti called a "sub-subculture", a variant that outlived its parent subculture. Although often described as primarily composed of obsessed fans, cult film fandom can include many newer, less experienced members. Familiar with the film's reputation and having watched clips on YouTube, these fans may take the next step and enter the film's fandom.
If they are the majority, they may alter or ignore long-standing traditions, such as audience participation rituals; rituals which lack perceived authenticity may be criticized, but accepted rituals bring subcultural capital to veteran fans who introduce them to the newer members. Fans who flaunt their knowledge receive negative reactions. Newer fans may cite the film itself as their reason for attending a showing, but longtime fans often cite the community. Organized fandoms may spread and become popular as a way of introducing new people to the film, and theatrical screenings tend to be privileged by both the media and the fandom itself. Fandom can also be used as a process of legitimation. Fans of cult films, as in media fandom, are frequently producers instead of mere consumers. Unconcerned with traditional views on intellectual property, these fans produce works that are often unsanctioned and transformative and that ignore fictional canon. Like cult films themselves, magazines and websites dedicated to cult films revel in their self-conscious offensiveness. They maintain a sense of exclusivity by offending mainstream audiences with misogyny, gore, and racism. Obsessive trivia can be used to bore mainstream audiences while building up subcultural capital. Specialist stores on the fringes of society (or websites which prominently partner with hardcore pornographic sites) can be used to reinforce the outsider nature of cult film fandom, especially when they use erotic or gory imagery. By assuming a preexisting knowledge of trivia, non-fans can be excluded. Previous articles and controversies can also be alluded to without explanation. Casual readers and non-fans will thus be left out of discussions and debates, as they lack enough information to meaningfully contribute. When fans like a cult film for the wrong reasons, such as casting or characters aimed at mainstream appeal, they may be ridiculed. Thus, fandom can keep the mainstream at bay while defining itself in terms of the "Other", a philosophical construct divergent from social norms. Commercial aspects of fandom (such as magazines or books) can also be defined in terms of "otherness" and thus considered valid to consume: consumers purchasing independent or niche publications are discerning consumers, but the mainstream is denigrated. Irony or self-deprecating humor can also be used. In online communities, different subcultures attracted to transgressive films can clash over values and criteria for subcultural capital. Even within subcultures, fans who break subcultural scripts, such as denying the affectivity of a disturbing film, will be ridiculed for their lack of authenticity. Types "So bad it's good" The critic Michael Medved characterized examples of the "so bad it's good" class of low-budget cult film through books such as The Golden Turkey Awards. These films include financially fruitless and critically scorned films that have become inadvertent comedies to film buffs, such as Plan 9 from Outer Space (1959), Mommie Dearest (1981), The Room (2003), and the Ugandan action comedy film Who Killed Captain Alex? (2010). Similarly, Paul Verhoeven's Showgirls (1995) bombed in theaters but developed a cult following on video. Catching on, Metro-Goldwyn-Mayer capitalized on the film's ironic appeal and marketed it as a cult film. Sometimes, fans will impose their own interpretation on films which have attracted derision, such as reinterpreting an earnest melodrama as a comedy.
Jacob deNobel of the Carroll County Times states that films can be perceived as nonsensical or inept when audiences misunderstand avant-garde filmmaking or misinterpret parody. Films such as Rocky Horror can be misinterpreted as "weird for weirdness' sake" by people unfamiliar with the cult films that it parodies. deNobel ultimately rejects the use of the label "so bad it's good" as mean-spirited and often misapplied. Alamo Drafthouse programmer Zack Carlson has further said that any film which succeeds in entertaining an audience is good, regardless of irony. In francophone culture, "so bad it's good" films, known as nanars, have given rise to a subculture with dedicated websites such as Nanarland, film festivals and viewings in theaters, as well as various books analyzing the phenomenon. The rise of the Internet and on-demand films has led critics to question whether "so bad it's good" films have a future now that people have such diverse options in both availability and catalog, though fans eager to experience the worst films ever made can lead to lucrative showings for local theaters and merchandisers. Camp and guilty pleasures Chuck Kleinhans states that the difference between a guilty pleasure and a cult film can be as simple as the number of fans; David Church raises the question of how many people it takes to form a cult following, especially now that home video makes fans difficult to count. As these cult films become more popular, they can bring varied responses from fans that depend on different interpretations, such as camp, irony, genuine affection, or combinations thereof. Earnest fans, who recognize and accept the film's faults, can make minor celebrities of the film's cast, though the benefits are not always clear. Cult film stars known for their camp can inject subtle parody or signal when films should not be taken seriously. Campy actors can also provide comic book supervillains for serious, artistic-minded films. This can draw fan acclaim and obsession more readily than subtle, method-inspired acting. Mark Chalon Smith of the Los Angeles Times says technical faults may be forgiven if a film makes up for them in other areas, such as camp or transgressive content. Smith states that the early films of John Waters are amateurish and less influential than claimed, but Waters' outrageous vision cements his place in cult cinema. Films such as Myra Breckinridge (1970) and Beyond the Valley of the Dolls (1970) can experience critical reappraisal later, once their camp excess and avant-garde filmmaking are better accepted, and films that are initially dismissed as frivolous are often reassessed as campy. Films that intentionally try to appeal to fans of camp may end up alienating them, as the films become perceived as trying too hard or not authentic. Nostalgia According to academic Brigid Cherry, nostalgia "is a strong element of certain kinds of cult appeal." When Veoh added many cult films to its site, it cited nostalgia as a factor in their popularity. Academic I. Q. Hunter describes cult films as "New Hollywood in extremis" and a form of nostalgia for that period. Ernest Mathijs instead states that cult films use nostalgia as a form of resistance against progress and capitalistic ideas of a time-based economy. By virtue of its time travel plot, Back to the Future permits nostalgia for both the 1950s and 1980s.
Many members of its nostalgic cult following are too young to have been alive during those periods, which Emma Pett interprets as fondness for retro aesthetics, nostalgia for when they saw the film rather than when it was released, and looking to the past to find a better time period. Similarly, films directed by John Hughes have taken hold in midnight movie venues, trading on nostalgia for the 1980s and an ironic appreciation for their optimism. Mathijs and Sexton describe Grease (1978) as a film nostalgic about an imagined past that has acquired a nostalgic cult following. Other cult films, such as Streets of Fire (1984), create a new fictional world based on nostalgic views of the past. Cult films may also subvert nostalgia, such as The Big Lebowski, which introduces many nostalgic elements and then reveals them as fake and hollow. Scott Pilgrim vs. the World is a more recent example, containing extensive nostalgia for the music and video gaming culture of the 2000s. Nathan Lee of the New York Sun identifies the retro aesthetic and nostalgic pastiche in films such as Donnie Darko as factors in their popularity among midnight movie crowds. Midnight movies Author Tomas Crowder-Taraborrelli describes midnight movies as a reaction against the political and cultural conservatism in America, and Joan Hawkins identifies the movement as running the gamut from anarchist to libertarian, united in their anti-establishment attitude and punk aesthetic. These films are resistant to simple categorization and are defined by the fanaticism and ritualistic behaviors of their audiences. Midnight movies require a nightlife and an audience willing to invest themselves actively. Hawkins states that these films took a rather bleak point of view due to the living conditions of the artists and the economic prospects of the 1970s. Like the surrealists and dadaists, they not only satirically attacked society but also the very structure of film – a counter-cinema that deconstructs narrative and traditional processes. In the late 1980s and 1990s, midnight movies transitioned from underground showings to home video viewings; eventually, a desire for community brought a resurgence, and The Big Lebowski kick-started a new generation. Demographics shifted, and more hip and mainstream audiences were drawn to them. Although studios expressed skepticism, large audiences were drawn to box-office flops, such as Donnie Darko (2001), The Warriors (1979), and Office Space (1999). Modern midnight movies retain their popularity and have been strongly diverging from mainstream films shown at midnight. Mainstream cinemas, eager to disassociate themselves from negative associations and increase profits, have begun abandoning midnight screenings. Although classic midnight movies have dropped off in popularity, they still bring reliable crowds. Art and exploitation Although seemingly at odds with each other, art and exploitation films are frequently treated as equal and interchangeable in cult fandom, listed alongside each other and described in similar terms: their ability to provoke a response. The most exploitative aspects of art films are thus played up and their academic recognition ignored. This flattening of culture follows the popularity of post-structuralism, which rejects a hierarchy of artistic merit and equates exploitation and art.
Mathijs and Sexton state that although cult films are not synonymous with exploitation, as is occasionally assumed, this is a key component; they write that exploitation, which exists on the fringes of the mainstream and deals with taboo subjects, is well-suited for cult followings. Academic David Andrews writes that cult softcore films are "the most masculinized, youth-oriented, populist, and openly pornographic softcore area." The sexploitation films of Russ Meyer were among the first to abandon all hypocritical pretenses of morality and were technically proficient enough to gain a cult following. His persistent vision saw him received as an auteur worthy of academic study; director John Waters attributes this to Meyer's ability to create complicated, sexually charged films without resorting to explicit sex. Myrna Oliver described Doris Wishman's exploitation films as "crass, coarse, and camp ... perfect fodder for a cult following." "Sick films", the most disturbing and graphically transgressive films, have their own distinct cult following; these films transcend their roots in exploitation, horror, and art films. In 1960s and 1970s America, exploitation and art films shared audiences and marketing, especially in New York City's grindhouse cinemas. B and genre films Mathijs and Sexton state that genre is an important part of cult films; cult films will often mix, mock, or exaggerate the tropes associated with traditional genres. Science fiction, fantasy, and horror are known for their large and dedicated cult followings; as science fiction films become more popular, fans emphasize their non-mainstream and less commercial aspects. B films, which are often conflated with exploitation films, are as important to cult cinema as exploitation is. Teodor Reljic of Malta Today states that cult B films are a realistic goal for Malta's burgeoning film industry. Genre films, B films that strictly adhere to genre limitations, can appeal to cult film fans: given their transgressive excesses, horror films are likely to become cult films; films like Galaxy Quest (1999) highlight the importance of cult followings and fandom to science fiction; and authentic martial arts skills in Hong Kong action films can drive them to become cult favorites. Cult musicals can range from the traditional, such as Singin' in the Rain (1952), which appeal to cult audiences through nostalgia, camp, and spectacle, to the more non-traditional, such as Cry-Baby (1990), which parodies musicals, and Rocky Horror, which uses a rock soundtrack. The romantic fairy tale The Princess Bride (1987) failed to attract audiences in its original release, as the studio did not know how to market it. The freedom and excitement associated with cars can be an important part of drawing cult film fans to genre films, and they can signify action and danger with more ambiguity than a gun. Ad Week writes that cult B films, when released on home video, market themselves and need only enough advertising to raise curiosity or nostalgia. Animation Animation can provide wide open vistas for stories. The French film Fantastic Planet (1973) explored ideas beyond the limits of traditional, live-action science fiction films. Ralph Bakshi's career has been marked by controversy: Fritz the Cat (1972), the first animated film to be rated "X" by the MPAA, provoked outrage for its racial caricatures and graphic depictions of sex, and Coonskin (1975) was decried as racist.
Bakshi recalls that older animators had tired of "kid stuff" and desired edgier work, whereas younger animators hated his work for "destroying the Disney images". Eventually, his work was reassessed, and cult followings, whose members include Quentin Tarantino and Robert Rodriguez, developed around several of his films. Heavy Metal (1981) faced similar denunciations from critics. Donald Liebenson of the Los Angeles Times cites the violence and sexual imagery as alienating critics, who did not know what to make of the film. It would go on to become a popular midnight movie, frequently bootlegged by fans, as licensing issues kept it from being released on video for many years. Phil Hoad of The Guardian identifies Akira (1988) as introducing violent, adult Japanese animation (known as anime) to the West and paving the way for later works. Anime, according to academic Brian Ruh, is not a cult genre, but the lack of individual fandoms within anime fandom allows cult attention to bleed over and can help spread works internationally. Anime, which is frequently presented as a series (with movies either rising from existing series or spinning off series based on the film), provides its fans with alternative fictional canons and points of view that can drive fan activity. The Ghost in the Shell films, for example, provided Japanese fans with enough bonus material and spinoffs that they encouraged cult tendencies. Markets that did not support the sale of these materials saw less cult activity. The claymation film Gumby: The Movie (1995), which made only $57,100 at the box office against its $2.8 million budget but sold a million copies on VHS alone, was subsequently released on DVD and remastered in high definition for Blu-ray due to its strong cult following. As with many cult films, Gumby: The Movie received its own humorous RiffTrax audio commentary in 2021. Nonfiction Sensationalistic documentaries called mondo films replicate the most shocking and transgressive elements of exploitation films. They are usually modeled after "sick films" and cover similar subject matter. In The Cult Film Reader, academics Mathijs and Mendik write that these documentaries often present non-Western societies as "stereotypically mysterious, seductive, immoral, deceptive, barbaric or savage". Though these films can be interpreted as racist, Mathijs and Mendik state that they also "exhibit a liberal attitude towards the breaking of cultural taboos". Mondo films like Faces of Death mix real and fake footage freely, and they gain their cult following through the outrage and debate over authenticity that results. Like "so bad it's good" cult films, old propaganda and government hygiene films may be enjoyed ironically by more modern audiences for the camp value of the outdated themes and outlandish claims made about perceived social threats, such as drug use. Academic Barry K. Grant states that Frank Capra's Why We Fight World War II propaganda films are explicitly not cult, because they are "slickly made and have proven their ability to persuade an audience." The sponsored film Mr. B Natural became a cult hit when it was broadcast on the satirical television show Mystery Science Theater 3000; cast member Trace Beaulieu cited these educational shorts as his favorite to mock on the show.
Mark Jancovich states that cult audiences are drawn to these films because of the "very banality or incoherence of their political positions", unlike traditional cult films, which achieve popularity through auteurist radicalism. Mainstream popularity Mark Shiel explains the rising popularity of cult films as an attempt by cinephiles and scholars to escape the oppressive conformity and mainstream appeal of even independent film, as well as a lack of condescension in both critics and the films; academic Donna de Ville says it is a chance to subvert the dominance of academics and cinephiles. According to Xavier Mendik, "academics have been really interested in cult movies for quite a while now." Mendik has sought to bring together academic interest and fandom through Cine-Excess, a film festival. I. Q. Hunter states that "it's much easier to be a cultist now, but it is also rather more inconsequential." Citing the mainstream availability of Cannibal Holocaust, Jeffrey Sconce rejects definitions of cult films based on controversy and excess, as they have now become meaningless. Cult films have influenced such diverse industries as cosmetics, music videos, and fashion. Cult films have shown up in less expected places; as a sign of his popularity, a bronze statue of Ed Wood has been proposed in his hometown, and L'Osservatore Romano, the official newspaper of the Holy See, has courted controversy for its endorsement of cult films and pop culture. When cities attempt to renovate neighborhoods, fans have called attempts to demolish iconic settings from cult films "cultural vandalism". Cult films can also drive tourism, even when it is unwanted. From Latin America, Alejandro Jodorowsky's film El Topo (1970) has attracted the attention of rock musicians such as John Lennon, Mick Jagger, and Bob Dylan. As far back as the 1970s, Attack of the Killer Tomatoes (1978) was designed specifically to be a cult film, and The Rocky Horror Picture Show was produced by 20th Century Fox, a major Hollywood studio. Over its decades-long release, Rocky Horror became the seventh-highest-grossing R-rated film when adjusted for inflation; journalist Matt Singer has questioned whether Rocky Horror's popularity invalidates its cult status. Founded in 1974, Troma Entertainment, an independent studio, would become known for both its cult following and cult films. In the 1980s, Danny Peary's Cult Movies (1981) would influence director Edgar Wright and film critic Scott Tobias of The A.V. Club. The rise of home video would have a mainstreaming effect on cult films and cultish behavior, though some collectors would be unlikely to self-identify as cult film fans. Film critic Joe Bob Briggs began reviewing drive-in theater and cult films, though he faced much criticism as an early advocate of exploitation and cult films. Briggs highlights the mainstreaming of cult films by pointing out the respectful obituaries that cult directors have received from formerly hostile publications and the acceptance of politically incorrect films at mainstream film festivals. This acceptance is not universal, though, and some critics have resisted this mainstreaming of paracinema. Beginning in the 1990s, director Quentin Tarantino would have the greatest success in turning cult films mainstream. Tarantino later used his fame to champion obscure cult films that had influenced him and set up the short-lived Rolling Thunder Pictures, which distributed several of his favorite cult films.
Tarantino's clout led Phil Hoad of The Guardian to call him the world's most influential director. As major Hollywood studios and audiences both become savvy to cult films, productions once limited to cult appeal have instead become popular hits, and cult directors have become hot properties known for more mainstream and accessible films. Remarking on the popular trend of remaking cult films, Claude Brodesser-Akner of New York magazine states that Hollywood studios have been superstitiously hoping to recreate past successes rather than trading on nostalgia. Their popularity would lead some critics to proclaim the death of cult films now that they have finally become successful and mainstream, are too slick to attract a proper cult following, lack context, or are too easily found online. In response, David Church says that cult film fans have retreated to more obscure and difficult-to-find films, often using illegal distribution methods, which preserves the outlaw status of cult films. Virtual spaces, such as online forums and fan sites, replace the traditional fanzines and newsletters. Cult film fans consider themselves collectors, rather than consumers, as they associate consumers with mainstream, Hollywood audiences. This collecting can take the place of fetishization of a single film. Addressing concerns that DVDs have revoked the cult status of films like Rocky Horror, academic Mikel J. Koven states that small-scale screenings with friends and family can replace midnight showings. Koven also identifies television shows, such as Twin Peaks, as retaining more traditional cult activities inside popular culture. David Lynch himself has not ruled out another television series, even though studios have become reluctant to take chances on non-mainstream ideas. Despite this, the Alamo Drafthouse has capitalized on cult films and the surrounding culture through inspiration drawn from Rocky Horror and retro promotional gimmickry. They sell out their shows regularly and have acquired a cult following of their own. Academic Bob Batchelor, writing in Cult Pop Culture, states that the internet has democratized cult culture and destroyed the line between cult and mainstream. Fans of even the most obscure films can communicate online with each other in vibrant communities. Although known for their big-budget blockbusters, Steven Spielberg and George Lucas have criticized the current Hollywood system of gambling everything on the opening weekend of these productions. Geoffrey Macnab of The Independent instead suggests that Hollywood look to capitalize on cult films, which have exploded in popularity on the internet. The rise of social media has been a boon to cult films. Sites such as Twitter have displaced traditional venues for fandom and courted controversy from cultural critics who are unamused by campy cult films. After a clip from one of his films went viral, director-producer Roger Corman made a distribution deal with YouTube. Found footage which had originally been distributed as cult VHS collections eventually went viral on YouTube, which opened them to new generations of fans. Films such as Birdemic (2008) and The Room (2003) gained quick, massive popularity, as prominent members of social networking sites discussed them. Their rise as "instant cult classics" bypasses the years of obscurity that most cult films labor under. In response, critics have described the use of viral marketing as astroturfing and an attempt to manufacture cult films. I. Q.
Hunter identifies a prefabricated cult film style which includes "deliberately, insultingly bad films", "slick exercises in dysfunction and alienation", and mainstream films "that sell themselves as worth obsessing over". Writing for NPR, Scott Tobias states that Don Coscarelli, whose previous films effortlessly attracted cult followings, has drifted into this realm. Tobias criticizes Coscarelli as trying too hard to appeal to cult audiences and sacrificing internal consistency for calculated quirkiness. Influenced by the successful online hype of The Blair Witch Project (1999), other films have attempted to draw online cult fandom with the use of prefabricated cult appeal. Snakes on a Plane (2006) is an example that attracted massive attention from curious fans. Uniquely, its cult following preceded the film's release and included speculative parodies of what fans imagined the film might be. This reached the point of convergence culture when fan speculation began to affect the film's production. Although it was proclaimed a cult film and a major game-changer before it was released, it failed either to win mainstream audiences or to maintain its cult following. In retrospect, critic Spencer Kornhaber would call it a serendipitous novelty and a footnote to a "more naive era of the Internet". However, it became influential in both marketing and titling. This trend of "instant cult classics" which are hailed yet fail to attain a lasting following is described by Matt Singer, who states that the phrase is an oxymoron. Cult films are often approached in terms of auteur theory, which states that the director's creative vision drives a film. This has fallen out of favor in academia, creating a disconnect between cult film fans and critics. Matt Hills states that auteur theory can help to create cult films; fans that see a film as continuing a director's creative vision are likely to accept it as cult. According to academic Greg Taylor, auteur theory also helped to popularize cult films when middlebrow audiences found an accessible way to approach avant-garde film criticism. Auteur theory provided an alternative culture for cult film fans while carrying the weight of scholarship. By requiring repeated viewings and extensive knowledge of details, auteur theory naturally appealed to cult film fans. Taylor further states that this was instrumental in allowing cult films to break through to the mainstream. Academic Joe Tompkins states that this auteurism is often highlighted when mainstream success occurs. This may take the place of – and even ignore – political readings of the director. Cult films and directors may be celebrated for their transgressive content, daring, and independence, but Tompkins argues that mainstream recognition requires that they be palatable to corporate interests who stand to gain much from the mainstreaming of cult film culture. While critics may champion revolutionary aspects of filmmaking and political interpretation, Hollywood studios and other corporate interests will instead highlight only the aspects that they wish to legitimize in their own films, such as sensational exploitation. Someone like George Romero, whose films are both transgressive and subversive, will have the transgressive aspects highlighted while the subversive aspects are ignored.
See also Cult video game List of cult films Sleeper hit Mark Kermode's Secrets of Cinema: Cult Movies List of cult television shows
5646
https://en.wikipedia.org/wiki/Constantinople
Constantinople
Constantinople (see other names) became the capital of the Roman Empire during the reign of Constantine the Great in 330. Following the collapse of the Western Roman Empire in the late 5th century, Constantinople remained the capital of the Eastern Roman Empire (also known as the Byzantine Empire; 330–1204 and 1261–1453), the Latin Empire (1204–1261), and the Ottoman Empire (1453–1922). Following the Turkish War of Independence, the Turkish capital moved to Ankara. Officially renamed Istanbul in the 1920s, the city is today the largest city and financial centre of Turkey and the largest city in Europe, straddling the Bosporus strait and lying in both Europe and Asia. In 324, after the Western and Eastern Roman Empires were reunited, the ancient city of Byzantium was selected to serve as the new capital of the Roman Empire, and the city was renamed Nova Roma, or 'New Rome', by Emperor Constantine the Great. On 11 May 330, it was renamed Constantinople and dedicated to Constantine. Constantinople is generally considered to be the center and the "cradle of Orthodox Christian civilization". From the mid-5th century to the early 13th century, Constantinople was the largest and wealthiest city in Europe. The city became famous for its architectural masterpieces, such as Hagia Sophia, the cathedral of the Eastern Orthodox Church, which served as the seat of the Ecumenical Patriarchate; the sacred Imperial Palace, where the emperors lived; the Hippodrome; the Golden Gate of the Land Walls; and opulent aristocratic palaces. The University of Constantinople was founded in the 5th century, and the city contained artistic and literary treasures before it was sacked in 1204 and 1453, including its vast Imperial Library, which held the remnants of the Library of Alexandria and had 100,000 volumes. The city was the home of the Ecumenical Patriarch of Constantinople and guardian of Christendom's holiest relics, such as the Crown of Thorns and the True Cross. Constantinople was famous for its massive and complex fortifications, which ranked among the most sophisticated defensive architecture of antiquity. The Theodosian Walls consisted of a double wall lying to the west of the first wall and a moat with palisades in front. Constantinople's location between the Golden Horn and the Sea of Marmara reduced the land area that needed defensive walls. The city was built intentionally to rival Rome, and it was claimed that several elevations within its walls matched Rome's 'seven hills'. The impenetrable defenses enclosed magnificent palaces, domes, and towers, the result of the prosperity Constantinople achieved as the gateway between two continents (Europe and Asia) and two seas (the Mediterranean and the Black Sea). Although besieged on numerous occasions by various armies, the defenses of Constantinople proved impenetrable for nearly nine hundred years. In 1204, however, the armies of the Fourth Crusade took and devastated the city, and for several decades its inhabitants resided under Latin occupation in a dwindling and depopulated city. In 1261 the Byzantine Emperor Michael VIII Palaiologos liberated the city, and after the restoration under the Palaiologos dynasty, it enjoyed a partial recovery. With the advent of the Ottoman Empire in 1299, the Byzantine Empire began to lose territories, and the city began to lose population. By the early 15th century, the Byzantine Empire was reduced to just Constantinople and its environs, along with Morea in Greece, making it an enclave inside the Ottoman Empire.
The city was finally besieged and conquered by the Ottoman Empire in 1453, remaining under its control until the early 20th century, after which it was renamed Istanbul under the Empire's successor state, Turkey. Names Before Constantinople According to Pliny the Elder in his Natural History, the first known name of a settlement on the site of Constantinople was Lygos, a settlement likely of Thracian origin founded between the 13th and 11th centuries BC. The site, according to the founding myth of the city, was abandoned by the time Greek settlers from the city-state of Megara founded Byzantium (, Byzántion) in around 657 BC, across from the town of Chalcedon on the Asiatic side of the Bosphorus. The origins of the name of Byzantion, more commonly known by the later Latin Byzantium, are not entirely clear, though some suggest it is of Thracian origin. The founding myth of the city has it told that the settlement was named after the leader of the Megarian colonists, Byzas. The later Byzantines of Constantinople themselves would maintain that the city was named in honor of two men, Byzas and Antes, though this was more likely just a play on the word Byzantion. The city was briefly renamed Augusta Antonina in the early 3rd century AD by the Emperor Septimius Severus (193–211), who razed the city to the ground in 196 for supporting a rival contender in the civil war and had it rebuilt in honor of his son Marcus Aurelius Antoninus (who succeeded him as Emperor), popularly known as Caracalla. The name appears to have been quickly forgotten and abandoned, and the city reverted to Byzantium/Byzantion after either the assassination of Caracalla in 217 or, at the latest, the fall of the Severan dynasty in 235. Names of Constantinople Byzantium took on the name of Constantinople (Greek: Κωνσταντινούπολις, romanized: Kōnstantinoupolis; "city of Constantine") after its refoundation under Roman emperor Constantine I, who transferred the capital of the Roman Empire to Byzantium in 330 and designated his new capital officially as Nova Roma () 'New Rome'. During this time, the city was also called 'Second Rome', 'Eastern Rome', and Roma Constantinopolitana (Latin for 'Constantinopolitan Rome'). As the city became the sole remaining capital of the Roman Empire after the fall of the West, and its wealth, population, and influence grew, the city also came to have a multitude of nicknames. As the largest and wealthiest city in Europe during the 4th–13th centuries and a center of culture and education of the Mediterranean basin, Constantinople came to be known by prestigious titles such as Basileuousa (Queen of Cities) and Megalopolis (the Great City) and was, in colloquial speech, commonly referred to as just Polis () 'the City' by Constantinopolitans and provincial Byzantines alike. In the language of other peoples, Constantinople was referred to just as reverently. The medieval Vikings, who had contacts with the empire through their expansion in eastern Europe (Varangians), used the Old Norse name Miklagarðr (from mikill 'big' and garðr 'city'), and later Miklagard and Miklagarth. In Arabic, the city was sometimes called Rūmiyyat al-Kubra (Great City of the Romans) and in Persian as Takht-e Rum (Throne of the Romans). In East and South Slavic languages, including in Kievan Rus', Constantinople has been referred to as Tsargrad (Царьград) or Carigrad, 'City of the Caesar (Emperor)', from the Slavonic words tsar ('Caesar' or 'King') and grad ('city'). 
This was presumably a calque on a Greek phrase such as Vasileos Polis, 'the city of the emperor [king]'. In Persian the city was also called Asitane (the Threshold of the State), and in Armenian, it was called Gosdantnubolis (City of Constantine). Modern names of the city The modern Turkish name for the city, İstanbul, derives from the Greek phrase eis tin Polin, meaning '(in)to the city'. This name was used in colloquial speech in Turkish alongside Kostantiniyye, the more formal adaptation of the original Constantinople, during the period of Ottoman rule, while western languages mostly continued to refer to the city as Constantinople until the early 20th century. In 1928, the Turkish alphabet was changed from Arabic script to Latin script. After that, as part of the Turkification movement, Turkey started to urge other countries to use Turkish names for Turkish cities, instead of other transliterations to Latin script that had been used in Ottoman times, and the city came to be known as Istanbul and its variations in most world languages. The name Constantinople is still used by members of the Eastern Orthodox Church in the title of one of their most important leaders, the Orthodox patriarch based in the city, referred to as "His Most Divine All-Holiness the Archbishop of Constantinople New Rome and Ecumenical Patriarch". In Greece today, the city is still called Konstantinoúpoli(s) or simply "the City". History Foundation of Byzantium Constantinople was founded by the Roman emperor Constantine I (272–337) in 324 on the site of an already-existing city, Byzantium, which was settled in the early days of Greek colonial expansion, in around 657 BC, by colonists of the city-state of Megara. This is the first major settlement that would develop on the site of later Constantinople, but the first known settlement was that of Lygos, referred to in Pliny's Natural History. Apart from this, little is known about this initial settlement. The site, according to the founding myth of the city, was abandoned by the time Greek settlers from the city-state of Megara founded Byzantium in around 657 BC, across from the town of Chalcedon on the Asiatic side of the Bosphorus. Hesychius of Miletus wrote that some "claim that people from Megara, who derived their descent from Nisos, sailed to this place under their leader Byzas, and invent the fable that his name was attached to the city". Some versions of the founding myth say Byzas was the son of a local nymph, while others say he was conceived by one of Zeus' daughters and Poseidon. Hesychius also gives alternate versions of the city's founding legend, which he attributed to old poets and writers: "It is said that the first Argives, after having received this prophecy from Pythia, 'Blessed are those who will inhabit that holy city, a narrow strip of the Thracian shore at the mouth of the Pontos, where two pups drink of the gray sea, where fish and stag graze on the same pasture,' set up their dwellings at the place where the rivers Kydaros and Barbyses have their estuaries, one flowing from the north, the other from the west, and merging with the sea at the altar of the nymph called Semestre." The city maintained independence as a city-state until it was annexed into the Persian Empire by Darius I in 512 BC, who saw the site as the optimal location to construct a pontoon bridge crossing into Europe, as Byzantium was situated at the narrowest point in the Bosphorus strait.
Persian rule lasted until 478 BC when, as part of the Greek counterattack to the Second Persian invasion of Greece, a Greek army led by the Spartan general Pausanias captured the city, which remained an independent, yet subordinate, city under the Athenians, and later under the Spartans after 411 BC. A farsighted treaty with the emergent power of Rome, which stipulated tribute in exchange for independent status, allowed it to enter Roman rule unscathed. This treaty would pay dividends retrospectively as Byzantium would maintain this independent status, and prosper under peace and stability in the Pax Romana, for nearly three centuries until the late 2nd century AD. Byzantium was never a major influential city-state like Athens, Corinth or Sparta, but the city enjoyed relative peace and steady growth as a prosperous trading city, owing to its remarkable position. The site lay astride the land route from Europe to Asia and the seaway from the Black Sea to the Mediterranean, and had in the Golden Horn an excellent and spacious harbor. Already then, in Greek and early Roman times, Byzantium was famous for the strategic geographic position that made it difficult to besiege and capture, and its position at the crossroads of the Asiatic-European trade route over land and as the gateway between the Mediterranean and Black Seas made it too valuable a settlement to abandon, as Emperor Septimius Severus later realized when he razed the city to the ground for supporting Pescennius Niger's claim to the imperial throne. It was a move greatly criticized by the contemporary consul and historian Cassius Dio, who said that Severus had destroyed "a strong Roman outpost and a base of operations against the barbarians from Pontus and Asia". He would later rebuild Byzantium towards the end of his reign, during which it was briefly renamed Augusta Antonina, fortifying it with a new city wall in his name, the Severan Wall. 324–337: The refoundation as Constantinople Constantine had altogether more colourful plans. Having restored the unity of the Empire, and being in the course of major governmental reforms as well as sponsoring the consolidation of the Christian church, he was well aware that Rome was an unsatisfactory capital. Rome was too far from the frontiers, and hence from the armies and the imperial courts, and it offered an undesirable playground for disaffected politicians. Yet it had been the capital of the state for over a thousand years, and it might have seemed unthinkable to suggest that the capital be moved to a different location. Nevertheless, Constantine identified the site of Byzantium as the right place: a place where an emperor could sit, readily defended, with easy access to the Danube or the Euphrates frontiers, his court supplied from the rich gardens and sophisticated workshops of Roman Asia, his treasuries filled by the wealthiest provinces of the Empire. Constantinople was built over six years, and consecrated on 11 May 330. Constantine divided the expanded city, like Rome, into 14 regions, and ornamented it with public works worthy of an imperial metropolis. Yet, at first, Constantine's new Rome did not have all the dignities of old Rome. It possessed a proconsul, rather than an urban prefect. It had no praetors, tribunes, or quaestors. Although it did have senators, they held the title clarus, not clarissimus, like those of Rome. It also lacked the panoply of other administrative offices regulating the food supply, police, statues, temples, sewers, aqueducts, or other public works.
The new programme of building was carried out in great haste: columns, marbles, doors, and tiles were taken wholesale from the temples of the empire and moved to the new city. In similar fashion, many of the greatest works of Greek and Roman art were soon to be seen in its squares and streets. The emperor stimulated private building by promising householders gifts of land from the imperial estates in Asiana and Pontica and on 18 May 332 he announced that, as in Rome, free distributions of food would be made to the citizens. At the time, the amount is said to have been 80,000 rations a day, doled out from 117 distribution points around the city. Constantine laid out a new square at the centre of old Byzantium, naming it the Augustaeum. The new senate-house (or Curia) was housed in a basilica on the east side. On the south side of the great square was erected the Great Palace of the Emperor with its imposing entrance, the Chalke, and its ceremonial suite known as the Palace of Daphne. Nearby was the vast Hippodrome for chariot-races, seating over 80,000 spectators, and the famed Baths of Zeuxippus. At the western entrance to the Augustaeum was the Milion, a vaulted monument from which distances were measured across the Eastern Roman Empire. From the Augustaeum led a great street, the Mese, lined with colonnades. As it descended the First Hill of the city and climbed the Second Hill, it passed on the left the Praetorium or law-court. Then it passed through the oval Forum of Constantine where there was a second Senate-house and a high column with a statue of Constantine himself in the guise of Helios, crowned with a halo of seven rays and looking toward the rising sun. From there, the Mese passed on and through the Forum Tauri and then the Forum Bovis, and finally up the Seventh Hill (or Xerolophus) and through to the Golden Gate in the Constantinian Wall. After the construction of the Theodosian Walls in the early 5th century, it was extended to the new Golden Gate, reaching a total length of seven Roman miles. After the construction of the Theodosian Walls, Constantinople consisted of an area approximately the size of Old Rome within the Aurelian walls, or some 1,400 ha. 337–529: Constantinople during the Barbarian Invasions and the fall of the West The importance of Constantinople increased, but it was gradual. From the death of Constantine in 337 to the accession of Theodosius I, emperors had been resident only in the years 337–338, 347–351, 358–361, 368–369. Its status as a capital was recognized by the appointment of the first known Urban Prefect of the City Honoratus, who held office from 11 December 359 until 361. The urban prefects had concurrent jurisdiction over three provinces each in the adjacent dioceses of Thrace (in which the city was located), Pontus and Asia comparable to the 100-mile extraordinary jurisdiction of the prefect of Rome. The emperor Valens, who hated the city and spent only one year there, nevertheless built the Palace of Hebdomon on the shore of the Propontis near the Golden Gate, probably for use when reviewing troops. All the emperors up to Zeno and Basiliscus were crowned and acclaimed at the Hebdomon. 
Theodosius I founded the Church of John the Baptist to house the skull of the saint (today preserved at the Topkapı Palace), put up a memorial pillar to himself in the Forum of Taurus, and turned the ruined temple of Aphrodite into a coach house for the Praetorian Prefect; Arcadius built a new forum named after himself on the Mese, near the walls of Constantine. After the shock of the Battle of Adrianople in 378, in which the emperor Valens with the flower of the Roman armies was destroyed by the Visigoths within a few days' march, the city looked to its defences, and in 413–414 Theodosius II built the 18-metre (60-foot)-tall triple-wall fortifications, which were not to be breached until the coming of gunpowder. Theodosius also founded a University near the Forum of Taurus, on 27 February 425. Uldin, a prince of the Huns, appeared on the Danube about this time and advanced into Thrace, but he was deserted by many of his followers, who joined with the Romans in driving their king back north of the river. Subsequent to this, new walls were built to defend the city and the fleet on the Danube improved. After the barbarians overran the Western Roman Empire, Constantinople became the indisputable capital city of the Roman Empire. Emperors were no longer peripatetic between various court capitals and palaces. They remained in their palace in the Great City and sent generals to command their armies. The wealth of the eastern Mediterranean and western Asia flowed into Constantinople. 527–565: Constantinople in the Age of Justinian The emperor Justinian I (527–565) was known for his successes in war, for his legal reforms and for his public works. It was from Constantinople that his expedition for the reconquest of the former Diocese of Africa set sail on or about 21 June 533. Before their departure, the ship of the commander Belisarius was anchored in front of the Imperial palace, and the Patriarch offered prayers for the success of the enterprise. After the victory, in 534, the Temple treasure of Jerusalem, looted by the Romans in AD 70 and taken to Carthage by the Vandals after their sack of Rome in 455, was brought to Constantinople and deposited for a time, perhaps in the Church of St Polyeuctus, before being returned to Jerusalem in either the Church of the Resurrection or the New Church. Chariot-racing had been important in Rome for centuries. In Constantinople, the hippodrome became over time increasingly a place of political significance. It was where (as a shadow of the popular elections of old Rome) the people by acclamation showed their approval of a new emperor, and also where they openly criticized the government, or clamoured for the removal of unpopular ministers. In the time of Justinian, public order in Constantinople became a critical political issue. Throughout the late Roman and early Byzantine periods, Christianity was resolving fundamental questions of identity, and the dispute between the orthodox and the monophysites became the cause of serious disorder, expressed through allegiance to the chariot-racing parties of the Blues and the Greens. The partisans of the Blues and the Greens were said to affect untrimmed facial hair, head hair shaved at the front and grown long at the back, and wide-sleeved tunics tight at the wrist; and to form gangs to engage in night-time muggings and street violence. At last these disorders took the form of a major rebellion of 532, known as the "Nika" riots (from the battle-cry of "Conquer!" of those involved). 
Fires started by the Nika rioters consumed the Theodosian basilica of Hagia Sophia (Holy Wisdom), the city's cathedral, which lay to the north of the Augustaeum and had itself replaced the Constantinian basilica founded by Constantius II to replace the first Byzantine cathedral, Hagia Irene (Holy Peace). Justinian commissioned Anthemius of Tralles and Isidore of Miletus to replace it with a new and incomparable Hagia Sophia. This was the great cathedral of the city, whose dome was said to be held aloft by God alone, and which was directly connected to the palace so that the imperial family could attend services without passing through the streets. The dedication took place on 26 December 537 in the presence of the emperor, who was later reported to have exclaimed, "O Solomon, I have outdone thee!" Hagia Sophia was served by 600 people, including 80 priests, and cost 20,000 pounds of gold to build. Justinian also had Anthemius and Isidore demolish and replace the original Church of the Holy Apostles and Hagia Irene built by Constantine with new churches under the same dedication. The Justinianic Church of the Holy Apostles was designed in the form of an equal-armed cross with five domes, and ornamented with beautiful mosaics. This church was to remain the burial place of the emperors from Constantine himself until the 11th century. When the city fell to the Turks in 1453, the church was demolished to make room for the tomb of Mehmet II the Conqueror. Justinian was also concerned with other aspects of the city's built environment, legislating against the abuse of laws prohibiting building within a certain distance of the sea front, in order to protect the view. During Justinian I's reign, the city's population reached about 500,000 people. However, the social fabric of Constantinople was also damaged by the onset of the Plague of Justinian between 541 and 542 AD. It killed perhaps 40% of the city's inhabitants. Survival, 565–717: Constantinople during the Byzantine Dark Ages In the early 7th century, the Avars and later the Bulgars overwhelmed much of the Balkans, threatening Constantinople with attack from the west. Simultaneously, the Persian Sassanids overwhelmed the Prefecture of the East and penetrated deep into Anatolia. Heraclius, son of the exarch of Africa, set sail for the city and assumed the throne. He found the military situation so dire that he is said to have contemplated withdrawing the imperial capital to Carthage, but relented after the people of Constantinople begged him to stay. The citizens lost their right to free grain in 618 when Heraclius realized that the city could no longer be supplied from Egypt as a result of the Persian wars: the population fell substantially as a result. While the city withstood a siege by the Sassanids and Avars in 626, Heraclius campaigned deep into Persian territory and briefly restored the status quo in 628, when the Persians surrendered all their conquests. However, further sieges followed the Arab conquests, first from 674 to 678 and then from 717 to 718. The Theodosian Walls kept the city impenetrable from the land, while a newly discovered incendiary substance known as Greek fire allowed the Byzantine navy to destroy the Arab fleets and keep the city supplied. In the second siege, the second ruler of Bulgaria, Khan Tervel, rendered decisive help. He was called Saviour of Europe.
717–1025: Constantinople during the Macedonian Renaissance In the 730s Leo III carried out extensive repairs of the Theodosian walls, which had been damaged by frequent and violent attacks; this work was financed by a special tax on all the subjects of the Empire. Theodora, widow of the Emperor Theophilus (died 842), acted as regent during the minority of her son Michael III, who was said to have been introduced to dissolute habits by her brother Bardas. When Michael assumed power in 856, he became known for excessive drunkenness, appeared in the hippodrome as a charioteer and burlesqued the religious processions of the clergy. He removed Theodora from the Great Palace to the Carian Palace and later to the monastery of Gastria, but, after the death of Bardas, she was released to live in the palace of St Mamas; she also had a rural residence at the Anthemian Palace, where Michael was assassinated in 867. In 860, an attack was made on the city by a new principality set up a few years earlier at Kiev by Askold and Dir, two Varangian chiefs: Two hundred small vessels passed through the Bosporus and plundered the monasteries and other properties on the suburban Princes' Islands. Oryphas, the admiral of the Byzantine fleet, alerted the emperor Michael, who promptly put the invaders to flight; but the suddenness and savagery of the onslaught made a deep impression on the citizens. In 980, the emperor Basil II received an unusual gift from Prince Vladimir of Kiev: 6,000 Varangian warriors, which Basil formed into a new bodyguard known as the Varangian Guard. They were known for their ferocity, honour, and loyalty. It is said that, in 1038, they were dispersed in winter quarters in the Thracesian Theme when one of their number attempted to violate a countrywoman, but in the struggle she seized his sword and killed him; instead of taking revenge, however, his comrades applauded her conduct, compensated her with all his possessions, and exposed his body without burial as if he had committed suicide. However, following the death of an Emperor, they became known also for plunder in the Imperial palaces. Later in the 11th century the Varangian Guard became dominated by Anglo-Saxons who preferred this way of life to subjugation by the new Norman kings of England. The Book of the Eparch, which dates to the 10th century, gives a detailed picture of the city's commercial life and its organization at that time. The corporations in which the tradesmen of Constantinople were organised were supervised by the Eparch, who regulated such matters as production, prices, import, and export. Each guild had its own monopoly, and tradesmen might not belong to more than one. It is an impressive testament to the strength of tradition how little these arrangements had changed since the office, then known by the Latin version of its title, had been set up in 330 to mirror the urban prefecture of Rome. In the 9th and 10th centuries, Constantinople had a population of between 500,000 and 800,000. Iconoclast controversy in Constantinople In the 8th and 9th centuries, the iconoclast movement caused serious political unrest throughout the Empire. The emperor Leo III issued a decree in 726 against images, and ordered the destruction of a statue of Christ over one of the doors of the Chalke, an act that was fiercely resisted by the citizens. 
Constantine V convoked a church council in 754, which condemned the worship of images, after which many treasures were broken, burned, or painted over with depictions of trees, birds or animals: One source refers to the church of the Holy Virgin at Blachernae as having been transformed into a "fruit store and aviary". Following the death of her husband Leo IV in 780, the empress Irene restored the veneration of images through the agency of the Second Council of Nicaea in 787. The iconoclast controversy returned in the early 9th century, only to be resolved once more in 843 during the regency of Empress Theodora, who restored the icons. These controversies contributed to the deterioration of relations between the Western and the Eastern Churches. 1025–1081: Constantinople after Basil II In the late 11th century catastrophe struck with the unexpected and calamitous defeat of the imperial armies at the Battle of Manzikert in Armenia in 1071. The Emperor Romanus Diogenes was captured. The peace terms demanded by Alp Arslan, sultan of the Seljuk Turks, were not excessive, and Romanus accepted them. On his release, however, Romanus found that enemies had placed their own candidate on the throne in his absence; he surrendered to them and suffered death by torture, and the new ruler, Michael VII Ducas, refused to honour the treaty. In response, the Turks began to move into Anatolia in 1073. The collapse of the old defensive system meant that they met no opposition, and the empire's resources were distracted and squandered in a series of civil wars. Thousands of Turkoman tribesmen crossed the unguarded frontier and moved into Anatolia. By 1080, a huge area had been lost to the Empire, and the Turks were within striking distance of Constantinople. 1081–1185: Constantinople under the Comneni Under the Comnenian dynasty (1081–1185), Byzantium staged a remarkable recovery. In 1090–91, the nomadic Pechenegs reached the walls of Constantinople, where Emperor Alexius I with the aid of the Kipchaks annihilated their army. In response to a call for aid from Alexius, the First Crusade assembled at Constantinople in 1096, but declining to put itself under Byzantine command set out for Jerusalem on its own account. John II built the monastery of the Pantocrator (Almighty) with a hospital for the poor of 50 beds. With the restoration of firm central government, the empire became fabulously wealthy. The population was rising (estimates for Constantinople in the 12th century vary from some 100,000 to 500,000), and towns and cities across the realm flourished. Meanwhile, the volume of money in circulation dramatically increased. This was reflected in Constantinople by the construction of the Blachernae palace, the creation of brilliant new works of art, and general prosperity at this time: an increase in trade, made possible by the growth of the Italian city-states, may have helped the growth of the economy. It is certain that the Venetians and others were active traders in Constantinople, making a living out of shipping goods between the Crusader Kingdoms of Outremer and the West, while also trading extensively with Byzantium and Egypt. The Venetians had factories on the north side of the Golden Horn, and large numbers of westerners were present in the city throughout the 12th century. Toward the end of Manuel I Komnenos's reign, the number of foreigners in the city reached about 60,000–80,000 people out of a total population of about 400,000 people. 
In 1171, Constantinople also contained a small community of 2,500 Jews. In 1182, most Latin (Western European) inhabitants of Constantinople were massacred. In artistic terms, the 12th century was a very productive period. There was a revival in the mosaic art, for example: Mosaics became more realistic and vivid, with an increased emphasis on depicting three-dimensional forms. There was an increased demand for art, with more people having access to the necessary wealth to commission and pay for such work. 1185–1261: Constantinople during the Imperial Exile On 25 July 1197, Constantinople was struck by a severe fire which burned the Latin Quarter and the area around the Gate of the Droungarios () on the Golden Horn. Nevertheless, the destruction wrought by the 1197 fire paled in comparison with that brought by the Crusaders. In the course of a plot between Philip of Swabia, Boniface of Montferrat and the Doge of Venice, the Fourth Crusade was, despite papal excommunication, diverted in 1203 against Constantinople, ostensibly promoting the claims of Alexios IV Angelos brother-in-law of Philip, son of the deposed emperor Isaac II Angelos. The reigning emperor Alexios III Angelos had made no preparation. The Crusaders occupied Galata, broke the defensive chain protecting the Golden Horn, and entered the harbour, where on 27 July they breached the sea walls: Alexios III fled. But the new Alexios IV Angelos found the Treasury inadequate, and was unable to make good the rewards he had promised to his western allies. Tension between the citizens and the Latin soldiers increased. In January 1204, the protovestiarius Alexios Murzuphlos provoked a riot, it is presumed, to intimidate Alexios IV, but whose only result was the destruction of the great statue of Athena Promachos, the work of Phidias, which stood in the principal forum facing west. In February 1204, the people rose again: Alexios IV was imprisoned and executed, and Murzuphlos took the purple as Alexios V Doukas. He made some attempt to repair the walls and organise the citizenry, but there had been no opportunity to bring in troops from the provinces and the guards were demoralised by the revolution. An attack by the Crusaders on 6 April failed, but a second from the Golden Horn on 12 April succeeded, and the invaders poured in. Alexios V fled. The Senate met in Hagia Sophia and offered the crown to Theodore Lascaris, who had married into the Angelos dynasty, but it was too late. He came out with the Patriarch to the Golden Milestone before the Great Palace and addressed the Varangian Guard. Then the two of them slipped away with many of the nobility and embarked for Asia. By the next day the Doge and the leading Franks were installed in the Great Palace, and the city was given over to pillage for three days. Sir Steven Runciman, historian of the Crusades, wrote that the sack of Constantinople is "unparalleled in history". For the next half-century, Constantinople was the seat of the Latin Empire. Under the rulers of the Latin Empire, the city declined, both in population and the condition of its buildings. Alice-Mary Talbot cites an estimated population for Constantinople of 400,000 inhabitants; after the destruction wrought by the Crusaders on the city, about one third were homeless, and numerous courtiers, nobility, and higher clergy, followed various leading personages into exile. "As a result Constantinople became seriously depopulated," Talbot concludes. 
The Latins took over at least 20 churches and 13 monasteries, most prominently the Hagia Sophia, which became the cathedral of the Latin Patriarch of Constantinople. It is to these that E.H. Swift attributed the construction of a series of flying buttresses to shore up the walls of the church, which had been weakened over the centuries by earthquake tremors. However, this act of maintenance is an exception: for the most part, the Latin occupiers were too few to maintain all of the buildings, either secular and sacred, and many became targets for vandalism or dismantling. Bronze and lead were removed from the roofs of abandoned buildings and melted down and sold to provide money to the chronically under-funded Empire for defense and to support the court; Deno John Geanokoplos writes that "it may well be that a division is suggested here: Latin laymen stripped secular buildings, ecclesiastics, the churches." Buildings were not the only targets of officials looking to raise funds for the impoverished Latin Empire: the monumental sculptures which adorned the Hippodrome and fora of the city were pulled down and melted for coinage. "Among the masterpieces destroyed, writes Talbot, "were a Herakles attributed to the fourth-century B.C. sculptor Lysippos, and monumental figures of Hera, Paris, and Helen." The Nicaean emperor John III Vatatzes reportedly saved several churches from being dismantled for their valuable building materials; by sending money to the Latins "to buy them off" (exonesamenos), he prevented the destruction of several churches. According to Talbot, these included the churches of Blachernae, Rouphinianai, and St. Michael at Anaplous. He also granted funds for the restoration of the Church of the Holy Apostles, which had been seriously damaged in an earthquake. The Byzantine nobility scattered, many going to Nicaea, where Theodore Lascaris set up an imperial court, or to Epirus, where Theodore Angelus did the same; others fled to Trebizond, where one of the Comneni had already with Georgian support established an independent seat of empire. Nicaea and Epirus both vied for the imperial title, and tried to recover Constantinople. In 1261, Constantinople was captured from its last Latin ruler, Baldwin II, by the forces of the Nicaean emperor Michael VIII Palaiologos under the command of Caesar Alexios Strategopoulos. 1261–1453: Palaiologan Era and the Fall of Constantinople Although Constantinople was retaken by Michael VIII Palaiologos, the Empire had lost many of its key economic resources, and struggled to survive. The palace of Blachernae in the north-west of the city became the main Imperial residence, with the old Great Palace on the shores of the Bosporus going into decline. When Michael VIII captured the city, its population was 35,000 people, but, by the end of his reign, he had succeeded in increasing the population to about 70,000 people. The Emperor achieved this by summoning former residents who had fled the city when the crusaders captured it, and by relocating Greeks from the recently reconquered Peloponnese to the capital. Military defeats, civil wars, earthquakes and natural disasters were joined by the Black Death, which in 1347 spread to Constantinople, exacerbated the people's sense that they were doomed by God. In 1453, when the Ottoman Turks captured the city, it contained approximately 50,000 people. Constantinople was conquered by the Ottoman Empire on 29 May 1453. 
Mehmed II intended to complete his father's mission and conquer Constantinople for the Ottomans. In 1452 he reached peace treaties with Hungary and Venice. He also began the construction of the Boğazkesen (later called the Rumelihisarı), a fortress at the narrowest point of the Bosphorus Strait, in order to restrict passage between the Black and Mediterranean seas. Mehmed then tasked the Hungarian gunsmith Urban with both arming Rumelihisarı and building cannon powerful enough to bring down the walls of Constantinople. By March 1453 Urban's cannon had been transported from the Ottoman capital of Edirne to the outskirts of Constantinople. In April, having quickly seized Byzantine coastal settlements along the Black Sea and Sea of Marmara, Ottoman regiments in Rumelia and Anatolia assembled outside the Byzantine capital. Their fleet moved from Gallipoli to nearby Diplokionion, and the sultan himself set out to meet his army. The Ottomans were commanded by 21-year-old Ottoman Sultan Mehmed II. The conquest of Constantinople followed a seven-week siege which had begun on 6 April 1453. The Empire fell on 29 May 1453. 1453–1930: Ottoman and Republican Kostantiniyye The Christian Orthodox city of Constantinople was now under Ottoman control. When Mehmed II finally entered Constantinople through the Gate of Charisius (today known as Edirnekapı or Adrianople Gate), he immediately rode his horse to the Hagia Sophia, where after the doors were axed down, the thousands of citizens hiding within the sanctuary were raped and enslaved, often with slavers fighting each other to the death over particularly beautiful and valuable slave girls. Moreover, symbols of Christianity everywhere were vandalized or destroyed, including the crucifix of Hagia Sophia which was paraded through the sultan's camps. Afterwards he ordered his soldiers to stop hacking at the city's valuable marbles and 'be satisfied with the booty and captives; as for all the buildings, they belonged to him'. He ordered that an imam meet him there in order to chant the adhan thus transforming the Orthodox cathedral into a Muslim mosque, solidifying Islamic rule in Constantinople. Mehmed's main concern with Constantinople had to do with consolidating control over the city and rebuilding its defenses. After 45,000 captives were marched from the city, building projects were commenced immediately after the conquest, which included the repair of the walls, construction of the citadel, and building a new palace. Mehmed issued orders across his empire that Muslims, Christians, and Jews should resettle the city, with Christians and Jews required to pay jizya and Muslims pay Zakat; he demanded that five thousand households needed to be transferred to Constantinople by September. From all over the Islamic empire, prisoners of war and deported people were sent to the city: these people were called "Sürgün" in Turkish (). Two centuries later, Ottoman traveler Evliya Çelebi gave a list of groups introduced into the city with their respective origins. Even today, many quarters of Istanbul, such as Aksaray, Çarşamba, bear the names of the places of origin of their inhabitants. However, many people escaped again from the city, and there were several outbreaks of plague, so that in 1459 Mehmed allowed the deported Greeks to come back to the city. 
Culture Constantinople was the largest and richest urban center in the Eastern Mediterranean Sea during the late Eastern Roman Empire, mostly as a result of its strategic position commanding the trade routes between the Aegean Sea and the Black Sea. It would remain the capital of the eastern, Greek-speaking empire for over a thousand years and in some ways is the nexus of Byzantine art production. At its peak, roughly corresponding to the Middle Ages, it was one of the richest and largest cities in Europe. It exerted a powerful cultural pull and dominated much of the economic life in the Mediterranean. Visitors and merchants were especially struck by the beautiful monasteries and churches of the city, in particular the Hagia Sophia, or the Church of Holy Wisdom. According to the Russian 14th-century traveler Stephen of Novgorod: "As for Hagia Sophia, the human mind can neither tell it nor make description of it." The city was especially important for preserving in its libraries manuscripts of Greek and Latin authors throughout a period when instability and disorder caused their mass destruction in western Europe and north Africa. On the city's fall, thousands of these were brought by refugees to Italy and played a key part in stimulating the Renaissance and the transition to the modern world. The cumulative influence of the city on the west, over the many centuries of its existence, is incalculable. In terms of technology, art and culture, as well as sheer size, Constantinople was without parallel anywhere in Europe for a thousand years. Many languages were spoken in Constantinople. A 16th-century Chinese geographical treatise specifically recorded that there were translators living in the city, indicating that it was a multilingual, multicultural, cosmopolitan city. Women in literature Constantinople was home to the first known Western Armenian journal published and edited by a woman (Elpis Kesaratsian). Entering circulation in 1862, Kit'arr or Guitar stayed in print for only seven months. Female writers who openly expressed their desires were viewed as immodest, but this changed slowly as journals began to publish more "women's sections". In the 1880s, Matteos Mamurian invited Srpouhi Dussap to submit essays for Arevelian Mamal. According to Zaruhi Galemkearian's autobiography, she was told to write about women's place in the family and home after she published two volumes of poetry in the 1890s. By 1900, several Armenian journals had started to include works by female contributors, including the Constantinople-based Tsaghik. Markets Even before Constantinople was founded, the markets of Byzantion were mentioned first by Xenophon and then by Theopompus, who wrote that Byzantians "spent their time at the market and the harbour". In Justinian's age the Mese street running across the city from east to west was a daily market. Procopius claimed "more than 500 prostitutes" did business along the market street. Ibn Battuta, who traveled to the city in 1325, wrote of the bazaars of "Astanbul", in which the "majority of the artisans and salespeople in them are women". Architecture and Coinage The Byzantine Empire used Roman and Greek architectural models and styles to create its own unique type of architecture. The influence of Byzantine architecture and art can be seen in the copies taken from it throughout Europe. Particular examples include St Mark's Basilica in Venice, the basilicas of Ravenna, and many churches throughout the Slavic East.
Also, alone in Europe until the 13th-century Italian florin, the Empire continued to produce sound gold coinage, the solidus of Diocletian becoming the bezant prized throughout the Middle Ages. Its city walls were much imitated (for example, see Caernarfon Castle) and its urban infrastructure was moreover a marvel throughout the Middle Ages, keeping alive the art, skill and technical expertise of the Roman Empire. In the Ottoman period Islamic architecture and symbolism were used. Great bathhouses were built in Byzantine centers such as Constantinople and Antioch. Religion Constantine's foundation gave prestige to the Bishop of Constantinople, who eventually came to be known as the Ecumenical Patriarch, and made it a prime center of Christianity alongside Rome. This contributed to cultural and theological differences between Eastern and Western Christianity eventually leading to the Great Schism that divided Western Catholicism from Eastern Orthodoxy from 1054 onwards. Constantinople is also of great religious importance to Islam, as the conquest of Constantinople is one of the signs of the End time in Islam. Education There were many institutions in ancient Constantinople such as the Imperial University of Constantinople, sometimes known as the University of the Palace Hall of Magnaura (), an Eastern Roman educational institution that could trace its corporate origins to 425 AD, when the emperor Theodosius II founded the Pandidacterium (). Media In the past the Bulgarian newspapers in the late Ottoman period were Makedoniya, Napredŭk, and Pravo. International status The city acted as a defence for the eastern provinces of the old Roman Empire against the barbarian invasions of the 5th century. The 18-meter-tall walls built by Theodosius II were, in essence, impregnable to the barbarians coming from south of the Danube river, who found easier targets to the west rather than the richer provinces to the east in Asia. From the 5th century, the city was also protected by the Anastasian Wall, a 60-kilometer chain of walls across the Thracian peninsula. Many scholars argue that these sophisticated fortifications allowed the east to develop relatively unmolested while Ancient Rome and the west collapsed. Constantinople's fame was such that it was described even in contemporary Chinese histories, the Old and New Book of Tang, which mentioned its massive walls and gates as well as a purported clepsydra mounted with a golden statue of a man. The Chinese histories even related how the city had been besieged in the 7th century by Muawiyah I and how he exacted tribute in a peace settlement. See also People from Constantinople List of people from Constantinople Secular buildings and monuments Augustaion Column of Justinian Basilica Cistern Column of Marcian Bucoleon Palace Horses of Saint Mark Obelisk of Theodosius Serpent Column Walled Obelisk Palace of Lausus Cistern of Philoxenos Palace of the Porphyrogenitus Prison of Anemas Valens Aqueduct Churches, monasteries and mosques Church of Saint Thekla of the Palace of Blachernae Church of Myrelaion Chora Church Church of Saints Sergius and Bacchus Church of the Holy Apostles Church of St. 
Polyeuctus Monastery of Christ Pantepoptes Lips Monastery Monastery of the Christ the Benefactor Hagia Irene Saint John the Forerunner by-the-Dome Church of Theotokos Kyriotissa Church of Saint Andrew in Krisei Nea Ekklesia Pammakaristos Church Stoudios Monastery Toklu Dede Mosque Church of Saint Theodore Monastery of the Pantokrator Unnamed Mosque established during Byzantine times for visiting Muslim dignitaries Miscellaneous Ahmed Bican Yazıcıoğlu Byzantine calendar Byzantine silk Eparch of Constantinople (List of eparchs) Sieges of Constantinople Third Rome Thracia Timeline of Istanbul history
5647
https://en.wikipedia.org/wiki/Columbus
Columbus
Columbus is a Latinized version of the Italian surname "Colombo". It most commonly refers to: Christopher Columbus (1451–1506), the Italian explorer Columbus, Ohio, the capital city of the U.S. state of Ohio Columbus, Georgia, the 2nd-largest city in the U.S. State of Georgia Columbus may also refer to: Places Extraterrestrial Columbus (crater), a crater on Mars Columbus (ISS module), the European module for the International Space Station Columbus (spacecraft), a program to develop a European space station 1986–1991 Italy Columbus (Rome), a residential district United States Columbus, Arkansas Columbus, Georgia, the 119th-largest city in the United States, and the 2nd-largest in Georgia after Atlanta Columbus, Illinois Columbus, Indiana, known for modern architecture Columbus, Kansas Columbus, Kentucky Columbus, Minnesota Columbus, Mississippi Columbus, Missouri Columbus, Montana Columbus, Nebraska Columbus, New Jersey Columbus, New Mexico Columbus, New York Columbus, North Carolina Columbus, North Dakota Columbus, Ohio, the largest city in the United States with this name Columbus, Texas Columbus, Wisconsin Columbus (town), Wisconsin Columbus Avenue (disambiguation) Columbus Circle, a traffic circle in Manhattan, New York Columbus City (disambiguation) Columbus Township (disambiguation) Persons with the name Forename Columbus Caldwell (1830–1908), American politician Columbus Germain (1827–1880), American politician Columbus Short (born 1982), American choreographer and actor Surname Bartholomew Columbus (c. 1461–1515), Christopher Columbus's younger brother Chris Columbus (filmmaker) (born 1958), American filmmaker Diego Columbus (1479/80–1526), Christopher Columbus' eldest son Ferdinand Columbus (1488–1539), Christopher Columbus' second son Scott Columbus (1956–2011), long-time drummer for the heavy metal band Manowar Arts, entertainment, and media Films Columbus (2015 film), an Indian comedy, subtitled "Discovering Love" Columbus (2017 film), an American drama set amidst the architecture of Columbus, Indiana Columbus (Star Trek), a shuttlecraft in the Star Trek series Music Opera Columbus (Egk), German-language opera by Egk, 1943 Columbus, 1855 opera by František Škroup Christophe Colomb, French-language opera by Milhaud often referred to as Columbus in English sources Other uses in music Columbus (Herzogenberg), large scale cantata by Heinrich von Herzogenberg 1870 "Colombus", song by Mary Black from No Frontiers "Columbus" (song), a song by the band Kent from their album Tillbaka till samtiden Christopher Columbus, pastiche of music by Offenbach to a new English libretto by Don White recorded by the Opera Rara label in 1977 Other uses in arts, entertainment, and media Columbus (novel), a 1941 novel about Christopher Columbus by Rafael Sabatini Columbus (Bartholdi), a statue depicting Christopher Columbus by Frédéric Auguste Bartholdi, in Providence, Rhode Island, US Columbus Edwards, the character known as Lum of Lum and Abner Brands and enterprises COLUMBUS, ab initio quantum chemistry software ColumBus, former name of Howard Transit in Howard County, Maryland Columbus Communications, a cable television and broadband speed Internet service provider in the Caribbean region Columbus Salame, an American food processing company Columbus Tubing, an Italian manufacturer of bicycle frame tubing Columbus Buggy Company, an American automotive manufacturer from 1875 to 1913 Ships Columbus (1824), a disposable ship built to transport lumber from North America to Britain MS Columbus, a cruise 
ship owned by Plantours & Partner GmbH MV Columbus, a cruise ship owned by Seajets SS Christopher Columbus, Great Lakes excursion liner (1893–1933) SS City of Columbus, a passenger steamer that sailed from Boston to Savannah and sank off Martha's Vineyard in 1884 SS Columbus (1873), an American merchantman converted in 1878 into the Russian cruiser Asia SS Columbus (1924), a transatlantic ocean liner for the North German Lloyd steamship line USS Columbus, various ships of the US Navy Other uses Columbus hops, a variety of hops Generation of Columbuses, a generation of Poles born ca. 1920, who had to fight twenty years later Columbus (shopping centre), a shopping centre in Vuosaari, Helsinki, Finland See also Christopher Columbus (disambiguation) Columbus City Hall (disambiguation) Columba Columbia (disambiguation) Columbus Day List of places named for Christopher Columbus
5648
https://en.wikipedia.org/wiki/Cornwall
Cornwall
Cornwall is a ceremonial county in South West England. It is recognised as one of the Celtic nations and is the homeland of the Cornish people. The county is bordered by the Atlantic Ocean to the north and west, Devon to the east, and the English Channel to the south. The largest settlement is Falmouth, and the county town is Truro. The county is rural, with a population of 568,210. The largest settlements are Falmouth (23,061), Newquay (20,342), St Austell (19,958), and Truro (18,766). Most of Cornwall forms a single unitary authority area, and the Isles of Scilly have a unique local authority. The Cornish nationalist movement disputes the constitutional status of Cornwall and seeks greater autonomy within the United Kingdom. Cornwall is the westernmost part of the South West Peninsula. Its coastline is characterised by steep cliffs and, to the south, several rias, including those at the mouths of the rivers Fal and Fowey. It includes the southernmost point on Great Britain, Lizard Point, and forms a large part of the Cornwall Area of Outstanding Natural Beauty. The AONB also includes Bodmin Moor, an upland outcrop of the Cornubian batholith granite formation. The county contains many short rivers; the longest is the Tamar, which forms the border with Devon. Cornwall had a minor Roman presence, and later formed part of the Brittonic kingdom of Dumnonia. From the 7th century, the Britons in the South West increasingly came into conflict with the expanding Anglo-Saxon kingdom of Wessex, eventually being pushed west of the Tamar; by the Norman Conquest Cornwall was administered as part of England, though it retained its own culture. The remainder of the Middle Ages and Early Modern Period were relatively settled, with Cornwall developing its tin mining industry and becoming a duchy in 1337. During the Industrial Revolution, the tin and copper mines were expanded and then declined, with china clay extraction becoming a major industry. Railways were built, leading to a growth of tourism in the 20th century. The Cornish language became extinct as a living community language at the end of the 18th century, but is now being revived. Name The modern English name Cornwall is a compound of two terms coming from two different language groups: Corn- originates from the Proto-Celtic "*karnu-" ("horn", presumed in reference to "headland"), and is cognate with the English word "horn" and Latin "cornu" (both deriving from the Proto-Indo-European *ker-). There may also have been an Iron Age group known as the Cornovii (i.e. "people of the horn or headland") that occupied the Cornish peninsula. -wall derives from the Old English exonym "wealh", meaning "foreigner", "slave" or "Brittonic-speaker" (as in Welsh). In the Cornish language, Cornwall is Kernow, which stems from the same Proto-Celtic root. History Prehistory, Roman and post-Roman periods Humans reoccupied Britain after the last Ice Age. The area now known as Cornwall was first inhabited in the Palaeolithic and Mesolithic periods. It continued to be occupied by Neolithic and then by Bronze Age people. Cornwall in the Late Bronze Age formed part of a maritime trading-networked culture which researchers have dubbed the Atlantic Bronze Age system, and which extended over most of the areas of present-day Ireland, England, Wales, France, Spain, and Portugal.
During the British Iron Age, Cornwall, like all of Britain (modern England, Scotland, Wales, and the Isle of Man), was inhabited by a Celtic-speaking people known as the Britons with distinctive cultural relations to neighbouring Brittany. The Common Brittonic spoken at this time eventually developed into several distinct tongues, including Cornish, Welsh, Breton, Cumbric and Pictish. The first written account of Cornwall comes from the 1st-century BC Sicilian Greek historian Diodorus Siculus, supposedly quoting or paraphrasing the 4th-century BC geographer Pytheas, who had sailed to Britain and described its tin trade. The identity of the merchants who carried the tin is unknown. It has been theorised that they were Phoenicians, but there is no evidence for this. Professor Timothy Champion, discussing Diodorus Siculus's comments on the tin trade, states that "Diodorus never actually says that the Phoenicians sailed to Cornwall. In fact, he says quite the opposite: the production of Cornish tin was in the hands of the natives of Cornwall, and its transport to the Mediterranean was organised by local merchants, by sea and then overland through France, passing through areas well outside Phoenician control." Isotopic evidence suggests that tin ingots found off the coast of Haifa, Israel, may have come from Cornwall. Tin, required for the production of bronze, was a relatively rare and precious commodity in the Bronze Age – hence the interest shown in Devon and Cornwall's tin resources. (For further discussion of tin mining see the section on the economy below.) In the first four centuries AD, during the time of Roman dominance in Britain, Cornwall was rather remote from the main centres of Romanisation – the nearest being Isca Dumnoniorum, modern-day Exeter. However, the Roman road system extended into Cornwall, with four significant Roman sites based on forts: Tregear near Nanstallon was discovered in the early 1970s, two others were found at Restormel Castle, Lostwithiel in 2007, and a fourth fort near Calstock was also discovered early in 2007. In addition, a Roman-style villa was found at Magor Farm, Illogan in 1935. Ptolemy's Geographike Hyphegesis mentions four towns controlled by the Dumnonii, three of which may have been in Cornwall. However, after 410 AD, Cornwall appears to have reverted to rule by Romano-Celtic chieftains of the Cornovii tribe as part of the Brittonic kingdom of Dumnonia (which also included present-day Devonshire and the Scilly Isles), including the territory of one Marcus Cunomorus, with at least one significant power base at Tintagel in the early 6th century. "King" Mark of Cornwall is a semi-historical figure known from Welsh literature, from the Matter of Britain, and, in particular, from the later Norman-Breton medieval romance of Tristan and Yseult, where he appears as a close relative of King Arthur, himself usually considered to be born of the Cornish people in folklore traditions derived from Geoffrey of Monmouth's 12th-century Historia Regum Britanniae. Archaeology supports ecclesiastical, literary and legendary evidence for some relative economic stability and close cultural ties between the sub-Roman Westcountry, South Wales, Brittany, the Channel Islands, and Ireland through the fifth and sixth centuries. In Cornwall, the arrival of Celtic saints such as Nectan, Paul Aurelian, Petroc, Piran, Samson and numerous others reinforced the pre-existing Roman Christianity. 
Conflict with Wessex The Battle of Deorham in 577 saw the separation of Dumnonia (and therefore Cornwall) from Wales, following which the Dumnonii often came into conflict with the expanding English kingdom of Wessex. Centwine of Wessex "drove the Britons as far as the sea" in 682, and by 690 St Boniface, then a Saxon boy, was attending an abbey in Exeter, which was in turn ruled by a Saxon abbot. The Carmen Rhythmicum written by Aldhelm contains the earliest literary reference to Cornwall as distinct from Devon. Religious tensions between the Dumnonians (who celebrated Celtic Christian traditions) and Wessex (who were Roman Catholic) are described in Aldhelm's letter to King Geraint. The Annales Cambriae report that in AD 722 the Britons of Cornwall won a battle at "Hehil". It seems likely that the enemy the Cornish fought was a West Saxon force, as evidenced by the naming of King Ine of Wessex and his kinsman Nonna in reference to an earlier Battle of Llongborth in 710. The Anglo-Saxon Chronicle stated in 815 (adjusted date) "and in this year king Ecgbryht raided in Cornwall from east to west." This has been interpreted to mean a raid from the Tamar to Land's End, and the end of Cornish independence. However, the Anglo-Saxon Chronicle states that in 825 (adjusted date) a battle took place between the Wealas (Cornish) and the Defnas (men of Devon) at Gafulforda. The Cornish giving battle here, and the later battle at Hingston Down, cast doubt on any claims of control Wessex had at this stage. In 838, the Cornish and their Danish allies were defeated by Egbert in the Battle of Hingston Down at Hengestesdune. In 875, the last recorded king of Cornwall, Dumgarth, is said to have drowned. Around the 880s, Anglo-Saxons from Wessex had established modest land holdings in the north-eastern part of Cornwall; notably Alfred the Great, who had acquired a few estates. William of Malmesbury, writing around 1120, says that King Athelstan of England (924–939) fixed the boundary between English and Cornish people at the east bank of the River Tamar. While elements of William's story, like the burning of Exeter, have been cast in doubt by recent writers, Athelstan did re-establish a separate Cornish bishop, and relations between Wessex and the Cornish elite improved from the time of his rule. Eventually King Edgar was able to issue charters across the width of Cornwall, and frequently sent emissaries or visited personally, as seen by his appearances in the Bodmin Manumissions. Breton–Norman period One interpretation of the Domesday Book is that by this time the native Cornish landowning class had been almost completely dispossessed and replaced by English landowners, particularly Harold Godwinson himself. However, the Bodmin manumissions show that two leading Cornish figures nominally had Saxon names, but these were both glossed with native Cornish names. In 1068, Brian of Brittany may have been created Earl of Cornwall, and naming evidence cited by medievalist Edith Ditmas suggests that many other post-Conquest landowners in Cornwall were Breton allies of the Normans, the Bretons being descended from Britons who had fled to what is today Brittany during the early years of the Anglo-Saxon conquest. She also proposed this period for the early composition of the Tristan and Iseult cycle by poets such as Béroul from a pre-existing shared Brittonic oral tradition. 
Soon after the Norman conquest, most of the land was transferred to the new Breton–Norman aristocracy, with the lion's share going to Robert, Count of Mortain, half-brother of King William and the largest landholder in England after the king, with his stronghold at Trematon Castle near the mouth of the Tamar. Later medieval administration and society Subsequently, however, Norman absentee landlords were replaced by a new Cornish-Norman ruling class including scholars such as Richard Rufus of Cornwall. These families eventually became the new rulers of Cornwall, typically speaking Norman French, Breton-Cornish, Latin, and eventually English, with many becoming involved in the operation of the Stannary Parliament system, the Earldom and eventually the Duchy of Cornwall. The Cornish language continued to be spoken and acquired a number of characteristics establishing its identity as a separate language from Breton. Stannary parliaments The stannary parliaments and stannary courts were legislative and legal institutions in Cornwall and in Devon (in the Dartmoor area). The stannary courts administered equity for the region's tin-miners and tin mining interests, and they were also courts of record for the towns dependent on the mines. The separate and powerful government institutions available to the tin miners reflected the enormous importance of the tin industry to the English economy during the Middle Ages. Special laws for tin miners pre-date written legal codes in Britain, and ancient traditions exempted everyone connected with tin mining in Cornwall and Devon from any jurisdiction other than the stannary courts in all but the most exceptional circumstances. Piracy and smuggling Cornish piracy was active during the Elizabethan era on the west coast of Britain. Cornwall is well known for its wreckers, who preyed on ships passing Cornwall's rocky coastline. During the 17th and 18th centuries Cornwall was a major smuggling area. In later times, Cornwall was known to the Anglo-Saxons as "West Wales" to distinguish it from "North Wales" (the modern nation of Wales). The name appears in the Anglo-Saxon Chronicle in 891 as On Corn walum. In the Domesday Book it was referred to as Cornualia and in c. 1198 as Cornwal. Other names for the county include a latinisation of the name as Cornubia (first appears in a mid-9th-century deed purporting to be a copy of one dating from c. 705), and as Cornugallia in 1086. Physical geography Cornwall forms the tip of the south-west peninsula of the island of Great Britain, and is therefore exposed to the full force of the prevailing winds that blow in from the Atlantic Ocean. The coastline is composed mainly of resistant rocks that give rise in many places to tall cliffs. Cornwall has a border with only one other county, Devon, which is formed almost entirely by the River Tamar, and the remainder (to the north) by the Marsland Valley. Coastal areas The north and south coasts have different characteristics. The north coast on the Celtic Sea, part of the Atlantic Ocean, is more exposed and therefore has a wilder nature. The prosaically named High Cliff, between Boscastle and St Gennys, is the highest sheer-drop cliff in Cornwall. 
However, there are also many extensive stretches of fine golden sand which form the beaches important to the tourist industry, such as those at Bude, Polzeath, Watergate Bay, Perranporth, Porthtowan, Fistral Beach, Newquay, St Agnes, St Ives, and on the south coast Gyllyngvase beach in Falmouth and the large beach at Praa Sands further to the south-west. There are two river estuaries on the north coast: Hayle Estuary and the estuary of the River Camel, which provides Padstow and Rock with a safe harbour. The seaside town of Newlyn is a popular holiday destination, as it is one of the last remaining traditional Cornish fishing ports, with views reaching over Mount's Bay. The south coast, dubbed the "Cornish Riviera", is more sheltered and there are several broad estuaries offering safe anchorages, such as at Falmouth and Fowey. Beaches on the south coast usually consist of coarser sand and shingle, interspersed with rocky sections of wave-cut platform. Also on the south coast, the picturesque fishing village of Polperro, at the mouth of the Pol River, and the fishing port of Looe on the River Looe are both popular with tourists. Inland areas The interior of the county consists of a roughly east–west spine of infertile and exposed upland, with a series of granite intrusions, such as Bodmin Moor, which contains the highest land within Cornwall. From east to west, and with approximately descending altitude, these are Bodmin Moor, Hensbarrow north of St Austell, Carnmenellis to the south of Camborne, and the Penwith or Land's End peninsula. These intrusions are the central part of the granite outcrops that form the exposed parts of the Cornubian batholith of south-west Britain, which also includes Dartmoor to the east in Devon and the Isles of Scilly to the west, the latter now being partially submerged. The intrusion of the granite into the surrounding sedimentary rocks gave rise to extensive metamorphism and mineralisation, and this led to Cornwall being one of the most important mining areas in Europe until the early 20th century. It is thought tin was mined here as early as the Bronze Age, and copper, lead, zinc and silver have all been mined in Cornwall. Alteration of the granite also gave rise to extensive deposits of china clay, especially in the area to the north of St Austell, and the extraction of this remains an important industry. The uplands are surrounded by more fertile, mainly pastoral farmland. Near the south coast, deep wooded valleys provide sheltered conditions for flora that favour shade and a moist, mild climate. These areas lie mainly on Devonian sandstone and slate. The north east of Cornwall lies on Carboniferous rocks known as the Culm Measures. In places these have been subjected to severe folding, as can be seen on the north coast near Crackington Haven and in several other locations. Lizard Peninsula The geology of the Lizard peninsula is unusual, in that it is mainland Britain's only example of an ophiolite, a section of oceanic crust now found on land. Much of the peninsula consists of the dark green and red Precambrian serpentinite, which forms spectacular cliffs, notably at Kynance Cove, and carved and polished serpentine ornaments are sold in local gift shops. This ultramafic rock also forms a very infertile soil which covers the flat and marshy heaths of the interior of the peninsula. This is home to rare plants, such as the Cornish heath, which has been adopted as the county flower. 
Hills and high points Settlements and transport Cornwall's only city, and the home of the council headquarters, is Truro. Nearby Falmouth is notable as a port. St Just in Penwith is the westernmost town in England, though the same claim has been made for Penzance, which is larger. St Ives and Padstow are today small vessel ports with a major tourism and leisure sector in their economies. Newquay on the north coast is another major urban settlement which is known for its beaches and is a popular surfing destination, as is Bude further north, but Newquay is now also becoming important for its aviation-related industries. Camborne is the county's largest town and more populous than the capital, Truro. Together with the neighbouring town of Redruth, it forms the largest urban area in Cornwall, and both towns were significant as centres of the global tin mining industry in the 19th century; nearby copper mines were also very productive during that period. St Austell is also larger than Truro and was the centre of the china clay industry in Cornwall. Until four new parishes were created for the St Austell area on 1 April 2009, St Austell was the largest settlement in Cornwall. Cornwall borders the county of Devon at the River Tamar. Major roads between Cornwall and the rest of Great Britain are the A38, which crosses the Tamar at Plymouth via the Tamar Bridge and the town of Saltash; the A39 road (Atlantic Highway) from Barnstaple, passing through North Cornwall to end in Falmouth; and the A30, which connects Cornwall to the M5 motorway at Exeter, crosses the border south of Launceston, crosses Bodmin Moor and connects Bodmin, Truro, Redruth, Camborne, Hayle and Penzance. Torpoint Ferry links Plymouth with Torpoint on the opposite side of the Hamoaze. A rail bridge, the Royal Albert Bridge built by Isambard Kingdom Brunel (1859), provides the other main land transport link. The city of Plymouth, a large urban centre in south west Devon, is an important location for services such as hospitals, department stores, road and rail transport, and cultural venues, particularly for people living in east Cornwall. Cardiff and Swansea, across the Bristol Channel, have at some times in the past been connected to Cornwall by ferry, but these do not operate now. The Isles of Scilly are served by ferry (from Penzance) and by aeroplane, and have their own airport: St Mary's Airport. There are regular flights between St Mary's and Land's End Airport, near St Just, and Newquay Airport; during the summer season, a service is also provided between St Mary's and Exeter Airport, in Devon. Ecology Flora and fauna Cornwall has varied habitats including terrestrial and marine ecosystems. One noted species in decline locally is the reindeer lichen, which has been made a priority for protection under the national UK Biodiversity Action Plan. Botanists divide Cornwall and Scilly into two vice-counties: West (1) and East (2). The standard flora is F. H. Davey's Flora of Cornwall (1909). Davey was assisted by A. O. Hume, whom he thanks as his companion on excursions in Cornwall and Devon and for help in compiling the Flora, the publication of which Hume financed. Climate Cornwall has a temperate oceanic climate (Köppen climate classification: Cfb), with mild winters and cool summers. Cornwall has the mildest and one of the sunniest climates of the United Kingdom, as a result of its oceanic setting and the influence of the Gulf Stream. 
The average annual temperature in Cornwall is highest on the Isles of Scilly and lowest in the central uplands. Winters are among the warmest in the country due to the moderating effects of the warm ocean currents, and frost and snow are very rare at the coast and are also rare in the central upland areas. Summers are, however, not as warm as in other parts of southern England. The surrounding sea and its southwesterly position mean that Cornwall's weather can be relatively changeable. Cornwall is one of the sunniest areas in the UK. It has more than 1,541 hours of sunshine per year, with the highest average of 7.6 hours of sunshine per day in July. The moist, mild air coming from the southwest brings higher amounts of rainfall than in eastern Great Britain. However, this is not as much as in more northern areas of the west coast. The Isles of Scilly, for example, where there are on average fewer than two days of air frost per year, form the only area in the UK to be in hardiness zone 10. The islands have, on average, less than one day of air temperature exceeding 30 °C per year and are in the AHS Heat Zone 1. Extreme temperatures in Cornwall are particularly rare; however, extreme weather in the form of storms and floods is common. Due to climate change, Cornwall faces more heatwaves and severe droughts, faster coastal erosion, stronger storms and higher wind speeds, as well as the possibility of more high-impact flooding. Culture Language Cornish language Cornish, a member of the Brythonic branch of the Celtic language family, is a revived language that died out as a first language in the late 18th century. It is closely related to the other Brythonic languages, Breton and Welsh, and less so to the Goidelic languages. Cornish has no legal status in the UK. There has been a revival of the language by academics and optimistic enthusiasts since the mid-19th century that gained momentum from the publication in 1904 of Henry Jenner's Handbook of the Cornish Language. It is a social networking community language rather than a social community group language. Cornwall Council encourages and facilitates language classes within the county, in schools and within the wider community. In 2002, Cornish was named as a UK regional language in the European Charter for Regional or Minority Languages. As a result, in 2005 its promoters received limited government funding. Several words originating in Cornish are used in the mining terminology of English, such as costean, gossan, gunnies, kibbal, kieve and vug. English dialect The Cornish language and culture influenced the emergence of particular pronunciations and grammar not used elsewhere in England. The Cornish dialect is spoken to varying degrees; however, someone speaking in broad Cornish may be practically unintelligible to one not accustomed to it. Cornish dialect has generally declined, as in most places it is now little more than a regional accent and grammatical differences have been eroded over time. Marked differences in vocabulary and usage still exist between the eastern and western parts of Cornwall. Flag Saint Piran's Flag is the national flag and ancient banner of Cornwall, and an emblem of the Cornish people. The banner of Saint Piran is a white cross on a black background (in terms of heraldry 'sable, a cross argent'). According to legend, Saint Piran adopted these colours from seeing the white tin in the black coals and ashes during his discovery of tin. 
The Cornish flag is an exact reverse of the former Breton black cross national flag and is known by the same name, "Kroaz Du". Arts and media Since the 19th century, Cornwall, with its unspoilt maritime scenery and strong light, has sustained a vibrant visual art scene of international renown. Artistic activity within Cornwall was initially centred on the art-colony of Newlyn, most active at the turn of the 20th century. This Newlyn School is associated with the names of Stanhope Forbes, Elizabeth Forbes, Norman Garstin and Lamorna Birch. Modernist writers such as D. H. Lawrence and Virginia Woolf lived in Cornwall between the wars, and Ben Nicholson, the painter, having visited in the 1920s, came to live in St Ives with his then wife, the sculptor Barbara Hepworth, at the outbreak of the Second World War. They were later joined by the Russian emigrant Naum Gabo, and other artists. These included Peter Lanyon, Terry Frost, Patrick Heron, Bryan Wynter and Roger Hilton. St Ives also houses the Leach Pottery, where Bernard Leach and his followers championed Japanese-inspired studio pottery. Much of this modernist work can be seen in Tate St Ives. The Newlyn Society and Penwith Society of Arts continue to be active, and contemporary visual art is documented in a dedicated online journal. Local television programmes are provided by BBC South West and ITV West Country. Radio programmes are produced by BBC Radio Cornwall in Truro for the entire county, Heart West, Source FM for the Falmouth and Penryn areas, Coast FM for west Cornwall, Radio St Austell Bay for the St Austell area, NCB Radio for north Cornwall, and Pirate FM. Music Cornwall has a folk music tradition that has survived into the present and is well known for its unusual folk survivals such as Mummers Plays, the Furry Dance in Helston played by the famous Helston Town Band, and Obby Oss in Padstow. Newlyn is home to a food and music festival that hosts live music, cooking demonstrations, and displays of locally caught fish. As in other former mining districts of Britain, male voice choirs and brass bands, such as Brass on the Grass concerts during the summer at Constantine, are still very popular in Cornwall. Cornwall also has around 40 brass bands, including the six-times National Champions of Great Britain, Camborne Youth Band, and the bands of Lanner and St Dennis. Cornish players are regular participants in inter-Celtic festivals, and Cornwall itself has several inter-Celtic festivals such as Perranporth's Lowender Peran folk festival. Contemporary musician Richard D. James (also known as Aphex Twin) grew up in Cornwall, as did Luke Vibert and Alex Parks, winner of Fame Academy 2003. Roger Taylor, the drummer from the band Queen, was also raised in the county, and currently lives not far from Falmouth. The American singer-songwriter Tori Amos now resides predominantly in North Cornwall not far from Bude with her family. The lutenist, composer and festival director Ben Salfield lives in Truro. Mick Fleetwood of Fleetwood Mac was born in Redruth. Literature Cornwall's rich heritage and dramatic landscape have inspired numerous writers. Fiction Sir Arthur Quiller-Couch, author of many novels and works of literary criticism, lived in Fowey: his novels are mainly set in Cornwall. Daphne du Maurier lived at Menabilly near Fowey and many of her novels had Cornish settings: The Loving Spirit, Jamaica Inn, Rebecca, Frenchman's Creek, The King's General (partially), My Cousin Rachel, The House on the Strand and Rule Britannia. 
She is also noted for writing Vanishing Cornwall. Cornwall provided the inspiration for The Birds, one of her terrifying series of short stories, made famous as a film by Alfred Hitchcock. Conan Doyle's The Adventure of the Devil's Foot, featuring Sherlock Holmes, is set in Cornwall. Winston Graham's series Poldark, Kate Tremayne's Adam Loveday series, Susan Cooper's novels Over Sea, Under Stone and Greenwitch, and Mary Wesley's The Camomile Lawn are all set in Cornwall. Writing under the pseudonym of Alexander Kent, Douglas Reeman sets parts of his Richard Bolitho and Adam Bolitho series in the Cornwall of the late 18th and the early 19th centuries, particularly in Falmouth. Gilbert K. Chesterton placed the action of many of his stories there. Medieval Cornwall is the setting of the trilogy by Monica Furlong, Wise Child, Juniper and Colman, as well as part of Charles Kingsley's Hereward the Wake. Hammond Innes's novel The Killer Mine; Charles de Lint's novel The Little Country; and Chapters 24–25 of J. K. Rowling's Harry Potter and the Deathly Hallows take place in Cornwall (Shell Cottage, on the beach outside the fictional village of Tinworth). David Cornwell, who wrote espionage novels under the name John le Carré, lived and worked in Cornwall. Nobel Prize-winning novelist William Golding was born in St Columb Minor in 1911, and returned to live near Truro from 1985 until his death in 1993. D. H. Lawrence spent a short time living in Cornwall. Rosamunde Pilcher grew up in Cornwall, and several of her books take place there. St. Michael's Mount in Cornwall (under the fictional name of Mount Polbearne) is the setting of the Little Beach Street Bakery series by Jenny Colgan, who spent holidays in Cornwall as a child. The book series includes Little Beach Street Bakery (2014), Summer at Little Beach Street Bakery (2015), Christmas at Little Beach Street Bakery (2016), and Sunrise by the Sea (2021). In the Paddington Bear novels by Michael Bond, the title character is said to have landed at an unspecified port in Cornwall having travelled in a lifeboat aboard a cargo ship from darkest Peru. From here he travels to London on a train and eventually arrives at Paddington Station. Enid Blyton's 1953 novel Five Go Down to the Sea (the twelfth book in The Famous Five series) is set in Cornwall, near the fictional coastal village of Tremannon. Poetry The late Poet Laureate Sir John Betjeman was famously fond of Cornwall and it featured prominently in his poetry. He is buried in the churchyard at St Enodoc's Church, Trebetherick. Charles Causley, the poet, was born in Launceston and is perhaps the best known of Cornish poets. Jack Clemo and the scholar A. L. Rowse were also notable Cornishmen known for their poetry; the Rev. R. S. Hawker of Morwenstow wrote some poetry which was very popular in the Victorian period. The Scottish poet W. S. Graham lived in West Cornwall from 1944 until his death in 1986. The poet Laurence Binyon wrote "For the Fallen" (first published in 1914) while sitting on the cliffs between Pentire Point and The Rumps, and a stone plaque was erected in 2001 to commemorate the fact. The plaque bears the inscription "FOR THE FALLEN / Composed on these cliffs, 1914". 
The plaque also bears below this the fourth stanza (sometimes referred to as "The Ode") of the poem: "They shall grow not old, as we that are left grow old: / Age shall not weary them, nor the years condemn. / At the going down of the sun and in the morning / We will remember them." Other literary works Cornwall produced a substantial number of passion plays such as the Ordinalia during the Middle Ages. Many are still extant, and provide valuable information about the Cornish language. Colin Wilson, a prolific writer who is best known for his debut work The Outsider (1956) and for The Mind Parasites (1967), lived in Gorran Haven, a small village on the southern Cornish coast. The writer D. M. Thomas was born in Redruth but lived and worked in Australia and the United States before returning to his native Cornwall. He has written novels, poetry, and other works, including translations from Russian. Thomas Hardy's drama The Queen of Cornwall (1923) is a version of the Tristan story; the second act of Richard Wagner's opera Tristan und Isolde takes place in Cornwall, as do Gilbert and Sullivan's operettas The Pirates of Penzance and Ruddigore. Clara Vyvyan was the author of various books about many aspects of Cornish life such as Our Cornwall. She once wrote: "The Loneliness of Cornwall is a loneliness unchanged by the presence of men, its freedoms a freedom inexpressible by description or epitaph. You cannot say Cornwall is this or that. You cannot describe it in a word or visualise it in a second. You may know the country from east to west and sea to sea, but if you close your eyes and think about it no clear-cut image rises before you. In this quality of changefulness have we possibly surprised the secret of Cornwall's wild spirit—in this intimacy the essence of its charm? Cornwall!". A level of Tomb Raider: Legend, a game dealing with Arthurian Legend, takes place in Cornwall at a museum above King Arthur's tomb. The adventure game The Lost Crown is set in the fictional town of Saxton, which uses the Cornish settlements of Polperro, Talland and Looe as its model. The fairy tale Jack the Giant Killer takes place in Cornwall. The Mousehole Cat, a children's book written by Antonia Barber and illustrated by Nicola Bayley, is set in the Cornish village Mousehole and based on the legend of Tom Bawcock and the continuing tradition of Tom Bawcock's Eve. Sports The main sports played in Cornwall are rugby, football and cricket. Athletes from Truro have done well in Olympic and Commonwealth Games fencing, winning several medals. Surfing is popular, particularly with tourists, thousands of whom take to the water throughout the summer months. Some towns and villages have bowling clubs, and a wide variety of British sports are played throughout Cornwall. Cornwall is also one of the few places in England where shinty is played; the English Shinty Association is based in Penryn. The Cornwall County Cricket Club plays as one of the minor counties of English cricket. Truro, all of the towns, and some villages have football clubs belonging to the Cornwall County Football Association, and some clubs have teams competing higher within the English football league pyramid. Of these, the highest ranked — by two flights — is Truro City F.C., who will be playing in the National League South in the 2023–24 season. Other notable Cornish teams include Mousehole A.F.C., Helston Athletic F.C., and Falmouth Town F.C. 
Rugby football Viewed as an "important identifier of ethnic affiliation", rugby union has become a sport strongly tied to notions of Cornishness, and since the 20th century it has emerged as one of the most popular spectator and team sports in Cornwall (perhaps the most popular), with professional Cornish rugby footballers being described as a "formidable force", "naturally independent, both in thought and deed, yet paradoxically staunch English patriots whose top players have represented England with pride and passion". In 1985, sports journalist Alan Gibson made a direct connection between the love of rugby in Cornwall and the ancient parish games of hurling and wrestling that existed for centuries before rugby officially began. Among Cornwall's native sports are a distinctive form of Celtic wrestling related to Breton wrestling, and Cornish hurling, a kind of mediaeval football played with a silver ball (distinct from Irish hurling). Cornish wrestling is Cornwall's oldest sport and, as Cornwall's native tradition, it has travelled the world to places like Victoria, Australia, and Grass Valley, California, following the miners and gold rushes. Cornish hurling now takes place at St. Columb Major, St Ives, and less frequently at Bodmin. In rugby league, Cornwall R.L.F.C., founded in 2021, will represent the county in the professional league system. The semi-professional club will start in the third tier, RFL League 1. At an amateur level, the county is represented by Cornish Rebels. Surfing and watersports Due to its long coastline, various maritime sports are popular in Cornwall, notably sailing and surfing. International events in both are held in Cornwall. Cornwall hosted the Inter-Celtic Watersports Festival in 2006. Surfing in particular is very popular, as locations such as Bude and Newquay offer some of the best surf in the UK. Pilot gig rowing has been popular for many years and the World Championships take place annually on the Isles of Scilly. On 2 September 2007, 300 surfers at Polzeath beach set a new world record for the highest number of surfers riding the same wave as part of the Global Surf Challenge and part of a project called Earthwave to raise awareness about global warming. Fencing As its population is comparatively small, and largely rural, Cornwall's contribution to national sport in the United Kingdom has been limited; the county's greatest successes have come in fencing. In 2014, half of the men's GB team fenced for Truro Fencing Club, and three Truro fencers appeared at the 2012 Olympics. Cuisine Cornwall has a strong culinary heritage. Surrounded on three sides by the sea amid fertile fishing grounds, Cornwall naturally has fresh seafood readily available; Newlyn is the largest fishing port in the UK by value of fish landed, and is known for its wide range of restaurants. Television chef Rick Stein has long operated a fish restaurant in Padstow for this reason, and Jamie Oliver chose to open his second restaurant, Fifteen, in Watergate Bay near Newquay. John Torode, MasterChef host and founder of Smiths of Smithfield, purchased Seiners in Perranporth in 2007. One famous local fish dish is Stargazy pie, a fish-based pie in which the heads of the fish stick through the piecrust, as though "star-gazing". The pie is cooked as part of traditional celebrations for Tom Bawcock's Eve, but is not generally eaten at any other time. Cornwall is perhaps best known though for its pasties, a savoury dish made with pastry. 
Today's pasties usually contain a filling of beef steak, onion, potato and swede with salt and white pepper, but historically pasties had a variety of different fillings. "Turmut, 'tates and mate" (i.e. "Turnip, potatoes and meat", turnip being the Cornish and Scottish term for swede, itself an abbreviation of 'Swedish turnip', the British term for rutabaga) describes a filling once very common. For instance, the licky pasty contained mostly leeks, and the herb pasty contained watercress, parsley, and shallots. Pasties are often locally referred to as oggies. Historically, pasties were also often made with sweet fillings such as jam, apple and blackberry, plums or cherries. The wet climate and relatively poor soil of Cornwall make it unsuitable for growing many arable crops. However, it is ideal for growing the rich grass required for dairying, leading to the production of Cornwall's other famous export, clotted cream. This forms the basis for many local specialities including Cornish fudge and Cornish ice cream. Cornish clotted cream has Protected Geographical Status under EU law, and cannot be sold under that name if made anywhere else. Its principal manufacturer is A. E. Rodda & Son of Scorrier. Local cakes and desserts include saffron cake, Cornish heavy (hevva) cake, Cornish fairings biscuits, figgy 'obbin, cream tea and whortleberry pie. There are also many types of beer brewed in Cornwall—those produced by Sharp's Brewery, Skinner's Brewery, Keltek Brewery and St Austell Brewery are the best known—including stouts and ales. There is some small-scale production of wine, mead and cider. Politics and administration Cornish national identity Cornwall is recognised by Cornish and Celtic political groups as one of six Celtic nations, alongside Brittany, Ireland, the Isle of Man, Scotland and Wales. (The Isle of Man Government and the Welsh Government also recognise Asturias and Galicia.) Cornwall is represented, as one of the Celtic nations, at the Festival Interceltique de Lorient, an annual celebration of Celtic culture held in Brittany. Cornwall Council consider Cornwall's unique cultural heritage and distinctiveness to be one of the area's major assets. They see Cornwall's language, landscape, Celtic identity, political history, patterns of settlement, maritime tradition, industrial heritage, and non-conformist tradition to be among the features making up its "distinctive" culture. However, it is uncertain exactly how many of the people living in Cornwall consider themselves to be Cornish; results from different surveys (including the national census) have varied. In the 2001 census, 7 per cent of people in Cornwall identified themselves as Cornish, rather than British or English. However, activists have argued that this underestimated the true number, as there was no explicit "Cornish" option included in the official census form. Subsequent surveys have suggested that as many as 44 per cent identify as Cornish. Many people in Cornwall say that this issue would be resolved if a Cornish option became available on the census. The question and content recommendations for the 2011 census explained the process of selecting an ethnic identity, which is relevant to understanding the often-quoted figure of 37,000 people who claimed Cornish identity. The 2021 census found that 17 per cent of people in Cornwall identified as Cornish (89,000), with 14 per cent identifying as Cornish only (80,000). 
Again there was no tick-box provided, and "Cornish" had to be written in under "Other". On 24 April 2014 it was announced that Cornish people have been granted minority status under the European Framework Convention for the Protection of National Minorities. Local politics Cornwall forms two local government districts: Cornwall and the Isles of Scilly. The district of Cornwall is governed by Cornwall Council, a unitary authority based at Lys Kernow in Truro, and the Council of the Isles of Scilly governs the archipelago from Hugh Town. The Crown Court is based at the Courts of Justice in Truro. Magistrates' Courts are found in Truro (but at a different location to the Crown Court) and at Bodmin. The Isles of Scilly form part of the ceremonial county of Cornwall, and have, at times, been served by the same county administration. Since 1890 they have been administered by their own unitary authority, the Council of the Isles of Scilly. They are grouped with Cornwall for other administrative purposes, such as the National Health Service and Devon and Cornwall Police. Before reorganisation on 1 April 2009, council functions throughout the rest of Cornwall were organised in two tiers, with Cornwall County Council and district councils for its six districts, Caradon, Carrick, Kerrier, North Cornwall, Penwith, and Restormel. While projected to streamline services, cut red tape and save around £17 million a year, the reorganisation was met with wide opposition, with a poll in 2008 showing 89% disapproval from Cornish residents. The first elections for the unitary authority were held on 4 June 2009. The council has 123 seats; the largest party (in 2017) is the Conservatives, with 46 seats. The Liberal Democrats are the second-largest party, with 37 seats, and the Independents are the third-largest grouping, with 30. Before the creation of the unitary council, the former county council had 82 seats, the majority of which were held by the Liberal Democrats, elected at the 2005 county council elections. The six former districts had a total of 249 council seats, and the groups with the greatest numbers of councillors were the Liberal Democrats, Conservatives and Independents. Parliament and national politics Following a review by the Boundary Commission for England taking effect at the 2010 general election, Cornwall is divided into six county constituencies to elect MPs to the House of Commons of the United Kingdom. Before the 2010 boundary changes Cornwall had five constituencies, all of which were won by Liberal Democrats at the 2005 general election. In the 2010 general election Liberal Democrat candidates won three constituencies and Conservative candidates won the other three. At the 2015 general election all six Cornish seats were won by Conservative candidates; all these Conservative MPs retained their seats at the 2017 general election, and the Conservatives won all six constituencies again at the 2019 general election. Until 1832, Cornwall had 44 MPs—more than any other county—reflecting the importance of tin to the Crown. Most of the increase in the number of MPs came between 1529 and 1584, after which there was no change until 1832. Although Cornwall does not have a designated government department, in 2007, while Leader of the Opposition, David Cameron created a Shadow Secretary of State for Cornwall. 
The position was not made a formal UK Cabinet post when Cameron entered government following the 2010 United Kingdom general election. Devolution movement Cornish nationalists have organised into two political parties: Mebyon Kernow, formed in 1951, and the Cornish Nationalist Party. In addition to the political parties, there are various interest groups such as the Revived Cornish Stannary Parliament and the Celtic League. The Cornish Constitutional Convention was formed in 2000 as a cross-party organisation including representatives from the private, public and voluntary sectors to campaign for the creation of a Cornish Assembly, along the lines of the National Assembly for Wales, Northern Ireland Assembly and the Scottish Parliament. Between 5 March 2000 and December 2001, the campaign collected the signatures of 41,650 Cornish residents endorsing the call for a devolved assembly, along with 8,896 signatories from outside Cornwall. The resulting petition was presented to the Prime Minister, Tony Blair. Emergency services The emergency services covering Cornwall are Devon and Cornwall Police, Cornwall Fire and Rescue Service, South Western Ambulance Service, Cornwall Air Ambulance, HM Coastguard, Cornwall Search & Rescue Team, and British Transport Police. Economy Cornwall is one of the poorest parts of the United Kingdom in terms of per capita GDP and average household incomes. At the same time, parts of the county, especially on the coast, have high house prices, driven up by demand from relatively wealthy retired people and second-home owners. The GVA per head was 65% of the UK average for 2004. The GDP per head for Cornwall and the Isles of Scilly was 79.2% of the EU-27 average for 2004; the UK per-head average was 123.0%. In 2011, the latest year for which figures were available, the wealth of Cornwall (including the Isles of Scilly) was measured at 64% of the European average per capita. Historically mining of tin (and later also of copper) was important in the Cornish economy. The first reference to this appears to be by Pytheas: see above. Julius Caesar was the last classical writer to mention the tin trade, which appears to have declined during the Roman occupation. The tin trade revived in the Middle Ages and its importance to the Kings of England resulted in certain privileges being granted to the tinners; the Cornish rebellion of 1497 is attributed to grievances of the tin miners. In the mid-19th century, however, the tin trade again fell into decline. Other primary sector industries that have declined since the 1960s include china clay production, fishing and farming. Today, the Cornish economy depends heavily on its tourist industry, which makes up around a quarter of the economy. The official measures of deprivation and poverty at district and 'sub-ward' level show that there is great variation in poverty and prosperity in Cornwall, with some areas among the poorest in England and others among the top half in prosperity. For example, the ranking of 32,482 sub-wards in England in the index of multiple deprivation (2006) ranged from 819th (part of Penzance East) to 30,899th (part of Saltash Burraton in Caradon), where the lower number represents the greater deprivation. Cornwall was one of two UK areas designated as 'less developed regions' by the European Union, which, prior to Brexit, meant the area qualified for EU Cohesion Policy grants. It was granted Objective 1 status by the European Commission for 2000 to 2006, followed by further rounds of funding known as 'Convergence Funding' from 2007 to 2013 and 'Growth Programme' for 2014 to 2020. 
Tourism Cornwall has a tourism-based seasonal economy which is estimated to contribute up to 24% of Cornwall's gross domestic product. In 2011 tourism brought £1.85 billion into the Cornish economy. Cornwall's unique culture, spectacular landscape and mild climate make it a popular tourist destination, despite being somewhat distant from the United Kingdom's main centres of population. Surrounded on three sides by the English Channel and Celtic Sea, Cornwall has many miles of beaches and cliffs; the South West Coast Path follows a complete circuit of both coasts. Other tourist attractions include moorland, country gardens, museums, historic and prehistoric sites, and wooded valleys. Five million tourists visit Cornwall each year, mostly drawn from within the UK. Visitors to Cornwall are served by the airport at Newquay, whilst private jets, charters and helicopters are also served by Perranporth airfield; night sleeper and daily rail services run between Cornwall, London and other regions of the UK. Newquay and Porthtowan are popular destinations for surfers. In recent years, the Eden Project near St Austell has been a major financial success, drawing one in eight of Cornwall's visitors in 2004. In the summer of 2018, due to the recognition of its beaches and weather through social media and the marketing of travel companies, Cornwall received about 20 per cent more visitors than the usual 4.5 million figure. The sudden rise in demand for tourism in Cornwall caused multiple traffic and safety issues in coastal areas. In October 2021, Cornwall was longlisted for the UK City of Culture 2025, but failed to make the March 2022 shortlist. Fishing Other industries include fishing, although this has been significantly restructured by EU fishing policies (the Southwest Handline Fishermen's Association has started to revive the fishing industry). Agriculture Agriculture, once an important part of the Cornish economy, has declined significantly relative to other industries. However, there is still a strong dairy industry, with products such as Cornish clotted cream. Mining Mining of tin and copper was also an industry, but today the derelict mine workings survive only as a World Heritage Site. However, the Camborne School of Mines, which was relocated to Penryn in 2004, is still a world centre of excellence in the field of mining and applied geology, and the grant of World Heritage status has attracted funding for conservation and heritage tourism. China clay extraction has also been an important industry in the St Austell area, but this sector has been in decline, and this, coupled with increased mechanisation, has led to a decrease in employment in this sector, although the industry still employs around 2,133 people in Cornwall and contributes over £80 million to the local economy. In March 2016, a Canadian company, Strongbow Exploration, acquired from administration a 100% interest in the South Crofty tin mine and the associated mineral rights in Cornwall, with the aim of reopening the mine and bringing it back to full production. Work is currently ongoing to build a water filtration plant in order to dewater the mine. Internet Cornwall is the landing point for twenty-two of the world's fastest high-speed undersea and transatlantic fibre optic cables, making Cornwall an important hub within Europe's Internet infrastructure. 
The Superfast Cornwall project was completed in 2015 and saw 95% of Cornish houses and businesses connected to a fibre-based broadband network, with over 90% of properties able to connect at speeds above 24 Mbit/s. Aerospace The county's newest industry is aviation: Newquay Airport is the home of a growing business park with Enterprise Zone status, known as Aerohub. A space launch facility, Spaceport Cornwall, has also been established at Newquay, in partnership with Goonhilly satellite tracking station near Helston in south Cornwall. Demographics Cornwall's population was 537,400 in the 2011 census, with a population density of 144 people per square kilometre, ranking it 40th and 41st, respectively, among the 47 counties of England. Cornwall's population was 95.7% White British and has a relatively high rate of population growth. At 11.2% in the 1980s and 5.3% in the 1990s, it had the fifth-highest population growth rate of the counties of England. The natural change has been a small population decline, and the population increase is due to inward migration into Cornwall. According to the 1991 census, the population was 469,800. Cornwall has a relatively high retired population, with 22.9% of pensionable age, compared with 20.3% for the United Kingdom as a whole. This may be due partly to Cornwall's rural and coastal geography increasing its popularity as a retirement location, and partly to outward migration of younger residents to more economically diverse areas. Education Over 10,000 students attend Cornwall's two universities, Falmouth University and the University of Exeter (including Camborne School of Mines). Falmouth University is a specialist public university for the creative industries and arts, while the University of Exeter has two campuses in Cornwall, Truro and Penryn, the latter shared with Falmouth. The Penryn campus is home to educational departments such as the rapidly growing Centre for Ecology and Conservation (CEC), the Environment and Sustainability Institute (ESI), and the Institute of Cornish Studies. Cornwall has a comprehensive education system, with 31 state and eight independent secondary schools. There are three further education colleges: Truro and Penwith College, Cornwall College and Callywith College, which opened in September 2017. The Isles of Scilly have only one school, while the former Restormel district has the highest school population, and school year sizes are around 200, with none above 270. Before the introduction of comprehensive schools there were a number of grammar schools and secondary modern schools, e.g. the schools that later became Sir James Smith's School and Wadebridge School. There are also primary schools in many villages and towns: e.g. St Mabyn Church of England Primary School. See also Christianity in Cornwall Index of Cornwall-related articles Outline of Cornwall – overview of the wide range of topics covered by this subject Tamar Valley AONB Duchy of Cornwall 
External links Cornwall Council The History of Parliament: the House of Commons – Cornwall, County, 1386 to 1831 Images of daily life in late 19th century Cornwall Images of Cornwall at the English Heritage Archive Celtic nations English unitary authorities created in 2009 Local government districts of South West England NUTS 2 statistical regions of the United Kingdom Peninsulas of England Unitary authority districts of England Counties in South West England Counties of England established in antiquity Former kingdoms
5649
https://en.wikipedia.org/wiki/Constitutional%20monarchy
Constitutional monarchy
Constitutional monarchy, also known as limited monarchy, parliamentary monarchy or democratic monarchy, is a form of monarchy in which the monarch exercises their authority in accordance with a constitution and is not alone in making decisions. Constitutional monarchies differ from absolute monarchies (in which a monarch is the only decision-maker) in that they are bound to exercise powers and authorities within limits prescribed by an established legal framework. Constitutional monarchies range from countries such as Liechtenstein, Monaco, Morocco, Jordan, Kuwait, Bahrain and Bhutan, where the constitution grants substantial discretionary powers to the sovereign, to countries such as the United Kingdom and other Commonwealth realms, the Netherlands, Spain, Belgium, Norway, Sweden, Lesotho, Malaysia, Thailand, Cambodia, and Japan, where the monarch retains significantly less, if any, personal discretion in the exercise of their authority. On the surface, this distinction may be hard to establish, with numerous liberal democracies restraining monarchic power in practice rather than in written law, e.g., the constitution of the United Kingdom, which affords the monarch substantial, if limited, legislative and executive powers. Constitutional monarchy may refer to a system in which the monarch acts as a non-party political head of state under the constitution, whether codified or uncodified. While most monarchs may hold formal authority and the government may legally operate in the monarch's name, in the form typical in Europe the monarch no longer personally sets public policy or chooses political leaders. Political scientist Vernon Bogdanor, paraphrasing Thomas Macaulay, has defined a constitutional monarch as "A sovereign who reigns but does not rule". In addition to acting as a visible symbol of national unity, a constitutional monarch may hold formal powers such as dissolving parliament or giving royal assent to legislation. However, such powers generally may only be exercised strictly in accordance with either written constitutional principles or unwritten constitutional conventions, rather than any personal political preferences of the sovereign. In The English Constitution, British political theorist Walter Bagehot identified three main political rights which a constitutional monarch may freely exercise: the right to be consulted, the right to encourage, and the right to warn. Many constitutional monarchies still retain significant authorities or political influence, however, such as through certain reserve powers, and may also play an important political role. The Commonwealth realms share the same person as hereditary monarch under the Westminster system of constitutional governance. Two constitutional monarchies – Malaysia and Cambodia – are elective monarchies, in which the ruler is periodically selected by a small electoral college. The concept of a semi-constitutional monarchy identifies constitutional monarchies where the monarch retains substantial powers, on a par with a president in a presidential or semi-presidential system. As a result, constitutional monarchies where the monarch has a largely ceremonial role may also be referred to as 'parliamentary monarchies' to differentiate them from semi-constitutional monarchies. Strongly limited constitutional monarchies, such as those of the United Kingdom and Australia, have been referred to as crowned republics by writers H. G. Wells and Glenn Patmore. 
History The oldest constitutional monarchy, dating back to ancient times, was that of the Hittites. They were an ancient Anatolian people that lived during the Bronze Age whose king had to share his authority with an assembly, called the Panku, which was the equivalent of a modern-day deliberative assembly or legislature. Members of the Panku came from scattered noble families who worked as representatives of their subjects in an adjutant or subaltern federal-type landscape. Constitutional and absolute monarchy England, Scotland and the United Kingdom In the Kingdom of England, the Glorious Revolution of 1688 furthered the constitutional monarchy, restricted by laws such as the Bill of Rights 1689 and the Act of Settlement 1701, although the first form of constitution was enacted with the Magna Carta of 1215. At the same time, in Scotland, the Convention of Estates enacted the Claim of Right Act 1689, which placed similar limits on the Scottish monarchy. Queen Anne was the last monarch to veto an Act of Parliament when, on 11 March 1708, she blocked the Scottish Militia Bill. However, Hanoverian monarchs continued to selectively dictate government policies. For instance, King George III constantly blocked Catholic Emancipation, eventually precipitating the resignation of William Pitt the Younger as prime minister in 1801. The sovereign's influence on the choice of prime minister gradually declined over this period. King William IV was the last monarch to dismiss a prime minister, when in 1834 he removed Lord Melbourne as a result of Melbourne's choice of Lord John Russell as Leader of the House of Commons. Queen Victoria was the last monarch to exercise real personal power, but this diminished over the course of her reign. In 1839, she became the last sovereign to keep a prime minister in power against the will of Parliament when the Bedchamber crisis resulted in the retention of Lord Melbourne's administration. By the end of her reign, however, she could do nothing to block the unacceptable (to her) premierships of William Gladstone, although she still exercised power in appointments to the Cabinet. For example, in 1886 she vetoed Gladstone's choice of Hugh Childers as War Secretary in favour of Sir Henry Campbell-Bannerman. Today, the role of the British monarch is by convention effectively ceremonial. The British Parliament and the Government – chiefly in the office of Prime Minister of the United Kingdom – exercise their powers under "Royal (or Crown) Prerogative": on behalf of the monarch and through powers still formally possessed by the monarch. No person may accept significant public office without swearing an oath of allegiance to the King. With few exceptions, the monarch is bound by constitutional convention to act on the advice of the Government. Continental Europe Poland developed the first constitution for a monarchy in continental Europe, with the Constitution of 3 May 1791; it was the second single-document constitution in the world, after the first republican Constitution of the United States. Constitutional monarchy also occurred briefly in the early years of the French Revolution, but much more widely afterwards. Napoleon Bonaparte is considered the first monarch proclaiming himself as an embodiment of the nation, rather than as a divinely appointed ruler; this interpretation of monarchy is germane to continental constitutional monarchies. 
German philosopher Georg Wilhelm Friedrich Hegel, in his work Elements of the Philosophy of Right (1820), gave the concept a philosophical justification that concurred with evolving contemporary political theory and the Protestant Christian view of natural law. Hegel's forecast of a constitutional monarch with very limited powers whose function is to embody the national character and provide constitutional continuity in times of emergency was reflected in the development of constitutional monarchies in Europe and Japan. Executive monarchy versus ceremonial monarchy There exist at least two different types of constitutional monarchies in the modern world – executive and ceremonial. In executive monarchies, the monarch wields significant (though not absolute) power. The monarchy under this system of government is a powerful political (and social) institution. By contrast, in ceremonial monarchies, the monarch holds little or no actual power or direct political influence, though they frequently have a great deal of social and cultural influence. Ceremonial and executive monarchy should not be confused with democratic and non-democratic monarchical systems. For example, in Liechtenstein and Monaco, the ruling monarchs wield significant executive power. However, while they are theoretically very powerful within their small states, they are not absolute monarchs and have very limited de facto power compared to the Islamic monarchs, which is why their countries are generally considered to be liberal democracies. For instance, when Hereditary Prince Alois of Liechtenstein threatened to veto a referendum to legalize abortion in 2011, it came as a surprise because the prince had not vetoed any law for over 30 years (in the end, this referendum failed to make it to a vote). Modern constitutional monarchy As originally conceived, a constitutional monarch was head of the executive branch and quite a powerful figure even though their power was limited by the constitution and the elected parliament. Some of the framers of the U.S. Constitution may have envisioned the president as an elected constitutional monarch, as the term was then understood, following Montesquieu's account of the separation of powers. The present-day concept of a constitutional monarchy developed in the United Kingdom, where the democratically elected parliaments, and their leader, the prime minister, exercise power, with the monarchs having ceded power and remaining as a titular position. In many cases the monarchs, while still at the very top of the political and social hierarchy, were given the status of "servants of the people" to reflect the new, egalitarian position. In the course of France's July Monarchy, Louis-Philippe I was styled "King of the French" rather than "King of France". Following the unification of Germany, Otto von Bismarck rejected the British model. In the constitutional monarchy established under the Constitution of the German Empire which Bismarck inspired, the Kaiser retained considerable actual executive power, while the Imperial Chancellor needed no parliamentary vote of confidence and ruled solely by the imperial mandate. However, this model of constitutional monarchy was discredited and abolished following Germany's defeat in the First World War. Later, Fascist Italy could also be considered a constitutional monarchy, in that there was a king as the titular head of state while actual power was held by Benito Mussolini under a constitution. 
This eventually discredited the Italian monarchy and led to its abolition in 1946. After the Second World War, surviving European monarchies almost invariably adopted some variant of the constitutional monarchy model originally developed in Britain. Nowadays, a parliamentary democracy that is a constitutional monarchy is considered to differ from one that is a republic only in detail rather than in substance. In both cases, the titular head of state (monarch or president) serves the traditional role of embodying and representing the nation, while the government is carried on by a cabinet composed predominantly of elected Members of Parliament. However, three important factors distinguish monarchies such as the United Kingdom from systems where greater power might otherwise rest with Parliament. These are: the Royal Prerogative, under which the monarch may exercise power under certain very limited circumstances; sovereign immunity, under which the monarch may do no wrong under the law because the responsible government is instead deemed accountable; and the immunity of the monarch from some taxation or restrictions on property use. Other privileges may be nominal or ceremonial (e.g., where the executive, judiciary, police or armed forces act on the authority of or owe allegiance to the Crown). Today, slightly more than a quarter of constitutional monarchies are Western European countries, including the United Kingdom, Spain, the Netherlands, Belgium, Norway, Denmark, Luxembourg, Monaco, Liechtenstein and Sweden. However, the two most populous constitutional monarchies in the world are in Asia: Japan and Thailand. In these countries, the prime minister holds the day-to-day powers of governance, while the monarch retains residual (but not always insignificant) powers. The powers of the monarch differ between countries. In Denmark and in Belgium, for example, the monarch formally appoints a representative to preside over the creation of a coalition government following a parliamentary election, while in Norway the King chairs special meetings of the cabinet. In nearly all cases, the monarch is still the nominal chief executive, but is bound by convention to act on the advice of the Cabinet. Only a few monarchies (most notably Japan and Sweden) have amended their constitutions so that the monarch is no longer even the nominal chief executive. There are fifteen constitutional monarchies under King Charles III, which are known as Commonwealth realms. Unlike some of their continental European counterparts, the Monarch and his Governors-General in the Commonwealth realms hold significant "reserve" or "prerogative" powers, to be wielded in times of extreme emergency or constitutional crises, usually to uphold parliamentary government. For example, during the 1975 Australian constitutional crisis, the Governor-General dismissed the Australian Prime Minister Gough Whitlam. The Australian Senate had threatened to block the Government's budget by refusing to pass the necessary appropriation bills. On 11 November 1975, Whitlam intended to call a half-Senate election to try to break the deadlock. When he sought the Governor-General's approval of the election, the Governor-General instead dismissed him as Prime Minister. Shortly afterwards, the Governor-General installed the leader of the opposition, Malcolm Fraser, in his place.
Acting quickly before all parliamentarians became aware of the government change, Fraser and his allies secured passage of the appropriation bills, and the Governor-General dissolved Parliament for a double dissolution election. Fraser and his government were returned with a massive majority. This led to much speculation among Whitlam's supporters as to whether this use of the Governor-General's reserve powers was appropriate, and whether Australia should become a republic. Among supporters of constitutional monarchy, however, the event confirmed the monarchy's value as a source of checks and balances against elected politicians who might seek powers in excess of those conferred by the constitution, and ultimately as a safeguard against dictatorship. In Thailand's constitutional monarchy, the monarch is recognized as the Head of State, Head of the Armed Forces, Upholder of the Buddhist Religion, and Defender of the Faith. The previous king, Bhumibol Adulyadej, was the longest-reigning monarch in the world and in all of Thailand's history before his death on 13 October 2016. Bhumibol reigned through several political changes in the Thai government. He played an influential role in each incident, often acting as mediator between disputing political opponents. (See Bhumibol's role in Thai politics.) Among the protections the Thai monarch retains under the constitution, lèse-majesté protects the image of the monarch and enables him to play a role in politics. It carries strict criminal penalties for violators. Generally, the Thai people were reverent of Bhumibol. Much of his social influence arose from this reverence and from the socioeconomic improvement efforts undertaken by the royal family. In the United Kingdom, a frequent debate centres on when it is appropriate for a British monarch to act. When a monarch does act, political controversy can often ensue, partially because the neutrality of the crown is seen to be compromised in favour of a partisan goal, while some political scientists champion the idea of an "interventionist monarch" as a check against possible illegal action by politicians. For instance, the monarch of the United Kingdom can theoretically exercise an absolute veto over legislation by withholding royal assent. However, no monarch has done so since 1708, and it is widely believed that this and many of the monarch's other political powers are lapsed powers. List of current constitutional monarchies There are currently 43 monarchies worldwide. Ceremonial constitutional monarchies Executive constitutional monarchies Former constitutional monarchies The Kingdom of Afghanistan was a constitutional monarchy under Mohammad Zahir Shah from 1964 to 1973. The Kingdom of Albania was a constitutional monarchy ruled by King Zog I of the House of Zogu from 1928 until 1939. The Anglo-Corsican Kingdom was a brief period in the history of Corsica (1794–1796) when the island broke with Revolutionary France and sought military protection from Great Britain. Corsica became an independent kingdom under George III of the United Kingdom, but with its own elected parliament and a written constitution guaranteeing local autonomy and democratic rights. Barbados, from gaining its independence in 1966 until 2021, was a constitutional monarchy in the Commonwealth of Nations with a Governor-General representing the Monarchy of Barbados. After an extensive history of republican movements, a republic was declared on 30 November 2021.
Brazil, from 1822, with the proclamation of independence and rise of the Empire of Brazil by Pedro I of Brazil, to 1889, when Pedro II was deposed by a military coup. Kingdom of Bulgaria until 1946, when Tsar Simeon was deposed by the communist assembly. Many republics in the Commonwealth of Nations were constitutional monarchies for some period after their independence, including South Africa (1910–1961), Ceylon from 1948 to 1972 (now Sri Lanka), Fiji (1970–1987), Gambia (1965–1970), Ghana (1957–1960), Guyana (1966–1970), Trinidad and Tobago (1962–1976), and Barbados (1966–2021). Egypt was a constitutional monarchy starting from the later part of the Khedivate, with parliamentary structures and a responsible khedival ministry developing in the 1860s and 1870s. The constitutional system continued through the Khedivate period and developed during the Sultanate and then Kingdom of Egypt, which established an essentially democratic liberal constitutional regime under the Egyptian Constitution of 1923. This system persisted until the declaration of a republic after the Free Officers Movement coup in 1952. For most of this period, however, Egypt was occupied by the United Kingdom, and overall political control was in the hands of British colonial officials nominally accredited as diplomats to the Egyptian royal court but actually able to overrule any decision of the monarch or elected government. The Grand Principality of Finland was a constitutional monarchy, though its ruler, Alexander I, was simultaneously an autocrat and absolute ruler in Russia. France, several times from 1789 through the 19th century. The transformation of the Estates General of 1789 into the National Assembly initiated an ad-hoc transition from the absolute monarchy of the Ancien Régime to a new constitutional system. France formally became an executive constitutional monarchy with the promulgation of the French Constitution of 1791, which took effect on 1 October of that year. This first French constitutional monarchy was short-lived, ending with the overthrow of the monarchy and establishment of the French First Republic after the Insurrection of 10 August 1792. Several years later, in 1804, Napoleon Bonaparte proclaimed himself Emperor of the French in what was ostensibly a constitutional monarchy, though modern historians often describe his reign as an absolute monarchy. The Bourbon Restoration (under Louis XVIII and Charles X), the July Monarchy (under Louis-Philippe), and the Second Empire (under Napoleon III) were also constitutional monarchies, although the power of the monarch varied considerably between them and sometimes within them. The German Empire from 1871 to 1918 (as well as the earlier confederations and the monarchies it consisted of) was also a constitutional monarchy—see Constitution of the German Empire. Greece until 1973, when Constantine II was deposed by the military government. The decision was formalized by a plebiscite on 8 December 1974. Hawaii, which was an absolute monarchy from its founding in 1810, transitioned to a constitutional monarchy in 1840 when King Kamehameha III promulgated the kingdom's first constitution. This constitutional form of government continued until the monarchy was overthrown in an 1893 coup. The Kingdom of Hungary. In 1848–1849 and 1867–1918 as part of Austria-Hungary. In the interwar period (1920–1944) Hungary remained a constitutional monarchy without a reigning monarch. Iceland.
The Act of Union, a 1 December 1918 agreement with Denmark, established Iceland as a sovereign kingdom united with Denmark under a common king. Iceland abolished the monarchy and became a republic on 17 June 1944 after the Icelandic constitutional referendum of 24 May 1944. India was a constitutional monarchy, with George VI as head of state and the Earl Mountbatten as governor-general, for a brief period between gaining its independence from the British on 15 August 1947 and becoming a republic when it adopted its constitution on 26 January 1950, henceforth celebrated as Republic Day. Pahlavi Iran under Mohammad Reza Shah Pahlavi was a constitutional monarchy, which had been originally established during the Persian Constitutional Revolution in 1906. Italy until 2 June 1946, when a referendum proclaimed the end of the Kingdom and the beginning of the Republic. The Kingdom of Laos was a constitutional monarchy until 1975, when Sisavang Vatthana was forced to abdicate by the communist Pathet Lao. Malta was a constitutional monarchy with Elizabeth II as Queen of Malta, represented by a Governor-General appointed by her, for the first ten years of independence from 21 September 1964 to the declaration of the Republic of Malta on 13 December 1974. Mexico was twice an Empire. The First Mexican Empire lasted from 19 May 1822 to 19 March 1823, with Agustín I elected as emperor. Then, the Mexican monarchists and conservatives, with the help of the Austrian and Spanish crowns and Napoleon III of France, elected Maximilian of Austria as Emperor of Mexico. This constitutional monarchy lasted three years, from 1864 to 1867. Montenegro until 1918, when it merged with Serbia and other areas to form Yugoslavia. Nepal until 28 May 2008, when King Gyanendra was deposed, and the Federal Democratic Republic of Nepal was declared. Ottoman Empire from 1876 until 1878 and again from 1908 until the dissolution of the empire in 1922. Pakistan was a constitutional monarchy for a brief period between gaining its independence from the British on 14 August 1947 and becoming a republic when it adopted the first Constitution of Pakistan on 23 March 1956. The Dominion of Pakistan had a total of two monarchs (George VI and Elizabeth II) and four Governors-General (Muhammad Ali Jinnah being the first). Republic Day (or Pakistan Day) is celebrated every year on 23 March to commemorate the adoption of its Constitution and the transition of the Dominion of Pakistan to the Islamic Republic of Pakistan. The Polish–Lithuanian Commonwealth, formed after the Union of Lublin in 1569 and lasting until the final partition of the state in 1795, operated much like many modern European constitutional monarchies (into which it was officially changed by the establishment of the Constitution of 3 May 1791, which historian Norman Davies calls "the first constitution of its kind in Europe"). The legislators of the unified state truly did not see it as a monarchy at all, but as a republic under the presidency of the King. Poland–Lithuania also followed the principle of , had a bicameral parliament, and maintained a collection of entrenched legal documents amounting to a constitution along the lines of the modern United Kingdom. The King was elected and had the duty of maintaining the people's rights. Portugal had been a monarchy since 1139 and was a constitutional monarchy from 1822 to 1828, and again from 1834 until 1910, when Manuel II was overthrown by a military coup.
From 1815 to 1825, it was part of the United Kingdom of Portugal, Brazil and the Algarves, which was a constitutional monarchy for the years 1820–23. Kingdom of Romania from its establishment in 1881 until 1947, when Michael I was forced to abdicate by the communists. Kingdom of Serbia from 1882 until 1918, when it merged with the State of Slovenes, Croats and Serbs into the unitary Yugoslav Kingdom, which was led by the Serbian Karadjordjević dynasty. Trinidad and Tobago was a constitutional monarchy with Elizabeth II as Queen of Trinidad and Tobago, represented by a Governor-General appointed by her, for the first fourteen years of independence from 31 August 1962 to the declaration of the Republic of Trinidad and Tobago on 1 August 1976. Republic Day is celebrated every year on 24 September. Yugoslavia from 1918 (as Kingdom of Serbs, Croats and Slovenes) until 1929 and from 1931 (as Kingdom of Yugoslavia) until 1944, when, under pressure from the Allies, Peter II recognized the communist government. Unusual constitutional monarchies Andorra is a diarchy, being headed by two co-princes: the bishop of Urgell and the president of France. Andorra, Monaco and Liechtenstein are the only countries with reigning princes. Belgium is the only remaining explicit popular monarchy: the formal title of its king is King of the Belgians rather than King of Belgium. Historically, several defunct constitutional monarchies followed this model; the Belgian formulation is recognized to have been modelled on the title "King of the French" granted by the Charter of 1830 to the monarch of the July Monarchy. Japan is the only country remaining with an emperor. Luxembourg is the only country remaining with a grand duke. Malaysia is a federal country with an elective monarchy: the Yang di-Pertuan Agong is selected from among nine state rulers who are also constitutional monarchs themselves. Papua New Guinea. Unlike in most other Commonwealth realms, sovereignty is constitutionally vested in the citizenry of Papua New Guinea and the preamble to the constitution states "that all power belongs to the people—acting through their duly elected representatives". The monarch has been, according to section 82 of the constitution, "requested by the people of Papua New Guinea, through their Constituent Assembly, to become [monarch] and Head of State of Papua New Guinea" and thus acts in that capacity. Spain. The Constitution of Spain does not even recognize the monarch as sovereign, but just as the head of state (Article 56). Article 1, Section 2, states that "the national sovereignty is vested in the Spanish people". The United Arab Emirates is a federal country with an elective monarchy: the President, or Ra'is, is selected from among the rulers of the seven emirates, each of whom is a hereditary absolute monarch in their own emirate. See also Australian Monarchist League Criticism of monarchy Monarchism Figurehead Parliamentary republic Reserve power References Citations Sources Further reading Monarchy Constitutional state types
5653
https://en.wikipedia.org/wiki/Clarke%27s%20three%20laws
Clarke's three laws
British science fiction writer Arthur C. Clarke formulated three adages that are known as Clarke's three laws, of which the third law is the best known and most widely cited. They are part of his ideas in his extensive writings about the future. The laws The laws are: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong. The only way of discovering the limits of the possible is to venture a little way past them into the impossible. Any sufficiently advanced technology is indistinguishable from magic. Origins One account stated that Clarke's laws were developed after the editor of his works in French started numbering the author's assertions. All three laws appear in Clarke's essay "Hazards of Prophecy: The Failure of Imagination", first published in Profiles of the Future (1962); however, they were not all published at the same time. Clarke's first law was proposed in the 1962 edition of the essay, as "Clarke's Law" in Profiles of the Future. The second law is offered as a simple observation in the same essay, but its status as Clarke's second law was conferred by others. It was initially a derivative of the first law and formally became Clarke's second law when the author proposed the third law in the 1973 revision of Profiles of the Future, which included an acknowledgement. It was also here that Clarke wrote about the third law in these words: "As three laws were good enough for Newton, I have modestly decided to stop there". The third law is the best known and most widely cited. It was published in a 1968 letter to Science magazine and eventually added to the 1973 revision of the "Hazards of Prophecy" essay. In 1952, Isaac Asimov, in his book Foundation and Empire (part 1.1, "Search for Magicians"), wrote a similar phrase: "... an uninformed public tends to confuse scholarship with magicians..." It also echoes a statement in a 1942 story by Leigh Brackett: "Witchcraft to the ignorant, ... simple science to the learned". Even earlier examples of this sentiment may be found in Wild Talents (1932) by Charles Fort: "...a performance that may someday be considered understandable, but that, in these primitive times, so transcends what is said to be the known that it is what I mean by magic," and in the short story The Hound of Death (1933) by Agatha Christie: "The supernatural is only the nature of which the laws are not yet understood." Virginia Woolf's 1928 novel Orlando: A Biography also explicitly compares advanced technology to magic. Clarke gave an example of the third law when he said that while he "would have believed anyone who told him back in 1962 that there would one day exist a book-sized object capable of holding the content of an entire library, he would never have accepted that the same device could find a page or word in a second and then convert it into any typeface and size from Albertus Extra Bold to Zurich Calligraphic", referring to his memory of "seeing and hearing Linotype machines which slowly converted 'molten lead into front pages that required two men to lift them'". Variants of the third law The third law has inspired many snowclones and other variations: Any sufficiently advanced extraterrestrial intelligence is indistinguishable from God.
(Shermer's last law) Any sufficiently advanced act of benevolence is indistinguishable from malevolence (referring to artificial intelligence) Any sufficiently advanced incompetence is indistinguishable from malice (Grey's law) Corollaries Isaac Asimov's Corollary to Clarke's First Law: "When, however, the lay public rallies round an idea that is denounced by distinguished but elderly scientists and supports that idea with great fervour and emotion – the distinguished but elderly scientists are then, after all, probably right." A contrapositive of the third law is "Any technology distinguishable from magic is insufficiently advanced." (Gehm's corollary) See also Asimov's References External links The origins of the Three Laws "What's Your Law?" (lists some of the corollaries) "A Gadget Too Far" at Infinity Plus Adages Arthur C. Clarke Technology folklore Technology forecasting Principles
5654
https://en.wikipedia.org/wiki/Caspar%20David%20Friedrich
Caspar David Friedrich
Caspar David Friedrich (5 September 1774 – 7 May 1840) was a German Romantic landscape painter, generally considered the most important German artist of his generation. He is best known for his allegorical landscapes, which typically feature contemplative figures silhouetted against night skies, morning mists, barren trees or Gothic ruins. His primary interest was the contemplation of nature, and his often symbolic and anti-classical work seeks to convey a subjective, emotional response to the natural world. Friedrich's paintings characteristically set a human presence in diminished perspective amid expansive landscapes, reducing the figures to a scale that, according to the art historian Christopher John Murray, directs "the viewer's gaze towards their metaphysical dimension". Friedrich was born in the town of Greifswald on the Baltic Sea in what was at the time Swedish Pomerania. He studied in Copenhagen until 1798, before settling in Dresden. He came of age during a period when, across Europe, a growing disillusionment with materialistic society was giving rise to a new appreciation of spirituality. This shift in ideals was often expressed through a reevaluation of the natural world, as artists such as Friedrich, J. M. W. Turner and John Constable sought to depict nature as a "divine creation, to be set against the artifice of human civilization". Friedrich's work brought him renown early in his career. Contemporaries such as the French sculptor David d'Angers spoke of him as having discovered "the tragedy of landscape". His work nevertheless fell from favour during his later years, and he died in obscurity. As Germany moved towards modernisation in the late 19th century, a new sense of urgency characterised its art, and Friedrich's contemplative depictions of stillness came to be seen as products of a bygone age. The early 20th century brought a renewed appreciation of his art, beginning in 1906 with an exhibition of thirty-two of his paintings in Berlin. His work influenced Expressionist artists and later Surrealists and Existentialists. The rise of Nazism in the early 1930s saw a resurgence in Friedrich's popularity, but this was followed by a sharp decline as his paintings were, by association with the Nazi movement, seen as promoting German nationalism. In the late 1970s Friedrich regained his reputation as an icon of the German Romantic movement and a painter of international importance. Life Early years and family Caspar David Friedrich was born on 5 September 1774, in Greifswald, Swedish Pomerania, on the Baltic coast of Germany. The sixth of ten children, he was raised in the strict Lutheran creed of his father Adolf Gottlieb Friedrich, a candle-maker and soap boiler. Records of the family's financial circumstances are contradictory; while some sources indicate the children were privately tutored, others record that they were raised in relative poverty. He became familiar with death from an early age. His mother, Sophie, died in 1781 when he was seven. A year later, his sister Elisabeth died, and a second sister, Maria, succumbed to typhus in 1791. Arguably the greatest tragedy of his childhood happened in 1787 when his brother Johann Christoffer died: at the age of thirteen, Caspar David witnessed his younger brother fall through the ice of a frozen lake, and drown. Some accounts suggest that Johann Christoffer perished while trying to rescue Caspar David, who was also in danger on the ice. 
Friedrich began his formal study of art in 1790 as a private student of artist Johann Gottfried Quistorp at the University of Greifswald in his home city, at which the art department is now named Caspar-David-Friedrich-Institut in his honour. Quistorp took his students on outdoor drawing excursions; as a result, Friedrich was encouraged to sketch from life at an early age. Through Quistorp, Friedrich met and was subsequently influenced by the theologian Ludwig Gotthard Kosegarten, who taught that nature was a revelation of God. Quistorp introduced Friedrich to the work of the German 17th-century artist Adam Elsheimer, whose works often included religious subjects dominated by landscape, and nocturnal subjects. During this period he also studied literature and aesthetics with Swedish professor Thomas Thorild. Four years later Friedrich entered the prestigious Academy of Copenhagen, where he began his education by making copies of casts from antique sculptures before proceeding to drawing from life. Living in Copenhagen afforded the young painter access to the Royal Picture Gallery's collection of 17th-century Dutch landscape painting. At the Academy he studied under teachers such as Christian August Lorentzen and the landscape painter Jens Juel. These artists were inspired by the Sturm und Drang movement and represented a midpoint between the dramatic intensity and expressive manner of the budding Romantic aesthetic and the waning neo-classical ideal. Mood was paramount, and influence was drawn from such sources as the Icelandic legend of Edda, the poems of Ossian and Norse mythology. Move to Dresden Friedrich settled permanently in Dresden in 1798. During this early period, he experimented in printmaking with etchings and designs for woodcuts which his furniture-maker brother cut. By 1804 he had produced 18 etchings and four woodcuts; they were apparently made in small numbers and only distributed to friends. Despite these forays into other media, he gravitated toward working primarily with ink, watercolour and sepias. With the exception of a few early pieces, such as Landscape with Temple in Ruins (1797), he did not work extensively with oils until his reputation was more established. Landscapes were his preferred subject, inspired by frequent trips, beginning in 1801, to the Baltic coast, Bohemia, the Krkonoše and the Harz Mountains. Mostly based on the landscapes of northern Germany, his paintings depict woods, hills, harbors, morning mists and other light effects based on a close observation of nature. These works were modeled on sketches and studies of scenic spots, such as the cliffs on Rügen, the surroundings of Dresden and the river Elbe. He executed his studies almost exclusively in pencil, even providing topographical information, yet the subtle atmospheric effects characteristic of Friedrich's mid-period paintings were rendered from memory. These effects took their strength from the depiction of light, and of the illumination of sun and moon on clouds and water: optical phenomena peculiar to the Baltic coast that had never before been painted with such an emphasis. His reputation as an artist was established when he won a prize in 1805 at the Weimar competition organised by Johann Wolfgang von Goethe. At the time, the Weimar competition tended to draw mediocre and now-forgotten artists presenting derivative mixtures of neo-classical and pseudo-Greek styles. 
The poor quality of the entries began to prove damaging to Goethe's reputation, so when Friedrich entered two sepia drawings—Procession at Dawn and Fisher-Folk by the Sea—the poet responded enthusiastically and wrote, "We must praise the artist's resourcefulness in this picture fairly. The drawing is well done, the procession is ingenious and appropriate ... his treatment combines a great deal of firmness, diligence and neatness ... the ingenious watercolour ... is also worthy of praise." Friedrich completed the first of his major paintings in 1808, at the age of 34. Cross in the Mountains, today known as the Tetschen Altar, is an altarpiece panel said to have been commissioned for a family chapel in Tetschen, Bohemia. The panel depicts a cross in profile at the top of a mountain, alone, and surrounded by pine trees. Although the altarpiece was generally coldly received, it was Friedrich's first painting to receive wide publicity. The artist's friends publicly defended the work, while art critic Basilius von Ramdohr published a long article challenging Friedrich's use of landscape in a religious context. He rejected the idea that landscape painting could convey explicit meaning, writing that it would be "a veritable presumption, if landscape painting were to sneak into the church and creep onto the altar". Friedrich responded with a programme describing his intentions in 1809, comparing the rays of the evening sun to the light of the Holy Father. This statement marked the only time Friedrich recorded a detailed interpretation of his own work, and the painting was among the few commissions the artist ever received. Following the purchase of two of his paintings by the Prussian Crown Prince, Friedrich was elected a member of the Berlin Academy in 1810. Yet in 1816, he sought to distance himself from Prussian authority and applied that June for Saxon citizenship. The move was not expected; the Saxon government was pro-French, while Friedrich's paintings were seen as generally patriotic and distinctly anti-French. Nevertheless, with the aid of his Dresden-based friend Graf Vitzthum von Eckstädt, Friedrich attained citizenship, and in 1818, membership in the Saxon Academy with a yearly dividend of 150 thalers. Although he had hoped to receive a full professorship, it was never awarded him as, according to the German Library of Information, "it was felt that his painting was too personal, his point of view too individual to serve as a fruitful example to students." Politics too may have played a role in stalling his career: Friedrich's decidedly Germanic subjects and costuming frequently clashed with the era's prevailing pro-French attitudes. Marriage On 21 January 1818, Friedrich married Caroline Bommer, the twenty-five-year-old daughter of a dyer from Dresden. The couple had three children, with their first, Emma, arriving in 1820. Physiologist and painter Carl Gustav Carus notes in his biographical essays that marriage did not impact significantly on either Friedrich's life or personality, yet his canvasses from this period, including Chalk Cliffs on Rügen—painted after his honeymoon—display a new sense of levity, while his palette is brighter and less austere. Human figures appear with increasing frequency in the paintings of this period, which Siegel interprets as a reflection that "the importance of human life, particularly his family, now occupies his thoughts more and more, and his friends, his wife, and his townspeople appear as frequent subjects in his art." 
Around this time, he found support from two sources in Russia. In 1820, the Grand Duke Nikolai Pavlovich, at the behest of his wife Alexandra Feodorovna, visited Friedrich's studio and returned to Saint Petersburg with a number of his paintings, an exchange that began a patronage that continued for many years. Not long thereafter, the poet Vasily Zhukovsky, tutor to the Grand Duke's son (later Tsar Alexander II), met Friedrich in 1821 and found in him a kindred spirit. For decades Zhukovsky helped Friedrich both by purchasing his work himself and by recommending his art to the royal family; his assistance toward the end of Friedrich's career proved invaluable to the ailing and impoverished artist. Zhukovsky remarked that his friend's paintings "please us by their precision, each of them awakening a memory in our mind." Friedrich was acquainted with Philipp Otto Runge, another leading German painter of the Romantic period. He was also a friend of Georg Friedrich Kersting, and painted him at work in his unadorned studio, and of the Norwegian painter Johan Christian Clausen Dahl (1788–1857). Dahl was close to Friedrich during the artist's final years, and he expressed dismay that to the art-buying public, Friedrich's pictures were only "curiosities". While the poet Zhukovsky appreciated Friedrich's psychological themes, Dahl praised the descriptive quality of Friedrich's landscapes, commenting that "artists and connoisseurs saw in Friedrich's art only a kind of mystic, because they themselves were only looking out for the mystic ... They did not see Friedrich's faithful and conscientious study of nature in everything he represented". During this period Friedrich frequently sketched memorial monuments and sculptures for mausoleums, reflecting his obsession with death and the afterlife; he even created designs for some of the funerary art in Dresden's cemeteries. Some of these works were lost in the fire that destroyed Munich's Glass Palace (1931) and later in the 1945 bombing of Dresden. Later life Friedrich's reputation steadily declined over the final fifteen years of his life. As the ideals of early Romanticism passed from fashion, he came to be viewed as an eccentric and melancholy character, out of touch with the times. Gradually his patrons fell away. By 1820, he was living as a recluse and was described by friends as the "most solitary of the solitary". Towards the end of his life he lived in relative poverty. He became isolated and spent long periods of the day and night walking alone through woods and fields, often beginning his strolls before sunrise. He suffered his first stroke in June 1835, which left him with minor limb paralysis and greatly reduced his ability to paint. As a result, he was unable to work in oil; instead he was limited to watercolour, sepia and reworking older compositions. Although his vision remained strong, he had lost the full strength of his hand. Yet he was able to produce a final 'black painting', Seashore by Moonlight (1835–1836), described by Vaughan as the "darkest of all his shorelines, in which richness of tonality compensates for the lack of his former finesse". Symbols of death appeared in his work from this period. Soon after his stroke, the Russian royal family purchased a number of his earlier works, and the proceeds allowed him to travel to Teplitz—in today's Czech Republic—to recover. During the mid-1830s, Friedrich began a series of portraits and he returned to observing himself in nature. 
As the art historian William Vaughan observed, however, "He can see himself as a man greatly changed. He is no longer the upright, supportive figure that appeared in Two Men Contemplating the Moon in 1819. He is old and stiff ... he moves with a stoop". By 1838, he was capable of working only in a small format. He and his family were living in poverty and grew increasingly dependent for support on the charity of friends. Death Friedrich died in Dresden on 7 May 1840, and was buried in Dresden's Trinitatis-Friedhof (Trinity Cemetery) east of the city centre (the entrance to which he had painted some 15 years earlier). His simple flat gravestone lies north-west of the central roundel within the main avenue. By this time, his reputation and fame had waned, and his passing was little noticed within the artistic community. His artwork had certainly been acknowledged during his lifetime, but not widely. While the close study of landscape and an emphasis on the spiritual elements of nature were commonplace in contemporary art, his interpretations were highly original and personal. By 1838, his work no longer sold or received attention from critics; the Romantic movement had moved away from the early idealism that the artist had helped found. Carl Gustav Carus later wrote a series of articles which paid tribute to Friedrich's transformation of the conventions of landscape painting. However, Carus' articles placed Friedrich firmly in his time, and did not place the artist within a continuing tradition. Only one of his paintings had been reproduced as a print, and that was produced in very few copies. Themes Landscape and the sublime The visualisation and portrayal of landscape in an entirely new manner was Friedrich's key innovation. He sought not just to explore the blissful enjoyment of a beautiful view, as in the classic conception, but rather to examine an instant of sublimity, a reunion with the spiritual self through the contemplation of nature. Friedrich was instrumental in transforming landscape in art from a backdrop subordinated to human drama to a self-contained emotive subject. Friedrich's paintings commonly employed the Rückenfigur—a person seen from behind, contemplating the view. The viewer is encouraged to place himself in the position of the Rückenfigur, by which means he experiences the sublime potential of nature, understanding that the scene is as perceived and idealised by a human. Friedrich created the idea of a landscape full of romantic feeling—die romantische Stimmungslandschaft. His art details a wide range of geographical features, such as rock coasts, forests and mountain scenes, and often uses landscape to express religious themes. During his lifetime, most of his best-known paintings were viewed as expressions of a religious mysticism. He wrote: "The artist should paint not only what he sees before him, but also what he sees within him. If, however, he sees nothing within him, then he should also refrain from painting that which he sees before him. Otherwise, his pictures will be like those folding screens behind which one expects to find only the sick or the dead." Expansive skies, storms, mist, forests, ruins and crosses bearing witness to the presence of God are frequent elements in Friedrich's landscapes.
Though death finds symbolic expression in boats that move away from shore—a Charon-like motif—and in the poplar tree, it is referenced more directly in paintings like The Abbey in the Oakwood (1808–1810), in which monks carry a coffin past an open grave, toward a cross, and through the portal of a church in ruins. He was one of the first artists to portray winter landscapes in which the land is rendered as stark and dead. Friedrich's winter scenes are solemn and still—according to the art historian Hermann Beenken, Friedrich painted winter scenes in which "no man has yet set his foot. The theme of nearly all the older winter pictures had been less winter itself than life in winter. In the 16th and 17th centuries, it was thought impossible to leave out such motifs as the crowd of skaters, the wanderer ... It was Friedrich who first felt the wholly detached and distinctive features of a natural life. Instead of many tones, he sought the one; and so, in his landscape, he subordinated the composite chord into one single basic note". Bare oak trees and tree stumps, such as those in Raven Tree, Man and Woman Contemplating the Moon, and Willow Bush under a Setting Sun, are recurring elements of his paintings, and usually symbolise death. Countering the sense of despair are Friedrich's symbols for redemption: the cross and the clearing sky promise eternal life, and the slender moon suggests hope and the growing closeness of Christ. In his paintings of the sea, anchors often appear on the shore, also indicating a spiritual hope. In The Abbey in the Oakwood, the movement of the monks away from the open grave and toward the cross and the horizon imparts Friedrich's message that the final destination of man's life lies beyond the grave. With dawn and dusk constituting prominent themes of his landscapes, Friedrich's own later years were characterised by a growing pessimism. His work becomes darker, revealing a fearsome monumentality. The Wreck of the Hope—also known as The Polar Sea or The Sea of Ice (1823–1824)—perhaps best summarises Friedrich's ideas and aims at this point, though in such a radical way that the painting was not well received. Completed in 1824, it depicted a grim subject, a shipwreck in the Arctic Ocean; "the image he produced, with its grinding slabs of travertine-colored floe ice chewing up a wooden ship, goes beyond documentary into allegory: the frail bark of human aspiration crushed by the world's immense and glacial indifference." Friedrich's written commentary on aesthetics was limited to a collection of aphorisms set down in 1830, in which he explained the need for the artist to match natural observation with an introspective scrutiny of his own personality. His best-known remark advises the artist to "close your bodily eye so that you may see your picture first with the spiritual eye. Then bring to the light of day that which you have seen in the darkness so that it may react upon others from the outside inwards." Loneliness and death Both Friedrich's life and art have at times been perceived by some to have been marked by an overwhelming sense of loneliness. Art historians and some of his contemporaries attribute such interpretations to the losses suffered during his youth and the bleak outlook of his adulthood, while Friedrich's pale and withdrawn appearance helped reinforce the popular notion of the "taciturn man from the North". Friedrich suffered depressive episodes in 1799, 1803–1805, c. 1813, in 1816, and between 1824 and 1826.
There are noticeable thematic shifts in the works he produced during these episodes, which see the emergence of such motifs and symbols as vultures, owls, graveyards and ruins. From 1826, these motifs became a permanent feature of his output, while his use of colour became darker and more muted. Carus wrote in 1829 that Friedrich "is surrounded by a thick, gloomy cloud of spiritual uncertainty", though the noted art historian and curator Hubertus Gassner disagrees with such notions, seeing in Friedrich's work a positive and life-affirming subtext inspired by Freemasonry and religion. Germanic folklore Reflecting Friedrich's patriotism and resentment during the 1813 French occupation of the dominion of Pomerania, motifs from German folklore became increasingly prominent in his work. An anti-French German nationalist, Friedrich used motifs from his native landscape to celebrate Germanic culture, customs and mythology. He was impressed by the anti-Napoleonic poetry of Ernst Moritz Arndt and Theodor Körner, and the patriotic literature of Adam Müller and Heinrich von Kleist. Moved by the deaths of three friends killed in battle against France, as well as by Kleist's 1808 drama Die Hermannsschlacht, Friedrich undertook a number of paintings in which he intended to convey political symbols solely by means of the landscape—a first in the history of art. In Old Heroes' Graves (1812), a dilapidated monument inscribed "Arminius" invokes the Germanic chieftain, a symbol of nationalism, while the four tombs of fallen heroes are slightly ajar, freeing their spirits for eternity. Two French soldiers appear as small figures before a cave, lower and deep in a grotto surrounded by rock, as if farther from heaven. A second political painting, Fir Forest with the French Dragoon and the Raven (c. 1813), depicts a lost French soldier dwarfed by a dense forest, while on a tree stump a raven is perched—a prophet of doom, symbolizing the anticipated defeat of France. Legacy Influence Alongside other Romantic painters, Friedrich helped position landscape painting as a major genre within Western art. Of his contemporaries, Friedrich's style most influenced the painting of Johan Christian Dahl (1788–1857). Among later generations, Arnold Böcklin (1827–1901) was strongly influenced by his work, and the substantial presence of Friedrich's works in Russian collections influenced many Russian painters, in particular Arkhip Kuindzhi (c. 1842–1910) and Ivan Shishkin (1832–1898). Friedrich's spirituality anticipated American painters such as Albert Pinkham Ryder (1847–1917), Ralph Blakelock (1847–1919), the painters of the Hudson River School and the New England Luminists. At the turn of the 20th century, Friedrich was rediscovered by the Norwegian art historian Andreas Aubert (1851–1913), whose writing initiated modern Friedrich scholarship, and by the Symbolist painters, who valued his visionary and allegorical landscapes. The Norwegian Symbolist Edvard Munch (1863–1944) would have seen Friedrich's work during a visit to Berlin in the 1880s. Munch's 1899 print The Lonely Ones echoes Friedrich's Rückenfigur (back figure), although in Munch's work the focus has shifted away from the broad landscape and toward the sense of dislocation between the two melancholy figures in the foreground. Friedrich's modern revival gained momentum in 1906, when thirty-two of his works were featured in an exhibition in Berlin of Romantic-era art.
His landscapes exercised a strong influence on the work of German artist Max Ernst (1891–1976), and as a result other Surrealists came to view Friedrich as a precursor to their movement. In 1934, the Belgian painter René Magritte (1898–1967) paid tribute in his work The Human Condition, which directly echoes motifs from Friedrich's art in its questioning of perception and the role of the viewer. A few years later, the Surrealist journal Minotaure included Friedrich in a 1939 article by the critic Marie Landsberger, thereby exposing his work to a far wider circle of artists. The influence of The Wreck of Hope (or The Sea of Ice) is evident in the 1940–41 painting Totes Meer by Paul Nash (1889–1946), a fervent admirer of Ernst. Friedrich's work has been cited as an inspiration by other major 20th-century artists, including Mark Rothko (1903–1970), Gerhard Richter (b. 1932), Gotthard Graubner and Anselm Kiefer (b. 1945). Friedrich's Romantic paintings have also been singled out by writer Samuel Beckett (1906–89), who, standing before Man and Woman Contemplating the Moon, said "This was the source of Waiting for Godot, you know." In his 1961 article "The Abstract Sublime", originally published in ARTnews, the art historian Robert Rosenblum drew comparisons between the Romantic landscape paintings of both Friedrich and Turner with the Abstract Expressionist paintings of Mark Rothko. Rosenblum specifically describes Friedrich's 1809 painting The Monk by the Sea, Turner's The Evening Star and Rothko's 1954 Light, Earth and Blue as revealing affinities of vision and feeling. According to Rosenblum, "Rothko, like Friedrich and Turner, places us on the threshold of those shapeless infinities discussed by the aestheticians of the Sublime. The tiny monk in the Friedrich and the fisher in the Turner establish a poignant contrast between the infinite vastness of a pantheistic God and the infinite smallness of His creatures. In the abstract language of Rothko, such literal detail—a bridge of empathy between the real spectator and the presentation of a transcendental landscape—is no longer necessary; we ourselves are the monk before the sea, standing silently and contemplatively before these huge and soundless pictures as if we were looking at a sunset or a moonlit night." Critical opinion Until 1890, and especially after his friends had died, Friedrich's work lay in near-oblivion for decades. Yet, by 1890, the symbolism in his work began to ring true with the artistic mood of the day, especially in central Europe. However, despite a renewed interest and an acknowledgment of his originality, his lack of regard for "painterly effect" and thinly rendered surfaces jarred with the theories of the time. During the 1930s, Friedrich's work was used in the promotion of Nazi ideology, which attempted to fit the Romantic artist within the nationalistic Blut und Boden. It took decades for Friedrich's reputation to recover from this association with Nazism. His reliance on symbolism and the fact that his work fell outside the narrow definitions of modernism contributed to his fall from favour. In 1949, art historian Kenneth Clark wrote that Friedrich "worked in the frigid technique of his time, which could hardly inspire a school of modern painting", and suggested that the artist was trying to express in painting what is best left to poetry. Clark's dismissal of Friedrich reflected the damage the artist's reputation sustained during the late 1930s. 
Friedrich's reputation suffered further damage when his imagery was adopted by a number of Hollywood directors, including Walt Disney, who built on the work of such German cinema masters as Fritz Lang and F. W. Murnau, within the horror and fantasy genres. His rehabilitation was slow, but enhanced through the writings of such critics and scholars as Werner Hofmann, Helmut Börsch-Supan and Sigrid Hinz, who successfully rebutted the political associations ascribed to his work, developed a catalogue raisonné, and placed Friedrich within a purely art-historical context. By the 1970s, he was again being exhibited in major international galleries and found favour with a new generation of critics and art historians. Today, his international reputation is well established. He is a national icon in his native Germany, and highly regarded by art historians and connoisseurs across the Western world. He is generally viewed as a figure of great psychological complexity, and according to Vaughan, "a believer who struggled with doubt, a celebrator of beauty haunted by darkness. In the end, he transcends interpretation, reaching across cultures through the compelling appeal of his imagery. He has truly emerged as a butterfly—hopefully one that will never again disappear from our sight". Work Friedrich was a prolific artist who produced more than 500 attributed works. In line with the Romantic ideals of his time, he intended his paintings to function as pure aesthetic statements, so he was cautious that the titles given to his work were not overly descriptive or evocative. It is likely that some of today's more literal titles, such as The Stages of Life, were not given by the artist himself, but were instead adopted during one of the revivals of interest in Friedrich. Complications arise when dating Friedrich's work, in part because he often did not directly name or date his canvases. He kept a carefully detailed notebook on his output, however, which has been used by scholars to tie paintings to their completion dates. Notes References Sources External links Hermitage Museum Archive CasparDavidFriedrich.org – 89 paintings by Caspar David Friedrich Biographical timeline, Hamburg Kunsthalle Caspar David Friedrich and the German romantic landscape German masters of the nineteenth century: paintings and drawings from the Federal Republic of Germany, full text exhibition catalog from The Metropolitan Museum of Art, which contains material on Caspar David Friedrich (no. 29-36) German romantic painters German landscape painters People from Greifswald German Lutherans People from Swedish Pomerania University of Greifswald alumni 18th-century German painters 18th-century German male artists German male painters 19th-century German painters 19th-century male artists Royal Danish Academy of Fine Arts alumni People associated with the University of Greifswald 1774 births 1840 deaths Artists of the Moravian Church Academic staff of the Dresden Academy of Fine Arts 19th-century mystics
5655
https://en.wikipedia.org/wiki/Courtney%20Love
Courtney Love
Courtney Michelle Love (née Harrison; born July 9, 1964) is an American singer, guitarist, songwriter, and actress. A figure in the alternative and grunge scenes of the 1990s, she has had a career spanning four decades. She rose to prominence as the lead vocalist and rhythm guitarist of the alternative rock band Hole, which she formed in 1989. Love has drawn public attention for her uninhibited live performances and confrontational lyrics, as well as her highly publicized personal life following her marriage to Nirvana frontman Kurt Cobain. In 2020, NME named her one of the most influential singers in alternative culture of the last 30 years. Love had an itinerant childhood, but was primarily raised in Portland, Oregon, where she played in a series of short-lived bands and was active in the local punk scene. After a brief stay in a juvenile hall, she spent a year living in Dublin and Liverpool before returning to the United States and pursuing an acting career. She appeared in supporting roles in the Alex Cox films Sid and Nancy (1986) and Straight to Hell (1987) before forming the band Hole in Los Angeles with guitarist Eric Erlandson. The group received critical acclaim from the underground rock press for their 1991 debut album, produced by Kim Gordon, while their second release, Live Through This (1994), was met with critical accolades and multi-platinum sales. In 1995, Love returned to acting, earning a Golden Globe Award nomination for her performance as Althea Leasure in Miloš Forman's The People vs. Larry Flynt (1996), which established her as a mainstream actress. Hole's third album, Celebrity Skin (1998), was nominated for three Grammy Awards. Love continued to work as an actress into the early 2000s, appearing in big-budget pictures such as Man on the Moon (1999) and Trapped (2002), before releasing her first solo album, America's Sweetheart, in 2004. The next several years were marred by publicity surrounding Love's legal troubles and drug relapse, which resulted in a mandatory lockdown rehabilitation sentence in 2005 while she was writing a second solo album. That project became Nobody's Daughter, released in 2010 as a Hole album but without the former Hole lineup. Between 2014 and 2015, Love released two solo singles and returned to acting in the network series Sons of Anarchy and Empire. In 2020, she confirmed she was writing new music. Love has also been active as a writer; she co-created and co-wrote three volumes of a manga, Princess Ai, between 2004 and 2006, and wrote a memoir, Dirty Blonde (2006). Life and career 1964–1982: Childhood and education Courtney Michelle Harrison was born July 9, 1964, at Saint Francis Memorial Hospital in San Francisco, California, the first child of psychotherapist Linda Carroll (née Risi; born 1944) and Hank Harrison (1941–2022), a publisher and road manager for the Grateful Dead. Her parents met at a party held for Dizzy Gillespie in 1963, and the two married in Reno, Nevada after Carroll discovered she was pregnant. Carroll, who was adopted at birth, is the biological daughter of novelist Paula Fox. Love's matrilineal great-grandmother was Elsie Fox (née de Sola), a Cuban writer who co-wrote the film The Last Train from Madrid with Love's great-grandfather, Paul Hervey Fox, cousin of writer Faith Baldwin and actor Douglas Fairbanks. Phil Lesh, the founding bassist of the Grateful Dead, is Love's godfather.
According to Love, she was named after Courtney Farrell, the protagonist of Pamela Moore's 1956 novel Chocolates for Breakfast. Love is of Cuban, English, German, Irish, and Welsh descent. Through her mother's subsequent marriages, Love has two younger half-sisters, three younger half-brothers (one of whom died in infancy), and one adopted brother.

Love spent her early years in Haight-Ashbury, San Francisco, until her parents divorced in 1970. In a custody hearing, her mother, as well as one of her father's girlfriends, testified that Hank had dosed Courtney with LSD when she was a toddler. Carroll also alleged that Hank threatened to abduct his daughter and flee with her to a foreign country. Though Hank denied these allegations, his custody was revoked. In 1970, Carroll relocated with Love to the rural community of Marcola, Oregon, where they lived along the Mohawk River while Carroll completed her psychology degree at the University of Oregon. There, Carroll married schoolteacher Frank Rodríguez, who legally adopted Love. Though Love was baptized a Roman Catholic, her mother maintained an unorthodox home; according to Love, "There were hairy, wangly-ass hippies running around naked [doing] Gestalt therapy", and her mother raised her in a gender-free household with "no dresses, no patent leather shoes, no canopy beds, nothing".

Love attended a Montessori school in Eugene, Oregon, where she struggled academically and socially. She has said that she began seeing psychiatrists at "like, [age] three. Observational therapy. TM for tots. You name it, I've been there." When she was nine, a psychologist noted that she exhibited signs of autism, among them tactile defensiveness. Love commented in 1995: "When I talk about being introverted, I was diagnosed autistic. At an early age, I would not speak. Then I simply bloomed."

In 1972, Love's mother divorced Rodríguez, married sportswriter David Menely, and moved the family to Nelson, New Zealand. Love was enrolled at Nelson College for Girls, but was soon expelled for misbehavior. In 1973, Carroll sent Love back to Portland, Oregon, to be raised by her former stepfather and other family friends. At age 14, Love was arrested for shoplifting from a Portland department store and remanded to Hillcrest Correctional Facility, a juvenile hall in Salem, Oregon. While at Hillcrest, she became acquainted with records by Patti Smith, the Runaways, and the Pretenders, artists who later inspired her to start a band. She was intermittently placed in foster care from late 1979 until she became legally emancipated in 1980, after which she remained staunchly estranged from her mother.

Shortly after her emancipation, Love spent two months in Japan working as a topless dancer, but was deported after her passport was confiscated. She returned to Portland and began working at the strip club Mary's Club, performing under the name Love to conceal her identity; she later adopted it as her surname. She worked odd jobs, including as a DJ at a gay disco. Love said she lacked social skills, and learned them while frequenting gay clubs and spending time with drag queens. During this period, she enrolled at Portland State University, studying English and philosophy. She later commented that, had she not found a passion for music, she would have sought a career working with children. In 1981, Love was granted a small trust fund that had been left by her maternal grandparents, which she used to travel to Dublin, Ireland, where her biological father was living.
She audited courses at Trinity College, studying theology for two semesters. She later received honorary patronage from Trinity's University Philosophical Society in 2010. While in Dublin, Love met musician Julian Cope of the Teardrop Explodes at one of the band's concerts. Cope took a liking to Love and offered to let her stay at his Liverpool home in his absence. She traveled to London, where she was met by her friend and future bandmate, Robin Barbur, from Portland. Recalling Cope's offer, Love and Barbur moved into Cope's home with him and several other artists, including Pete de Freitas of Echo & the Bunnymen. De Freitas was initially hesitant to allow the girls to stay, but acquiesced as they were "alarmingly young and obviously had nowhere else to go". Love recalled: "They kind of took me in. I was sort of a mascot; I would get them coffee or tea during rehearsals." Cope writes of Love frequently in his 1994 autobiography, Head-On, in which he refers to her as "the adolescent". In July 1982, Love returned to the United States. In late 1982, she attended a Faith No More concert in San Francisco and convinced the members to let her join as a singer. The group recorded material with Love as a vocalist, but fired her; according to keyboardist Roddy Bottum, who remained Love's friend in the years after, the band wanted a "male energy". Love returned to working abroad as an erotic dancer, briefly in Taiwan, and then at a taxi dance hall in Hong Kong. By Love's account, she first used heroin while working at the Hong Kong dance hall, having mistaken it for cocaine. While still inebriated from the drug, Love was pursued by a wealthy male client who requested that she return with him to the Philippines, and gave her money to purchase new clothes. She used the money to purchase an airfare back to the United States. 1983–1987: Early music projects and film At age 19, through her then-boyfriend's mother, film costume designer Bernadene Mann, Love took a job at Paramount Studios cleaning out the wardrobe department of vintage pieces that had suffered dry rot or other damage. During this time, Love became interested in vintage fashion. She subsequently returned to Portland, where she formed short-lived musical projects with her friends Ursula Wehr and Robin Barbur (namely Sugar Babylon, later known as Sugar Babydoll). After meeting Kat Bjelland at the Satyricon nightclub in 1984, the two formed the group the Pagan Babies. Love asked Bjelland to start the band with her as a guitarist, and the two moved to San Francisco in June 1985, where they recruited bassist Jennifer Finch and drummer Janis Tanaka. According to Bjelland, "[Courtney] didn't play an instrument at the time" aside from keyboards, so Bjelland would transcribe Love's musical ideas on guitar for her. The group played several house shows and recorded one 4-track demo before disbanding in late 1985. After Pagan Babies, Love moved to Minneapolis, where Bjelland had formed the group Babes in Toyland, and briefly worked as a concert promoter before returning to California. Drummer Lori Barbero recalled Love's time in Minneapolis: Deciding to shift her focus to acting, Love enrolled at the San Francisco Art Institute and studied film under experimental director George Kuchar, featuring in one of his short films, Club Vatican. She also took experimental theater courses in Oakland taught by Whoopi Goldberg. 
In 1985, Love submitted an audition tape for the role of Nancy Spungen in the Sid Vicious biopic Sid and Nancy (1986) and was given a minor supporting role by director Alex Cox. After filming Sid and Nancy in New York City, she worked at a peep show in Times Square and squatted at the ABC No Rio social center and Pyramid Club in the East Village. That year, Cox cast her in a leading role in his film Straight to Hell (1987), a Spaghetti Western starring Joe Strummer, Dennis Hopper, and Grace Jones, shot in Spain in 1986. The film was poorly reviewed by critics, but it caught the attention of Andy Warhol, who featured Love in an episode of Andy Warhol's Fifteen Minutes. She also had a part in the 1988 Ramones music video for "I Wanna Be Sedated", appearing as a bride among dozens of party guests. Displeased by the "celebutante" fame she had attained, Love abandoned her acting career in 1988 and resumed work as a stripper in Oregon, where she was recognized by customers at a bar in the small town of McMinnville. This prompted Love to go into isolation and relocate to Anchorage, Alaska, where she lived for three months to "gather her thoughts", supporting herself by working at a strip club frequented by local fishermen. "I decided to move to Alaska because I needed to get my shit together and learn how to work", she said in retrospect. "So I went on this sort of vision quest. I got rid of all my earthly possessions. I had my bad little strip clothes and some big sweaters, and I moved into a trailer with a bunch of other strippers." 1988–1991: Beginnings of Hole At the end of 1988, Love taught herself to play guitar and relocated to Los Angeles, where she placed an ad in a local music zine: "I want to start a band. My influences are Big Black, Sonic Youth, and Fleetwood Mac." By 1989, Love had recruited guitarist Eric Erlandson; bassist Lisa Roberts, her neighbor; and drummer Caroline Rue, whom she met at a Gwar concert. Love named the band Hole after a line from Euripides' Medea ("There is a hole that pierces right through me") and a conversation in which her mother told her that she could not live her life "with a hole running through her". On July 23, 1989, Love married Leaving Trains vocalist James Moreland in Las Vegas; the marriage was annulled the same year. She later said that Moreland was a transvestite and that they had married "as a joke". After forming Hole, Love and Erlandson had a romantic relationship that lasted over a year. In Hole's formative stages, Love continued to work at strip clubs in Hollywood (including Jumbo's Clown Room and the Seventh Veil), saving money to purchase backline equipment and a touring van, while rehearsing at a Hollywood studio loaned to her by the Red Hot Chili Peppers. Hole played their first show in November 1989 at Raji's, a rock club in central Hollywood. Their debut single, "Retard Girl", was issued in April 1990 through the Long Beach indie label Sympathy for the Record Industry and was played by Rodney Bingenheimer on local rock station KROQ. Hole appeared on the cover of Flipside, a Los Angeles-based punk fanzine. In early 1991, they released their second single, "Dicknail", through Sub Pop Records. With no wave, noise rock, and grindcore bands being major influences on Love, Hole's first studio album, Pretty on the Inside, captured an abrasive sound and contained disturbing, graphic lyrics, described by Q as "confrontational [and] genuinely uninhibited". 
The record was released in September 1991 on Caroline Records, produced by Kim Gordon of Sonic Youth with production assistance from Gumball's Don Fleming; Love and Gordon had met when Hole opened for Sonic Youth during their promotional tour for Goo at the Whisky a Go Go in November 1990. In early 1991, Love sent Gordon a personal letter asking her to produce the record for the band, to which she agreed.

Pretty on the Inside received generally positive critical reception from indie and punk rock critics and was named one of the 20 best albums of the year by Spin. It gained a following in the United Kingdom, charting at number 59 on the UK Albums Chart, and its lead single, "Teenage Whore", entered the UK Indie Chart at number one. The album's feminist slant led many to tag the band as part of the riot grrrl movement, a movement with which Love did not associate. The band toured in support of the record, headlining with Mudhoney in Europe; in the United States, they opened for the Smashing Pumpkins, and performed at CBGB in New York City.

During the tour, Love briefly dated Smashing Pumpkins frontman Billy Corgan and then Nirvana frontman Kurt Cobain. The journalist Michael Azerrad states that Love and Cobain met in 1989 at the Satyricon nightclub in Portland, Oregon. However, the Cobain biographer Charles Cross gives the date as February 12, 1990; Cross said that Cobain playfully wrestled Love to the floor after she said that he looked like Dave Pirner of Soul Asylum. According to Love, she met Cobain at a Dharma Bums show in Portland, while Love's bandmate Eric Erlandson said that he and Love were introduced to Cobain in a parking lot after a concert at the Hollywood Palladium on May 17, 1991. In late 1991, Love and Cobain became re-acquainted through Jennifer Finch, one of Love's friends and former bandmates. Love and Cobain were a couple by 1992.

1992–1995: Marriage to Kurt Cobain, Live Through This and breakthrough

Shortly after completing the tour for Pretty on the Inside, Love married Cobain on Waikiki Beach in Honolulu, Hawaii, on February 24, 1992. She wore a satin and lace dress once owned by actress Frances Farmer, and Cobain wore plaid pajamas. During Love's pregnancy, Hole recorded a cover of "Over the Edge" for a Wipers tribute album, and recorded their fourth single, "Beautiful Son", which was released in April 1993. On August 18, 1992, the couple's only child, a daughter, Frances Bean Cobain, was born in Los Angeles. They relocated to Carnation, Washington, and then Seattle.

Love's first major media exposure came in a September 1992 profile with Cobain for Vanity Fair by Lynn Hirschberg, entitled "Strange Love". Cobain had become a major public figure following the surprise success of Nirvana's album Nevermind. Love was urged by her manager to participate in the cover story. During the prior year, Love and Cobain had developed a heroin addiction; the profile painted them in an unflattering light, suggesting that Love had been addicted to heroin during her pregnancy. The Los Angeles Department of Children and Family Services investigated, and custody of Frances was temporarily awarded to Love's sister Jaimee. Love claimed she was misquoted by Hirschberg, and asserted that she had immediately quit heroin during her first trimester after she discovered she was pregnant. Love later said the article had serious implications for her marriage and Cobain's mental state, suggesting it was a factor in his suicide two years later.
On September 8, 1993, Love and Cobain made their only public performance together at the Rock Against Rape benefit in Hollywood, performing two acoustic duets of "Pennyroyal Tea" and "Where Did You Sleep Last Night". Love also performed electric versions of two new Hole songs, "Doll Parts" and "Miss World", both written for their upcoming second album. In October 1993, Hole recorded their second album, Live Through This, in Atlanta. The album featured a new lineup with bassist Kristen Pfaff and drummer Patty Schemel. In April 1994, Cobain killed himself in the Seattle home he shared with Love, who was in rehab in Los Angeles at the time. In the following months, Love was rarely seen in public, staying at her home with friends and family. Cobain's remains were cremated and his ashes divided into portions by Love, who kept some in a teddy bear and some in an urn. In June, she traveled to the Namgyal Buddhist Monastery in Ithaca, New York and had Cobain's ashes ceremonially blessed by Buddhist monks. Another portion was mixed into clay and made into memorial sculptures. Live Through This was released one week after Cobain's death on Geffen's subsidiary label DGC. On June 16, Pfaff died of a heroin overdose in Seattle. For Hole's impending tour, Love recruited the Canadian bassist Melissa Auf der Maur. Hole's performance on August 26, 1994, at the Reading Festival—Love's first public performance following Cobain's death—was described by MTV as "by turns macabre, frightening and inspirational". John Peel wrote in The Guardian that Love's disheveled appearance "would have drawn whistles of astonishment in Bedlam", and that her performance "verged on the heroic ... Love steered her band through a set which dared you to pity either her recent history or that of the band ... The band teetered on the edge of chaos, generating a tension which I cannot remember having felt before from any stage." Live Through This was certified platinum in April 1995 and received numerous accolades. The success combined with Cobain's suicide produced publicity for Love, and she was featured on Barbara Walters' 10 Most Fascinating People in 1995. Her erratic onstage behavior and various legal troubles during Hole's tour compounded the media coverage of her. Hole performed a series of riotous concerts over the following year, with Love frequently appearing hysterical onstage, flashing crowds, stage diving, and getting into fights with audience members. One journalist reported that at the band's show in Boston in December 1994: "Love interrupted the music and talked about her deceased husband Kurt Cobain, and also broke out into Tourette syndrome-like rants. The music was great, but the raving was vulgar and offensive, and prompted some of the audience to shout back at her." In January 1995, Love was arrested in Melbourne for disrupting a Qantas flight after getting into an argument with a stewardess. On July 4, 1995, at the Lollapalooza Festival in George, Washington, Love threw a lit cigarette at musician Kathleen Hanna before punching her in the face, alleging that she had made a joke about her daughter. She pleaded guilty to an assault charge and was sentenced to anger management classes. In November 1995, two male teenagers sued Love for allegedly punching them during a Hole concert in Orlando, Florida in March 1995. The judge dismissed the case on grounds that the teens "weren't exposed to any greater amount of violence than could reasonably be expected at an alternative rock concert". 
Love later said she had little memory of 1994 and 1995, as she had been using large quantities of heroin and Rohypnol at the time. 1996–2002: Acting success and Celebrity Skin After Hole's world tour concluded in 1996, Love made a return to acting, first in small roles in the Jean-Michel Basquiat biopic Basquiat and the drama Feeling Minnesota (1996), and then a starring role as Larry Flynt's wife Althea in Miloš Forman's critically acclaimed 1996 film The People vs. Larry Flynt. Love went through rehabilitation and quit using heroin at the insistence of Forman; she was ordered to take multiple urine tests under the supervision of Columbia Pictures while filming, and passed all of them. Despite Columbia Pictures' initial reluctance to hire Love due to her troubled past, her performance received acclaim, earning a Golden Globe nomination for Best Actress, and a New York Film Critics Circle Award for Best Supporting Actress. Critic Roger Ebert called her work in the film "quite a performance; Love proves she is not a rock star pretending to act, but a true actress." She won several other awards from various film critic associations for the film. During this time, Love maintained what the media noted as a more decorous public image, and she appeared in ad campaigns for Versace and in a Vogue Italia spread. Following the release of The People vs. Larry Flynt, she dated her co-star Edward Norton, with whom she remained until 1999. In late 1997, Hole released the compilations My Body, the Hand Grenade and The First Session, both of which featured previously recorded material. Love attracted media attention in May 1998 after punching journalist Belissa Cohen at a party; the suit was settled out of court for an undisclosed sum. In September 1998, Hole released their third studio album, Celebrity Skin, which featured a stark power pop sound that contrasted with their earlier punk influences. Love divulged her ambition of making an album where "art meets commerce ... there are no compromises made, it has commercial appeal, and it sticks to [our] original vision." She said she was influenced by Neil Young, Fleetwood Mac, and My Bloody Valentine when writing the album. Smashing Pumpkins frontman Billy Corgan co-wrote several songs. Celebrity Skin was well received by critics; Rolling Stone called it "accessible, fiery and intimate—often at the same time ... a basic guitar record that's anything but basic." Celebrity Skin went multi-platinum, and topped "Best of Year" lists at Spin and The Village Voice. It garnered Hole's only number-one single on the Modern Rock Tracks chart with "Celebrity Skin". Hole promoted the album through MTV performances and at the 1998 Billboard Music Awards, and were nominated for three Grammy Awards at the 41st Grammy Awards ceremony. Before the release of Celebrity Skin, Love and Fender designed a low-priced Squier brand guitar, the Vista Venus. The instrument featured a shape inspired by Mercury, a little-known independent guitar manufacturer, Stratocaster, and Rickenbacker's solid body guitars. It had a single-coil and a humbucker pickup and was available in 6-string and 12-string versions. In an early 1999 interview, Love said about the Venus: "I wanted a guitar that sounded really warm and pop, but which required just one box to go dirty ... And something that could also be your first band guitar. I didn't want it all teched out. I wanted it real simple, with just one pickup switch." 
Hole toured with Marilyn Manson on the Beautiful Monsters Tour in 1999, but dropped out after nine performances; Love and Manson disagreed over production costs, and Hole was forced to open for Manson under an agreement with Interscope Records. Hole resumed touring with Imperial Teen. Love later said Hole also abandoned the tour due to Manson and Korn's (whom they also toured with in Australia) sexualized treatment of teenage female audience members. Love told interviewers at 99X.FM in Atlanta: "What I really don't like—there are certain girls that like us, or like me, who are really messed up ... they're very young, and they do not need to be taken and raped, or filmed having enema contests ... [they were] going out into the audience and picking up fourteen and fifteen-year-old girls who obviously cut themselves, and then [I had] to see them in the morning ... it's just uncool." In 1999, Love was awarded an Orville H. Gibson award for Best Female Rock Guitarist. During this time, she starred opposite Jim Carrey as his partner Lynne Margulies in the Andy Kaufman biopic Man on the Moon (1999), followed by a role as William S. Burroughs's wife Joan Vollmer in Beat (2000) alongside Kiefer Sutherland. Love was cast as the lead in John Carpenter's sci-fi horror film Ghosts of Mars, but backed out after injuring her foot. She sued the ex-wife of her then-boyfriend, James Barber, whom Love alleged had caused the injury by running over her foot with her Volvo. The following year, she returned to film opposite Lili Taylor in Julie Johnson (2001), in which she played a woman who has a lesbian relationship; Love won an Outstanding Actress award at L.A.'s Outfest. She was then cast in the thriller Trapped (2002), alongside Kevin Bacon and Charlize Theron. The film was a box-office flop. In the interim, Hole had become dormant. In March 2001, Love began a "punk rock femme supergroup", Bastard, enlisting Schemel, Veruca Salt co-frontwoman Louise Post, and bassist Gina Crosley. Post recalled: "[Love] was like, 'Listen, you guys: I've been in my Malibu, manicure, movie-star world for two years, alright? I wanna make a record. And let's leave all that grunge shit behind us, eh? We were being so improvisational, and singing together, and with a trust developing between us. It was the shit." The group recorded a demo tape, but by September 2001, Post and Crosley had left, with Post citing "unhealthy and unprofessional working conditions". In May 2002, Hole announced their breakup amid continuing litigation with Universal Music Group over their record contract. In 1997, Love and former Nirvana members Krist Novoselic and Dave Grohl formed a limited liability company, Nirvana LLC, to manage Nirvana's business dealings. In June 2001, Love filed a lawsuit to dissolve it, blocking the release of unreleased Nirvana material and delaying the release of the Nirvana compilation With the Lights Out. Grohl and Novoselic sued Love, calling her "irrational, mercurial, self-centered, unmanageable, inconsistent and unpredictable". She responded with a letter stating that "Kurt Cobain was Nirvana" and that she and his family were the "rightful heirs" to the Nirvana legacy. 2003–2008: Solo work and legal troubles In February 2003, Love was arrested at Heathrow Airport for disrupting a flight and was banned from Virgin Airlines. 
In October, she was arrested in Los Angeles after breaking several windows of her producer and then-boyfriend James Barber's home and was charged with being under the influence of a controlled substance; the ordeal resulted in her temporarily losing custody of her daughter. After the breakup of Hole, Love began composing material with songwriter Linda Perry, and in July 2003 signed a contract with Virgin Records. She began recording her debut solo album, America's Sweetheart, in France shortly after. Virgin Records released America's Sweetheart in February 2004; it received mixed reviews. Charles Aaron of Spin called it a "jaw-dropping act of artistic will and a fiery, proper follow-up to 1994's Live Through This" and awarded it eight out of ten, while Amy Phillips of The Village Voice wrote: "[Love is] willing to act out the dream of every teenage brat who ever wanted to have a glamorous, high-profile hissyfit, and she turns those egocentric nervous breakdowns into art. Sure, the art becomes less compelling when you've been pulling the same stunts for a decade. But, honestly, is there anybody out there who fucks up better?" The album sold fewer than 100,000 copies. Love later expressed regret over the record, blaming her drug problems at the time. Shortly after it was released, she told Kurt Loder on TRL: "I cannot exist as a solo artist. It's a joke." On March 17, 2004, Love appeared on the Late Show with David Letterman to promote America's Sweetheart. Her appearance drew media coverage when she lifted her shirt multiple times, flashed Letterman, and stood on his desk. The New York Times wrote: "The episode was not altogether surprising for Ms. Love, 39, whose most public moments have veered from extreme pathos—like the time she read the suicide note of her famous husband, Kurt Cobain, on MTV—to angry feminism to catfights to incoherent ranting." Hours later, in the early morning of March 18, Love was arrested in Manhattan for allegedly striking a fan with a microphone stand during a small concert in the East Village. She was released within hours and performed a scheduled concert the following evening at the Bowery Ballroom. Four days later, she called in multiple times to The Howard Stern Show, claiming in broadcast conversations with Stern that the incident had not occurred, and that actress Natasha Lyonne, who was at the concert, was told by the alleged victim that he had been paid $10,000 to file a false claim leading to Love's arrest. On July 9, 2004, her 40th birthday, Love was arrested for failing to make a court appearance for the March 2004 charges, and taken to Bellevue Hospital, allegedly incoherent, where she was placed on a 72-hour watch. According to police, she was believed to be a potential danger to herself, but deemed mentally sound and released to a rehab facility two days later. Amidst public criticism and press coverage, comedian Margaret Cho published an opinion piece, "Courtney Deserves Better from Feminists", arguing that negative associations of Love with her drug and personal problems (including from feminists) overshadowed her music and wellbeing. Love pleaded guilty in October 2004 to disorderly conduct over the incident in East Village. Love's appearance as a roaster on the Comedy Central Roast of Pamela Anderson in August 2005, in which she appeared intoxicated and disheveled, attracted further media attention. One review said that Love "acted as if she belonged in an institution". 
Six days after the broadcast, Love was sentenced to a 28-day lockdown rehab program for being under the influence of a controlled substance, violating her probation. To avoid jail time, she accepted an additional 180-day rehab sentence in September 2005. In November 2005, after completing the program, Love was discharged from the rehab center under the provision that she complete further outpatient rehab. In subsequent interviews, Love said she had been addicted to substances including prescription drugs, cocaine, and crack cocaine. She said she had been sober since completing rehabilitation in 2007, and cited her Soka Gakkai Buddhist practice (which she began in 1988) as integral to her sobriety. In the midst of her legal troubles, Love had endeavors in writing and publishing. She co-wrote a semi-autobiographical manga, Princess Ai (Japanese: プリンセス·アイ物語), with Stu Levy, illustrated by Misaho Kujiradou and Ai Yazawa; it was released in three volumes in the United States and Japan between 2004 and 2006. In 2006, Love published a memoir, Dirty Blonde, and began recording her second solo album, How Dirty Girls Get Clean, collaborating again with Perry and Billy Corgan. Love had written several songs, including an anti-cocaine song titled "Loser Dust", during her time in rehab in 2005. She told Billboard: "My hand-eye coordination was so bad [after the drug use], I didn't even know chords anymore. It was like my fingers were frozen. And I wasn't allowed to make noise [in rehab] ... I never thought I would work again." Tracks and demos for the album leaked online in 2006, and a documentary, The Return of Courtney Love, detailing the making of the album, aired on the British television network More4 in the fall of that year. A rough acoustic version of "Never Go Hungry Again", recorded during an interview for The Times in November, was also released. Incomplete audio clips of the song "Samantha", originating from an interview with NPR, were distributed on the internet in 2007. 2009–2012: Hole revival and visual art In March 2009, fashion designer Dawn Simorangkir brought a libel suit against Love concerning a defamatory post Love made on her Twitter account, which was eventually settled for $450,000. Several months later, in June 2009, NME published an article detailing Love's plan to reunite Hole and release a new album, Nobody's Daughter. In response, former Hole guitarist Eric Erlandson stated in Spin magazine that contractually no reunion could take place without his involvement; therefore Nobody's Daughter would remain Love's solo record, as opposed to a "Hole" record. Love responded to Erlandson's comments in a Twitter post, claiming "he's out of his mind, Hole is my band, my name, and my Trademark". Nobody's Daughter was released worldwide as a Hole album on April 27, 2010. For the new line-up, Love recruited guitarist Micko Larkin, Shawn Dailey (bass guitar), and Stu Fisher (drums, percussion). Nobody's Daughter featured material written and recorded for Love's unfinished solo album, How Dirty Girls Get Clean, including "Pacific Coast Highway", "Letter to God", "Samantha", and "Never Go Hungry", although they were re-produced in the studio with Larkin and engineer Michael Beinhorn. The album's subject matter was largely centered on Love's tumultuous life between 2003 and 2007, and featured a polished folk rock sound, and more acoustic guitar work than previous Hole albums. The first single from Nobody's Daughter was "Skinny Little Bitch", released to promote the album in March 2010. 
The album received mixed reviews. Robert Sheffield of Rolling Stone gave the album three out of five, saying Love "worked hard on these songs, instead of just babbling a bunch of druggy bullshit and assuming people would buy it, the way she did on her 2004 flop, America's Sweetheart". Sal Cinquemani of Slant Magazine also gave the album three out of five: "It's Marianne Faithfull's substance-ravaged voice that comes to mind most often while listening to songs like 'Honey' and 'For Once in Your Life'. The latter track is, in fact, one of Love's most raw and vulnerable vocal performances to date ... the song offers a rare glimpse into the mind of a woman who, for the last 15 years, has been as famous for being a rock star as she's been for being a victim."

Love and the band toured internationally from 2010 into late 2012 promoting the record, with their pre-release shows in London and at South by Southwest receiving critical acclaim. In 2011, Love participated in Hit So Hard, a documentary chronicling bandmate Schemel's time in Hole. In May 2012, Love debuted an art collection at Fred Torres Collaborations in New York titled "And She's Not Even Pretty", which contained over 40 drawings and paintings by Love composed in ink, colored pencil, pastels, and watercolors. Later in the year, she collaborated with Michael Stipe on the track "Rio Grande" for Johnny Depp's sea shanty album Son of Rogues Gallery, and in 2013, co-wrote and contributed vocals on "Rat A Tat" from Fall Out Boy's album Save Rock and Roll, also appearing in the song's music video.

2013–2015: Return to acting; libel lawsuits

After dropping the Hole name and performing as a solo artist in late 2012, Love appeared in spring 2013 advertisements for Yves Saint Laurent alongside Kim Gordon and Ariel Pink. Love completed a solo tour of North America in mid-2013, which was purported to be in promotion of an upcoming solo album; however, it was ultimately dubbed a "greatest hits" tour, and featured songs from Love's and Hole's back catalogue. Love told Billboard at the time that she had recorded eight songs in the studio.

Love was the subject of a second landmark libel lawsuit brought against her in January 2014 by her former attorney Rhonda Holmes, who accused Love of online defamation, seeking $8 million in damages. It was the first case of alleged Twitter-based libel in U.S. history to make it to trial. The jury, however, found in Love's favor. A subsequent defamation lawsuit filed by fashion designer Simorangkir in February 2014 resulted in Love being ordered to pay a further $350,000 in recompense.

On April 22, 2014, Love debuted the song "You Know My Name" on BBC Radio 6 to promote her tour of the United Kingdom. It was released as a double A-side single with the song "Wedding Day" on May 4, 2014, on her own label Cherry Forever Records via Kobalt Label Services. The tracks were produced by Michael Beinhorn, and feature Tommy Lee on drums. In an interview with the BBC, Love revealed that she and former Hole guitarist Eric Erlandson had reconciled, and had been rehearsing new material together, along with former bassist Melissa Auf der Maur and drummer Patty Schemel, though she did not confirm a reunion of the band. On May 1, 2014, in an interview with Pitchfork, Love commented further on the possibility of Hole reuniting, saying: "I'm not going to commit to it happening, because we want an element of surprise. There's a lot of i's to be dotted and t's to be crossed."
Love was cast in several television series in supporting parts throughout 2014, including the FX series Sons of Anarchy, Revenge, and Lee Daniels' network series Empire in a recurring guest role as Elle Dallas. The track "Walk Out on Me", featuring Love, was included on the Empire: Original Soundtrack from Season 1 album, which debuted at number 1 on the Billboard 200. Alexis Petridis of The Guardian praised the track, saying: "The idea of Courtney Love singing a ballad with a group of gospel singers seems faintly terrifying ... The reality is brilliant. Love's voice fits the careworn lyrics, effortlessly summoning the kind of ravaged darkness that Lana Del Rey nearly ruptures herself trying to conjure up." In January 2015, Love starred in a New York City stage production, Kansas City Choir Boy, a "pop opera" conceived by and co-starring Todd Almond. Charles Isherwood of The New York Times praised her performance, noting a "soft-edged and bewitching" stage presence, and wrote: "Her voice, never the most supple or rangy of instruments, retains the singular sound that made her an electrifying front woman for the band Hole: a single sustained noted can seem to simultaneously contain a plea, a wound and a threat." The show toured later in the year, with performances in Boston and Los Angeles. In April 2015, the journalist Anthony Bozza sued Love, alleging a contractual violation regarding his co-writing of her memoir. Love performed as the opening act for Lana Del Rey on her Endless Summer Tour for eight West Coast shows in May and June 2015. During her tenure, Love debuted the single "Miss Narcissist", released on Wavves' independent label Ghost Ramp. She was also cast in a supporting role in James Franco's film The Long Home, based on the novel by William Gay, her first film role in over ten years; as of 2022, it remains unreleased. 2016–present: Fashion and forthcoming music In January 2016, Love released a clothing line in collaboration with Sophia Amoruso, "Love, Courtney", featuring 18 pieces reflecting her personal style. In November 2016, she began filming the pilot for A Midsummer's Nightmare, a Shakespeare anthology series adapted for Lifetime. She starred as Kitty Menéndez in Menendez: Blood Brothers, a biopic television film based on the lives of Lyle and Erik Menéndez, which premiered on Lifetime in June 2017. In October 2017, shortly after the Harvey Weinstein scandal made news, a 2005 video of Love warning young actresses about Weinstein went viral. In the footage, while on the red carpet for the Comedy Central Roast of Pamela Anderson, Love was asked by Natasha Leggero if she had any advice for "a young girl moving to Hollywood"; she responded, "If Harvey Weinstein invites you to a private party in the Four Seasons [hotel], don't go." She later tweeted, "Although I wasn't one of his victims, I was eternally banned by [Creative Artists Agency] for speaking out." In the same year, Love was cast in Justin Kelly's biopic JT LeRoy, portraying a film producer opposite Laura Dern. In March 2018, she appeared in the music video for Marilyn Manson's "Tattooed in Reverse", and in April she appeared as a guest judge on RuPaul's Drag Race. In December, Love was awarded a restraining order against Sam Lutfi, who had acted as her manager for the previous six years, alleging verbal abuse and harassment. Her daughter, Frances, and sister, Jaimee, were also awarded restraining orders against Lutfi. 
In January 2019, a Los Angeles County judge extended the three-year order to five years, citing Lutfi's tendency to "prey upon people". On August 18, 2019, Love performed a solo set at the Yola Día festival in Los Angeles, which also featured performances by Cat Power and Lykke Li. On September 9, Love garnered press attention when she publicly criticized Joss Sackler, an heiress to the Sackler family OxyContin fortune, after she allegedly offered Love $100,000 to attend her fashion show during New York Fashion Week. In the same statement, Love indicated that she had relapsed into opioid addiction in 2018, stating that she had recently celebrated a year of sobriety. In October 2019, Love relocated from Los Angeles to London.

On November 21, 2019, Love recorded the song "Mother", written and produced by Lawrence Rothman, as part of the soundtrack for the horror film The Turning (2020). In January 2020, she received the Icon Award at the NME Awards; NME described her as "one of the most influential singers in alternative culture of the last 30 years". The following month, she confirmed she was writing a new record, which she described as "really sad ... [I'm] writing in minor chords, and that appeals to my sadness." In March 2021, Love said she had been hospitalized with acute anemia in August 2020, which had nearly killed her and severely reduced her weight; she made a full recovery. In August 2022, Love revealed the completion of her memoir, The Girl with the Most Cake, after a nearly ten-year period of writing. It was announced on May 15, 2023, that Love had been cast in Assassination, a biographical film about the assassination of John F. Kennedy, directed by David Mamet and co-starring Viggo Mortensen, Shia LaBeouf, Al Pacino, and John Travolta.

Artistry

Influences

Love has been candid about her diverse musical influences, the earliest being Patti Smith, The Runaways, and The Pretenders, artists she discovered while in juvenile hall as a young teenager. As a child, her first exposure to music was records that her parents received each month through Columbia Record Club. The first record Love owned was Leonard Cohen's Songs of Leonard Cohen (1967), which she obtained from her mother: "He was so lyric-conscious and morbid, and I was a pretty morbid kid", she recalled. As a teenager, she named Flipper, Kate Bush, Soft Cell, Joni Mitchell, Laura Nyro, Lou Reed, and Dead Kennedys among her favorite artists. While in Dublin at age fifteen, Love attended a Virgin Prunes concert, an event she credited as a pivotal influence: "I had never seen so much sex, snarl, poetry, evil, restraint, grace, filth, raw power and the very essence of rock and roll", she recalled. "[I had seen] U2 [who] gave me lashes of love and inspiration, and a few nights later the Virgin Prunes fuckedmeup." Decades later, in 2009, Love introduced the band's frontman Gavin Friday at a Carnegie Hall event, and performed a song with him.

Though often associated with punk music, Love has noted that her most significant musical influences have been post-punk and new wave artists. Commenting in 2021, Love said:

Over the years, Love has also named several other new wave and post-punk bands as influences, including The Smiths, Siouxsie and the Banshees, Television, and Bauhaus. Love's diverse genre interests were illustrated in a 1991 interview with Flipside, in which she stated: "There's a part of me that wants to have a grindcore band and another that wants to have a Raspberries-type pop band."
Discussing the abrasive sound of Hole's debut album, she said she felt she had to "catch up with all my hip peers who'd gone all indie on me, and who made fun of me for liking R.E.M. and The Smiths." She has also embraced the influence of experimental artists and punk rock groups, including Sonic Youth, Swans, Big Black, Diamanda Galás, the Germs, and The Stooges. While writing Celebrity Skin, she drew influence from Neil Young and My Bloody Valentine. She has also cited her contemporary PJ Harvey as an influence, saying: "The one rock star that makes me know I'm shit is Polly Harvey. I'm nothing next to the purity that she experiences."

Literature and poetry have often been a major influence on her songwriting; Love said she had "always wanted to be a poet, but there was no money in it." She has named the works of T.S. Eliot and Charles Baudelaire as influential, and referenced works by Dante Rossetti, William Shakespeare, Rudyard Kipling, and Anne Sexton in her lyrics.

Musical style and lyrics

Musically, Love's work with Hole and her solo efforts have been characterized as alternative rock; Hole's early material, however, was described by critics as being stylistically closer to grindcore and aggressive punk rock. Spin's October 1991 review of Hole's first album noted that Love's layering of harsh and abrasive riffs buried more sophisticated musical arrangements. In 1998, she stated that Hole had "always been a pop band. We always had a subtext of pop. I always talked about it, if you go back ... what'll sound like some weird Sonic Youth tuning back then to you was sounding like the Raspberries to me, in my demented pop framework."

Love's lyrical content is composed from a female's point of view, and her lyrics have been described as "literate and mordant" and noted by scholars for "articulating a third-wave feminist consciousness." Simon Reynolds, in reviewing Hole's debut album, noted: "Ms. Love's songs explore the full spectrum of female emotions, from vulnerability to rage. The songs are fueled by adolescent traumas, feelings of disgust about the body, passionate friendships with women and the desire to escape domesticity. Her lyrical style could be described as emotional nudism." Journalist and critic Kim France, in critiquing Love's lyrics, referred to her as a "dark genius" and likened her work to that of Anne Sexton.

Love has remarked that lyrics have always been the most important component of songwriting for her: "The important thing for me ... is it has to look good on the page. I mean, you can love Led Zeppelin and not love their lyrics ... but I made a big effort in my career to have what's on the page mean something." Common themes present in Love's lyrics during her early career included body image, rape, suicide, conformity, pregnancy, prostitution, and death. In a 1991 interview with Everett True, she said: "I try to place [beautiful imagery] next to fucked up imagery, because that's how I view things ... I sometimes feel that no one's taken the time to write about certain things in rock, that there's a certain female point of view that's never been given space." Critics have noted that Love's later musical work is more lyrically introspective. Celebrity Skin and America's Sweetheart are lyrically centered on celebrity life, Hollywood, and drug addiction, while continuing Love's interest in vanity and body image.
Nobody's Daughter was lyrically reflective of Love's past relationships and her struggle for sobriety, with the majority of its lyrics written while she was in rehab in 2006. Performance Love has a contralto vocal range. According to Love, she never wanted to be a singer, but rather aspired to be a skilled guitarist: "I'm such a lazy bastard though that I never did that", she said. "I was always the only person with the nerve to sing, and so I got stuck with it." She has been regularly noted by critics for her husky vocals as well as her "banshee [-like]" screaming abilities. Her vocals have been compared to those of Johnny Rotten, and David Fricke of Rolling Stone described them as "lung-busting" and "a corrosive, lunatic wail". Upon the release of Hole's 2010 album, Nobody's Daughter, Amanda Petrusich of Pitchfork compared Love's raspy, unpolished vocals to those of Bob Dylan. In 2023, Rolling Stone ranked Love at number 130 on its list of the 200 Greatest Singers of All Time. She has played a variety of Fender guitars throughout her career, including a Jaguar and a vintage 1965 Jazzmaster; the latter was purchased by the Hard Rock Cafe and is on display in New York City. Between 1989 and 1991, Love primarily played a Rickenbacker 425 because she "preferred the 3/4 neck", but she destroyed the guitar onstage at a 1991 concert opening for the Smashing Pumpkins. In the mid-1990s, she often played a guitar made by Mercury, an obscure company that manufactured custom guitars, as well as a Univox Hi-Flier. Fender's Vista Venus, designed by Love in 1998, was partially inspired by Rickenbacker guitars as well as her Mercury. During tours after the release of Nobody's Daughter (post-2010), Love has played a Rickenbacker 360 onstage. Her setup has included Fender tube gear, Matchless, Ampeg, Silvertone and a solid-state 1976 Randall Commander. Love has referred to herself as "a shit guitar player", further commenting in a 2014 interview: "I can still write a song, but [the guitar playing] sounds like shit ... I used to be a good rhythm player but I am no longer dependable." Throughout her career, she has also garnered a reputation for unpredictable live shows. In the 1990s, her performances with Hole were characterized by confrontational behavior, with Love stage diving, smashing guitars or throwing them into the audience, wandering into the crowd at the end of sets, and engaging in sometimes incoherent rants. Critics and journalists have noted Love for her comical, often stream-of-consciousness-like stage banter. Music journalist Robert Hilburn wrote in 1993 that, "rather than simply scripted patter, Love's comments between songs [have] the natural feel of someone who is sharing her immediate feelings." In a review of a live performance published in 2010, it was noted that Love's onstage "one-liners [were] worthy of the Comedy Store." Philanthropy In 1993, Love and husband Kurt Cobain performed an acoustic set together at the Rock Against Rape benefit in Los Angeles, which raised awareness and provided resources for victims of sexual abuse. In 2000, Love publicly advocated for reform of the record industry in a personal letter published by Salon. In the letter, Love said: "It's not piracy when kids swap music over the Internet using Napster or Gnutella or Freenet or iMesh or beaming their CDs into a My.MP3.com or MyPlay.com music locker. 
It's piracy when those guys that run those companies make side deals with the cartel lawyers and label heads so that they can be 'the label's friend', and not the artists'." In a subsequent interview with Carrie Fisher, she said that she was interested in starting a union for recording artists, and also discussed race relations in the music industry, advocating for record companies to "put money back into the black community [whom] white people have been stealing from for years." Love has been a long-standing supporter of LGBT causes. She has frequently collaborated with Los Angeles Gay and Lesbian Center, taking part in the center's "An Evening with Women" events. The proceeds of the event help provide food and shelter for homeless youth; services for seniors; legal assistance; domestic violence services; health and mental health services, and cultural arts programs. Love participated with Linda Perry for the event in 2012, and performed alongside Aimee Mann and comedian Wanda Sykes. Speaking on her collaboration on the event, Love said: "Seven thousand kids in Los Angeles a year go out on the street, and forty percent of those kids are gay, lesbian, or transgender. They come out to their parents, and become homeless ... for whatever reason, I don't really know why, but gay men have a lot of foundations—I've played many of them—but the lesbian side of it doesn't have as much money and/or donors, so we're excited that this has grown to cover women and women's affairs." She has also contributed to AIDS organizations, partaking in benefits for amfAR and the RED Campaign. In May 2011, she donated six of her husband Cobain's personal vinyl records for auction at Mariska Hargitay's Joyful Heart Foundation event for victims of child abuse, rape, and domestic violence. She has also supported the Sophie Lancaster Foundation. Influence Love has had an impact on female-fronted alternative acts and performers. She has been cited as influential on young female instrumentalists in particular, having once infamously proclaimed: "I want every girl in the world to pick up a guitar and start screaming ... I strap on that motherfucking guitar and you cannot fuck with me. That's my feeling." In The Electric Guitar: A History of an American Icon, it is noted: With over 3 million records sold in the United States alone, Hole became one of the most successful rock bands of all time fronted by a woman. VH1 ranked Love 69 in their list of The 100 Greatest Women in Music History in 2012. In 2015, the Phoenix New Times declared Love the number one greatest female rock star of all time, writing: "To build a perfect rock star, there are several crucial ingredients: musical talent, physical attractiveness, tumultuous relationships, substance abuse, and public meltdowns, just to name a few. These days, Love seems to have rebounded from her epic tailspin and has leveled out in a slightly more normal manner, but there's no doubt that her life to date is the type of story people wouldn't believe in a novel or a movie." Among the alternative musicians who have cited Love as an influence are Scout Niblett; Brody Dalle of The Distillers; Dee Dee Penny of Dum Dum Girls; Victoria Legrand of Beach House; Annie Hardy of Giant Drag; and Nine Black Alps. Contemporary female pop artists Lana Del Rey, Avril Lavigne, Tove Lo, and Sky Ferreira have also cited Love as an influence. 
Love has frequently been recognized as the most high-profile contributor of feminist music during the 1990s, and for "subverting [the] mainstream expectations of how a woman should look, act, and sound." According to music journalist Maria Raha, "Hole was the highest-profile female-fronted band of the '90s to openly and directly sing about feminism." Patti Smith, a major influence of Love's, also praised her, saying: "I hate genderizing things ... [but] when I heard Hole, I was amazed to hear a girl sing like that. Janis Joplin was her own thing; she was into Big Mama Thornton and Bessie Smith. But what Courtney Love does, I'd never heard a girl do that." She has also been a gay icon since the mid-1990s, and has jokingly referred to her fanbase as consisting of "females, gay guys, and a few advanced, evolved heterosexual men."

Love's aesthetic image, particularly in the early 1990s, also became influential and was dubbed "kinderwhore" by critics and media. The subversive fashion mainly consisted of vintage babydoll dresses accompanied by smeared makeup and red lipstick. MTV reporter Kurt Loder described Love as looking like "a debauched rag doll" onstage. Love later said she had been influenced by the fashion of Chrissy Amphlett of the Divinyls. Interviewed in 1994, Love commented: "I would like to think–in my heart of hearts–that I'm changing some psychosexual aspects of rock music. Not that I'm so desirable. I didn't do the kinder-whore thing because I thought I was so hot. When I see the look used to make one more appealing, it pisses me off. When I started, it was a What Ever Happened to Baby Jane? thing. My angle was irony."

Discography

Hole discography
Pretty on the Inside (1991)
Live Through This (1994)
Celebrity Skin (1998)
Nobody's Daughter (2010)

Solo discography
America's Sweetheart (2004)

Filmography

Sid and Nancy (1986)
Straight to Hell (1987)
The People vs. Larry Flynt (1996)
200 Cigarettes (1999)
Man on the Moon (1999)
Julie Johnson (2001)
Trapped (2002)

External links

Works by or about Courtney Love (library search via WorldCat)
5657
https://en.wikipedia.org/wiki/Cow%20%28disambiguation%29
Cow (disambiguation)
Cow is a colloquial term for cattle, and the name of female cattle.

Cow, cows or COW may also refer to:

Science and technology
Cow, an adult female of several animals
AT2018cow, a large astronomical explosion also known as "The Cow"
Distillation cow, a piece of glassware that allows fractions to be collected without breaking vacuum
Cell on wheels, a means of providing temporary mobile phone network coverage
Copy-on-write, in computing

Literature
Al-Baqara, the second and longest sura of the Qur'an, usually translated as "The Cow"
Cows, a 1998 novel by Matthew Stokoe
Cow, the English translation of Beat Sterchi's novel Blösch
"Cows!", a children's story from the Railway Series book Edward the Blue Engine by the Reverend Wilbert Awdry
"Cows", a poem from The Wiggles' album Big Red Car

Film and television
The Cow (1969 film), an Iranian film
The Cow (1989 film), a Soviet animated short
Cow (2009 film), a Chinese film
Cow (2021 film), a British documentary film
Cow (public service announcement), an anti-texting-while-driving public service announcement
Cows (TV series), a pilot and cancelled television sitcom produced by Eddie Izzard for Channel 4 in 1997
Cow, a character in the animated series Cow and Chicken
Computer Originated World, referring to the globe ID the BBC1 TV network used from 1985 to 1991

Music
Cows (band), a noise rock band from Minneapolis
Cow (demo), a 1987 EP by Inspiral Carpets
"Cows", a song by Grandaddy from their 1992 album Prepare to Bawl

Other uses
Cerritos On Wheels, a municipal bus service operated by the City of Cerritos, California, United States
College of Wooster, a liberal arts college in Wooster, Ohio, United States
Cow Hell Swamp, Georgia, United States
Crude oil washing
Cows (ice cream), a Canadian ice cream brand
Cowdenbeath railway station, Scotland, National Rail station code
Cow, part of a cow-calf railroad locomotive set
COWS, a mnemonic for Cold Opposite, Warm Same in the caloric reflex test

See also
Vacas (English: Cows), a 1991 Spanish film
Kráva (English: The Cow), a 1994 Czech film by Karel Kachyňa
Sacred cow (disambiguation)
Cow Run (disambiguation)
Cowes
Kow (disambiguation)
5658
https://en.wikipedia.org/wiki/Human%20cannibalism
Human cannibalism
Human cannibalism is the act or practice of humans eating the flesh or internal organs of other human beings. A person who practices cannibalism is called a cannibal. The meaning of "cannibalism" has been extended into zoology to describe an individual of a species consuming all or part of another individual of the same species as food, including sexual cannibalism. Neanderthals are believed to have practised cannibalism, and Neanderthals may have been eaten by anatomically modern humans. Cannibalism was also practised in ancient Egypt, Roman Egypt and during famines in Egypt such as the great famine of 1199–1202. The Island Carib people of the Lesser Antilles, from whom the word "cannibalism" is derived, acquired a long-standing reputation as cannibals after their legends were recorded in the 17th century. Some controversy exists over the accuracy of these legends and the prevalence of actual cannibalism in the culture. Cannibalism has been well documented in much of the world, including Fiji, the Amazon Basin, the Congo, and the Māori people of New Zealand. Cannibalism was also practised in New Guinea and in parts of the Solomon Islands, and human flesh was sold at markets in some parts of Melanesia. Fiji was once known as the "Cannibal Isles". Cannibalism has recently been both practised and fiercely condemned in several wars, especially in Liberia and the Democratic Republic of the Congo. It was still practised in Papua New Guinea as of 2012, for cultural reasons and in ritual as well as in war in various Melanesian tribes. Cannibalism has been said to test the bounds of cultural relativism because it challenges anthropologists "to define what is or is not beyond the pale of acceptable human behavior". A few scholars argue that no firm evidence exists that cannibalism has ever been a socially acceptable practice anywhere in the world, at any time in history, but such views have been largely rejected as irreconcilable with the actual evidence. A form of cannibalism popular in early modern Europe was the consumption of body parts or blood for medical purposes. This practice was at its height during the 17th century, although as late as the second half of the 19th century some peasants attending an execution are recorded to have "rushed forward and scraped the ground with their hands that they might collect some of the bloody earth, which they subsequently crammed in their mouth, in hope that they might thus get rid of their disease." Cannibalism has occasionally been practised as a last resort by people suffering from famine. Famous examples include the ill-fated Donner Party (1846–1847) and, more recently, the crash of Uruguayan Air Force Flight 571 (1972), after which some survivors ate the bodies of the dead. Additionally, there are cases of people engaging in cannibalism for sexual pleasure, such as Jeffrey Dahmer, Armin Meiwes, Issei Sagawa, and Albert Fish. There is resistance to formally labelling cannibalism a mental disorder. Etymology The word "cannibal" is derived from Spanish caníbal or caríbal, originally used as a name for the Caribs, a people from the West Indies said to have eaten human flesh. The older term anthropophagy, meaning "eating humans", is also used for human cannibalism. Reasons and types Cannibalism has been practised under a variety of circumstances and for various motives. To adequately express this diversity, Shirley Lindenbaum suggests that "it might be better to talk about 'cannibalisms'" in the plural. 
Institutionalized, survival, and pathological cannibalism One major distinction is whether cannibal acts are accepted by the culture in which they occur – institutionalized cannibalism – or whether they are merely practised under starvation conditions to ensure one's immediate survival – survival cannibalism – or by isolated individuals considered criminal and often pathological by society at large – cannibalism as psychopathology or "aberrant behavior". Institutionalized cannibalism, sometimes also called "learned cannibalism", is the consumption of human body parts as "an institutionalized practice" generally accepted in the culture where it occurs. By contrast, survival cannibalism means "the consumption of others under conditions of starvation such as shipwreck, military siege, and famine, in which persons normally averse to the idea are driven [to it] by the will to live". Also known as famine cannibalism, such forms of cannibalism resorted to only in situations of extreme necessity have occurred in many cultures where cannibalism is otherwise clearly rejected. The survivors of the shipwrecks of the Essex and Méduse in the 19th century are said to have engaged in cannibalism, as did the members of Franklin's lost expedition and the Donner Party. Such cases often involve only necro-cannibalism (eating the corpse of someone already dead) as opposed to homicidal cannibalism (killing someone for food). In modern English law, the latter is always considered a crime, even in the most trying circumstances. The case of R v Dudley and Stephens, in which two men were found guilty of murder for killing and eating a cabin boy while adrift at sea in a lifeboat, set the precedent that necessity is no defence to a charge of murder. This decision outlawed and effectively ended the practice of shipwrecked sailors drawing lots in order to determine who would be killed and eaten to prevent the others from starving, a time-honoured practice formerly known as a "custom of the sea". In other cases, cannibalism is an expression of a psychopathology or mental disorder, condemned by the society in which it occurs and "considered to be an indicator of [a] severe personality disorder or psychosis". Well-known cases include Albert Fish, Issei Sagawa, and Armin Meiwes. Exo-, endo-, and autocannibalism Within institutionalized cannibalism, exocannibalism is often distinguished from endocannibalism. Endocannibalism refers to the consumption of a person from the same community. Often it is a part of a funerary ceremony, similar to burial or cremation in other cultures. The consumption of the recently deceased in such rites can be considered "an act of affection" and a major part of the grieving process. It has also been explained as a way of guiding the souls of the dead into the bodies of living descendants. In contrast, exocannibalism is the consumption of a person from outside the community. It is frequently "an act of aggression, often in the context of warfare", where the flesh of killed or captured enemies may be eaten to celebrate one's victory over them. Both types of cannibalism can also be fuelled by the belief that eating a person's flesh or internal organs will endow the cannibal with some of the characteristics of the deceased. However, several authors investigating exocannibalism in New Zealand, New Guinea, and the Congo Basin observe that such beliefs were absent in these regions. 
A further type, different from both exo- and endocannibalism, is autocannibalism (also called autophagy or self-cannibalism), "the act of eating parts of oneself". It seems never to have been an institutionalized practice, but occasionally occurs as pathological behaviour, or due to other reasons such as curiosity. Also on record are instances of forced autocannibalism committed as acts of aggression, where individuals are forced to eat parts of their own bodies as a form of torture. Additional motives and explanations Exocannibalism is thus often associated with the consumption of enemies as an act of aggression, a practice also known as war cannibalism. Endocannibalism is often associated with the consumption of deceased relatives in funerary rites driven by affection – a practice known as funerary or mortuary cannibalism. But acts of institutionalized cannibalism can also be driven by various other motives, for which additional names have been coined. Medicinal cannibalism (also called medical cannibalism) means "the ingestion of human tissue ... as a supposed medicine or tonic". In contrast to other forms of cannibalism, which Europeans generally frowned upon, the "medicinal ingestion" of various "human body parts was widely practiced throughout Europe from the sixteenth to the eighteenth centuries", with early records of the practice going back to the first century CE. It was also frequently practised in China. Sacrificial cannibalism refers to the consumption of the flesh of victims of human sacrifice, for example among the Aztecs. Human and animal remains excavated in Knossos, Crete, have been interpreted as evidence of a ritual in which children and sheep were sacrificed and eaten together during the Bronze Age. According to Ancient Roman reports, the Celts in Britain practised sacrificial cannibalism, and archaeological evidence backing these claims has by now been found. Human predation is the hunting of people from unrelated and possibly hostile groups in order to eat them. In parts of the Southern New Guinea lowland rain forests, hunting people "was an opportunistic extension of seasonal foraging or pillaging strategies", with human bodies just as welcome as those of animals as sources of protein, according to the anthropologist Bruce M. Knauft. As populations living near coasts and rivers were usually better nourished and hence often physically larger and stronger than those living inland, they "raided inland 'bush' peoples with impunity and often with little fear of retaliation". Cases of human predation are also on record for the neighbouring Bismarck Archipelago and for Australia. In the Congo Basin, there lived groups such as the Zappo Zaps who hunted humans for food even when game was plentiful. The term gastronomic cannibalism has been suggested for cases where human flesh is eaten to "provide a supplement to the regular diet" – thus essentially for its nutritional value – or, in an alternative definition, for cases where it is "eaten without ceremony (other than culinary), in the same manner as the flesh of any other animal". While the term has been criticized as being too vague to clearly identify a specific type of cannibalism, various records indicate that nutritional or culinary concerns could indeed play a role in such acts even outside of periods of starvation.
Referring to the Congo Basin, where many of the eaten were butchered slaves rather than enemies killed in war, the anthropologist Emil Torday notes that "the most common [reason for cannibalism] was simply gastronomic: the natives loved 'the flesh that speaks' [as human flesh was commonly called] and paid for it". The historian Key Ray Chong observes that, throughout Chinese history, "learned cannibalism was often practiced ... for culinary appreciation". In his popular book Guns, Germs and Steel, Jared Diamond suggests that "protein starvation is probably also the ultimate reason why cannibalism was widespread in traditional New Guinea highland societies", and both in New Zealand and Fiji, cannibals explained their acts as due to a lack of animal meat. In Liberia, a former cannibal argued that it would have been wasteful to let the flesh of killed enemies spoil, and eaters of human flesh in the Bismarck Archipelago expressed the same sentiment. In many cases, human flesh was also described as particularly delicious, especially when it came from women, children, or both. Such statements are on record for various regions and peoples, including the Aztecs, today's Liberia and Nigeria, the Fang people in west-central Africa, the Congo Basin, 12th to 14th-century China, Sumatra, Australia, New Zealand, and Fiji. There is a debate among anthropologists on how important functionalist reasons are for the understanding of institutionalized cannibalism. Diamond is not alone in suggesting "that the consumption of human flesh was of nutritional benefit for some populations in New Guinea" and the same case has been made for other "tropical peoples ... exploiting a diverse range of animal foods", including human flesh. The materialist anthropologist Marvin Harris argued that a "shortage of animal protein" was also the underlying reason for Aztec cannibalism. The cultural anthropologist Marshall Sahlins, on the other hand, rejected such explanations as overly simplistic, stressing that cannibal customs must be regarded as "complex phenomen[a]" with "myriad attributes" which can only be understood if one considers "symbolism, ritual, and cosmology" in addition to their "practical function". While not a motive, the term innocent cannibalism has been suggested for cases of people eating human flesh without knowing what they are eating. It is a subject of myths, such as the myth of Thyestes who unknowingly ate the flesh of his own sons. There are also actual cases on record, for example from the Congo Basin, where cannibalism had been quite widespread and where even in the 1950s travellers were sometimes served a meat dish, learning only afterwards that the meat had been of human origin. In pre-modern medicine, an explanation given by the now-discredited theory of humorism for cannibalism was that it was caused by a black acrimonious humor, which, being lodged in the linings of the ventricles of the heart, produced a voracity for human flesh. On the other hand, the French philosopher Michel de Montaigne understood war cannibalism as a way of expressing vengeance and hatred towards one's enemies and celebrating one's victory over them, thus giving an interpretation that is close to modern explanations. He also pointed out that some acts of Europeans in his own time could be considered as equally barbarous, making his essay "Of Cannibals" () a precursor to later ideas of cultural relativism. 
Medical aspects A well-known case of mortuary cannibalism is that of the Fore tribe in New Guinea, which resulted in the spread of the prion disease kuru. Although the Fore's mortuary cannibalism was well-documented, the practice had ceased before the cause of the disease was recognized. However, some scholars argue that although post-mortem dismemberment was the practice during funeral rites, cannibalism was not. Marvin Harris theorizes that it happened during a famine period coincident with the arrival of Europeans and was rationalized as a religious rite. In 2003, a publication in Science received a large amount of press attention when it suggested that early humans may have practised extensive cannibalism. According to this research, genetic markers commonly found in modern humans worldwide suggest that today many people carry a gene that evolved as protection against the brain diseases that can be spread by consuming human brain tissue. A 2006 reanalysis of the data questioned this hypothesis, arguing that a data collection bias had led to an erroneous conclusion: some of the incidents of cannibalism used in the analysis were attributable not to local cultures but to explorers, stranded seafarers or escaped convicts. The original authors published a subsequent paper in 2008 defending their conclusions. Myths, legends and folklore Cannibalism features in the folklore and legends of many cultures and is most often attributed to evil characters or as extreme retribution for some wrongdoing. Examples include the witch in "Hansel and Gretel", Lamia of Greek mythology and the witch Baba Yaga of Slavic folklore. A number of stories in Greek mythology involve cannibalism, in particular the eating of close family members, e.g., the stories of Thyestes, Tereus and especially Cronus, who became Saturn in the Roman pantheon. The story of Tantalus is another example, though here a family member is prepared for consumption by others. The wendigo is a creature appearing in the legends of the Algonquian people. It is thought of variously as a malevolent cannibalistic spirit that could possess humans or a monster that humans could physically transform into. Those who indulged in cannibalism were at particular risk, and the legend appears to have reinforced the status of this practice as taboo. The Zuni people tell the story of the Átahsaia – a giant who cannibalizes his fellow demons and seeks out human flesh. The wechuge is a demonic cannibalistic creature appearing in the mythology of the Athabaskan people that seeks out human flesh. It is said to be half monster and half human-like; however, it has many shapes and forms. Scepticism William Arens, author of The Man-Eating Myth: Anthropology and Anthropophagy, questions the credibility of reports of cannibalism and argues that the description by one group of people of another people as cannibals is a consistent and demonstrable ideological and rhetorical device to establish perceived cultural superiority. Arens bases his thesis on a detailed analysis of various "classic" cases of cannibalism reported by explorers, missionaries, and anthropologists. He claims that all of them were steeped in racism, unsubstantiated, or based on second-hand or hearsay evidence. Though widely discussed, Arens's book generally failed to convince the academic community. Claude Lévi-Strauss observes that, in spite of his "brilliant but superficial book ... [n]o serious ethnologist disputes the reality of cannibalism".
Shirley Lindenbaum notes that, while after "Arens['s] ... provocative suggestion ... many anthropologists ... reevaluated their data", the outcome was an improved and "more nuanced" understanding of where, why and under which circumstances cannibalism took place rather than a confirmation of his claims: "Anthropologists working in the Americas, Africa, and Melanesia now acknowledge that institutionalized cannibalism occurred in some places at some times. Archaeologists and evolutionary biologists are taking cannibalism seriously." Lindenbaum and others point out that Arens displays a "strong ethnocentrism". His refusal to admit that institutionalized cannibalism ever existed seems to be motivated by the implied idea "that cannibalism is the worst thing of all" – worse than any other behaviour people engaged in, and therefore uniquely suited to vilifying others. Kajsa Ekholm Friedman calls this "a remarkable opinion in a culture [the European/American one] that has been capable of the most extreme cruelty and destructive behavior, both at home and in other parts of the world." She observes that, contrary to European values and expectations, "in many parts of the Congo region there was no negative evaluation of cannibalism. On the contrary, people expressed their strong appreciation of this very special meat and could not understand the hysterical reactions from the white man's side." And why indeed, she goes on to ask, should they have had the same negative reactions to cannibalism as Arens and his contemporaries? Implicitly he assumes that everybody throughout human history must have shared the strong taboo placed by his own culture on cannibalism, but he never attempts to explain why this should be so, and "neither logic nor historical evidence justifies" this viewpoint, as Christian Siefkes commented. Accusations of cannibalism could be used to characterize indigenous peoples as "uncivilized", "primitive", or even "inhuman." While this means that the reliability of reports of cannibal practices must be carefully evaluated especially if their wording suggests such a context, many actual accounts do not fit this pattern. The earliest firsthand account of cannibal customs in the Caribbean comes from Diego Álvarez Chanca, who accompanied Christopher Columbus on his second voyage. His description of the customs of the Caribs of Guadeloupe includes their cannibalism (men killed or captured in war were eaten, while captured boys were "castrated [and used as] servants until they gr[e]w up, when they [were] slaughtered" for consumption), but he nevertheless notes "that these people are more civilized than the other islanders" (who did not practice cannibalism). Nor was he an exception. Among the earliest reports of cannibalism in the Caribbean and the Americas, there are some (like those of Amerigo Vespucci) that seem to mostly consist of hearsay and "gross exaggerations", but others (by Chanca, Columbus himself, and other early travellers) show "genuine interest and respect for the natives" and include "numerous cases of sincere praise". Reports of cannibalism from other continents follow similar patterns. Condescending remarks can be found, but many Europeans who described cannibal customs in Central Africa wrote about those who practised them in quite positive terms, calling them "splendid" and "the finest people" and not rarely, like Chanca, actually considering them as "far in advance of" and "intellectually and morally superior" to the non-cannibals around them. 
Writing from Melanesia, the missionary George Brown explicitly rejects the European prejudice of picturing cannibals as "particularly ferocious and repulsive", noting instead that many cannibals he met were "no more ferocious than" others and "indeed ... very nice people". Reports or assertions of cannibal practices could nevertheless be used to promote the use of military force as a means of "civilizing" and "pacifying" the "savages". During the Spanish conquest of the Aztec Empire and its earlier conquests in the Caribbean there were widespread reports of cannibalism, and cannibals were exempted from Queen Isabella's prohibition on enslaving the indigenous. Another example of the sensationalism of cannibalism and its connection to imperialism occurred during Japan's 1874 expedition to Taiwan. As Robert Eskildsen describes, Japan's popular media "exaggerated the aborigines' violent nature", in some cases by wrongly accusing them of cannibalism. This Horrid Practice: The Myth and Reality of Traditional Maori Cannibalism (2008) by New Zealand historian Paul Moon received a hostile reception from some Māori, who felt the book tarnished their whole people. However, the factual accuracy of the book was not seriously disputed, and even critics such as Margaret Mutu grant that cannibalism was "definitely" practised and that it was "part of our [Māori] culture." History Among modern humans, cannibalism has been practised by various groups. It was practised by humans in Prehistoric Europe, Mesoamerica, South America, among Iroquoian peoples in North America, Maori in New Zealand, the Solomon Islands, parts of West Africa and Central Africa, some of the islands of Polynesia, New Guinea, Sumatra, and Fiji. Evidence of cannibalism has been found in ruins associated with the Ancestral Puebloans of the Southwestern United States as well (at Cowboy Wash in Colorado). Prehistory There is evidence, both archaeological and genetic, that cannibalism has been practised for hundreds of thousands of years by early Homo sapiens and archaic hominins. Human bones that have been "de-fleshed" by other humans go back 600,000 years. The oldest Homo sapiens bones (from Ethiopia) show signs of this as well. Some anthropologists, such as Tim D. White, suggest that cannibalism was common in human societies prior to the beginning of the Upper Paleolithic period. This theory is based on the large amount of "butchered human" bones found in Neanderthal and other Lower/Middle Paleolithic sites. It seems likely that not all instances of prehistoric cannibalism were due to the same reason, just as cannibalistic acts known from the historical record have been motivated by a variety of reasons. One suggested reason for cannibalism in the Lower and Middle Paleolithic has been food shortages. It has also been suggested that removing dead bodies through ritual (funerary) cannibalism was a means of predator control, aiming to eliminate predators' and scavengers' access to hominid (and early human) bodies. Jim Corbett proposed that man-eating leopards become more common after major epidemics, when human corpses are easily accessible to predators; removing dead bodies through ritual cannibalism (before the cultural traditions of burying and burning bodies appeared in human history) might therefore have served hominids and early humans as a practical means of limiting predation. The oldest archaeological evidence of hominid cannibalism comes from the Gran Dolina cave in northern Spain.
The remains of several individuals who died about 800,000 years ago and may have belonged to the Homo antecessor species show unmistakable signs of having been butchered and consumed in the same way as animals whose bones were also found at the site. They belong to at least eleven individuals, all of whom were young (ranging from infancy to the late teens). A study of this case considers it an instance of "nutritional" cannibalism, where individuals belonging to hostile or unrelated groups were hunted, killed, and eaten much like animals. Based on the placement and processing of human and animal remains, the authors conclude that cannibalism was likely a "repetitive behavior over time as part of a culinary tradition", not caused by starvation or other exceptional circumstances. They suggest that young individuals (more than half of whom were children under ten) were targeted because they "posed a lower risk for hunters" and because this was an effective means of limiting the growth of competing groups. Several sites in Croatia, France, and Spain yield evidence that the Neanderthals sometimes practised cannibalism, though the interpretation of some of the finds remains controversial. Neanderthals could also fall victim to cannibalism by anatomically modern humans. Evidence found in southwestern France indicates that the latter butchered and ate a Neanderthal child about 30,000 years ago; it is unknown whether the child was killed by them or died of other causes. The find has been considered as strengthening the conjecture that modern humans might have hunted Neanderthals and in this way contributed to their extinction. In Gough's Cave, England, remains of human bones and skulls, around 14,700 years old, suggest that cannibalism took place amongst the people living in or visiting the cave, and that they may have used human skulls as drinking vessels. The archaeological site of Herxheim in southwestern Germany was a ritual center and a mass grave formed by people of the Linear Pottery culture in Neolithic Europe. It contained the scattered remains of more than 1000 individuals from different, in some cases faraway, regions, who died around 5000 BCE. Whether they were war captives or human sacrifices is unclear, but the evidence indicates that their corpses were spit-roasted whole and then consumed. At Fontbrégoua Cave in southeastern France, the remains of six people who lived about 7,000 years ago were found (two children, one adolescent, and three adults), in addition to animal bones. The patterns of cut marks indicate that both humans and animals were skinned and processed in similar ways. Since the human victims were all processed at the same time, the main excavator, Paola Villa, suspects that they all belonged to the same family or extended family and were killed and butchered together, probably during some kind of violent conflict. Others have argued that the traces were caused by defleshing rituals preceding a secondary burial, but the fact that both humans and wild and domestic animals were processed in the same way makes this unlikely; moreover, Villa argues that the observed traces better fit a typical butchering process than a secondary burial. Researchers have also found physical evidence of cannibalism from more recent times, including from Prehistoric Britain. In 2001, archaeologists at the University of Bristol found evidence of cannibalism practised around 2000 years ago in Gloucestershire, South West England.
This is in agreement with Ancient Roman reports that the Celts in Britain practised human sacrifice, killing and eating captured enemies as well as convicted criminals. Early history Cannibalism is mentioned many times in early history and literature. The oldest written reference may be from the tomb of the ancient Egyptian king Unas (24th century BCE). It contained a hymn in praise of the king portraying him as a cannibal who eats both "men" and "gods", thus indicating an attitude towards cannibalism quite different from the modern one. Herodotus claimed in his Histories (5th century BCE) that after eleven days' voyage up the Borysthenes (Dnieper River) one reached a desolate land that extended for a long way, followed by a country of man-eaters (other than the Scythians), and beyond it by another desolate and uninhabited area. The Stoic philosopher Chrysippus approved of eating one's dead relatives in a funerary ritual, noting that such rituals were common among many peoples. Cassius Dio recorded cannibalism practised by the bucoli, Egyptian tribes led by Isidorus against Rome. They sacrificed and consumed two Roman officers in a ritualistic fashion, swearing an oath over their entrails. According to Appian, during the Roman siege of Numantia in the 2nd century BCE, the population of Numantia (in today's Spain) was reduced to cannibalism and suicide. Cannibalism was also reported by Josephus during the siege of Jerusalem in 70 CE. Jerome, in his letter Against Jovinianus (written 393 CE), discusses how people came to their present condition as a result of their heritage, and lists several examples of peoples and their customs. In the list, he mentions that he has heard that the Attacotti (in Britain) eat human flesh and that the Massagetae and Derbices (two Central Asian peoples) kill and eat old people, considering this a more desirable fate than dying of old age and illness. Middle Ages The Americas There is universal agreement that some Mesoamerican people practised human sacrifice, but there is a lack of scholarly consensus as to whether cannibalism in pre-Columbian America was widespread. At one extreme, the anthropologist Marvin Harris, author of Cannibals and Kings, has suggested that the flesh of the victims was a part of an aristocratic diet as a reward, since the Aztec diet was lacking in proteins. While most historians of the pre-Columbian era accept that there was ritual cannibalism related to human sacrifices, they often reject suggestions that human flesh could have been a significant portion of the Aztec diet. Cannibalism was also associated with acts of warfare, and has been interpreted as an element of blood revenge in war. West Africa When the Moroccan explorer Ibn Battuta visited the Mali Empire in the 1350s, he was surprised to see sultan Sulayman give "a slave girl as part of his reception-gift" to a group of warriors from a cannibal region who had come to visit his court. "They slaughtered her and ate her and smeared their faces and hands with her blood and came in gratitude to the sultan." Ibn Battuta was told that the sultan did so every time he received the cannibal guests. Though the sultan was a Muslim like Ibn Battuta himself, he apparently considered catering to his visitors' preferences more important than whatever reservations he may have had about the practice.
Other Muslim authors writing around that time also report that cannibalism was practised in some West African regions and that slave girls were sometimes slaughtered for food, since "their flesh is the best thing we have to eat." Europe and Europeans Cases of cannibalism were recorded during the First Crusade, as there are various accounts of crusaders consuming the bodies of their dead opponents following the sieges of Antioch and of Ma'arra in 1097–1098. While the Christian sources all explain these acts as due to hunger, Amin Maalouf is sceptical of this justification, arguing that the crusaders' behaviour indicates they might have been driven by "fanaticism" rather than, or in addition to, "necessity". Thomas Asbridge states that, while the "cannibalism at Marrat is among the most infamous of all the atrocities perpetrated by the First Crusaders", it nevertheless had "some positive effects on the crusaders' short-term prospects", since reports of their brutality convinced many Muslim commanders to accept truces rather than trying to fight them. During Europe's Great Famine of 1315–1317, there were various reports of cannibalism among starving people. Western Asia Charges of cannibalism were levied against the Qizilbash of the Safavid Ismail I. China Cannibalism has been repeatedly recorded throughout China's well-documented history. The sinologist Bengt Pettersson found references to more than three hundred different episodes of cannibalism in the Official Dynastic Histories alone. Most episodes occurred in the context of famine or war, or were otherwise motivated by vengeance or medical reasons. More than half of the episodes recorded in the Official Histories describe cases motivated by food scarcity during famines or in times of war. Pettersson observes that the records of such events "neither encouraged nor condemned" the consumption of human flesh under such circumstances, rather accepting it as an unavoidable way of "coping with a life-threatening situation". In other cases, cannibalism was an element of vengeance or punishment – eating the hearts and livers, or sometimes the whole bodies, of killed enemies was a way of further humiliating them and sweetening the revenge. Both private individuals and state officials engaged in such acts, especially from the 4th to the 10th century CE, but in some cases right until the end of Imperial China (in 1912). More than 70 cases are listed in the Official Histories alone. In warfare, human flesh could be eaten out of a lack of other provisions, but also out of hatred against the enemy or to celebrate one's victory. Not just enemy fighters, but also their "servants and concubines were all steamed and eaten", according to one account. At least since the Tang dynasty (618–907), the consumption of human flesh was considered a highly effective medical treatment, recommended by the Bencao Shiyi, an influential medical reference book published in the early 8th century, as well as by similar later manuals. Together with the ethical ideal of filial piety, according to which young people were supposed to do everything in their power to support their parents and parents-in-law, this idea led to a unique form of voluntary cannibalism, in which a young person cut some of the flesh out of their body and gave it to an ill parent or parent-in-law for consumption. The majority of the donors were women, frequently daughters-in-law of the patient.
The Official Histories describe more than 110 cases of such voluntary offerings that took place between the early 7th and the early 20th century. While these acts were (at least nominally) voluntary and the donors usually (though not always) survived them, several sources also tell of children and adolescents who were killed so that their flesh could be eaten for medical purposes. During the Tang dynasty, cannibalism was supposedly resorted to by rebel forces early in the period (who were said to raid neighbouring areas for victims to eat), and (on a large scale) by both soldiers and civilians during the siege of Suiyang, a decisive episode of the An Lushan Rebellion. Eating an enemy's heart and liver was also repeatedly mentioned as a feature of both official punishments and private vengeance. The final decades of the dynasty were marked by large-scale rebellions, during which both rebels and regular soldiers butchered prisoners for food and killed and ate civilians. Sometimes "the rebels captured by government troops were [even] sold as food", according to several of the Official Histories, while warlords likewise relied on the sale of human flesh to finance their rebellions. An Arab traveller visiting China during this time noted with surprise: "cannibalism [is] permissible for them according to their legal code, for they trade in human flesh in their markets." References to cannibalizing the enemy also appear in poetry written in the subsequent Song dynasty (960–1279) – for example, in Man Jiang Hong – although they are perhaps meant symbolically, expressing hatred towards the enemy. The Official Histories covering this period record various cases of rebels and bandits eating the flesh of their victims. The flesh of executed criminals was sometimes cut off and sold for consumption. During the Tang dynasty a law was enacted that forbade this practice, but whether the law was effectively enforced is unclear. The sale of human flesh is also repeatedly mentioned during famines, in accounts ranging from the 6th to the 15th century. Several of these accounts mention that animal flesh was still available, but had become so expensive that few could afford it. Dog meat was five times as expensive as human flesh, according to one such report. Sometimes, poor men sold their own wives or children to butchers who slaughtered them and sold their flesh. Cannibalism in famine situations seems to have been generally tolerated by the authorities, who did not intervene when such acts occurred. A number of accounts suggest that human flesh was occasionally eaten for culinary reasons. An anecdote told about Duke Huan of Qi (7th century BCE) claims that he was curious about the taste of "steamed child", having already eaten everything else. His cook supposedly killed his own son to prepare the dish, and Duke Huan judged it to be "the best food of all". In later times, wealthy men, among them a son of the 4th-century emperor Shi Hu and an "open and high-spirited" man who lived in the 7th century CE, served the flesh of purchased women or children during lavish feasts. The sinologist observes that while such acts were not common, they do not seem to have been rare exceptions, and the hosts apparently did not have to face ostracism or legal prosecution. Key Ray Chong even concludes that "learned cannibalism was often practiced ... for culinary appreciation, and exotic dishes [of human flesh] were prepared for jaded upper-class palates".
The Official Histories mention 10th-century officials who liked to eat the flesh of babies and children, and during the Jin dynasty (1115–1234), human flesh seems to have been readily available at the home of a general, who supposedly served it to one of his guests as a practical joke. Accounts from the 12th to 14th centuries indicate that both soldiers and writers praised this flesh as particularly delicious, considering especially children's flesh as unsurpassable in taste. Pettersson observes that people generally seem to have had fewer reservations about the consumption of human flesh than one might expect today. While survival cannibalism during famines was regarded as a lamentable necessity, accounts explaining the practice as due to other reasons, such as vengeance or filial piety, were generally even positive. Early modern and colonial era The Americas European explorers and colonizers brought home many stories of cannibalism practised by the native peoples they encountered. In Spain's overseas expansion to the New World, the practice of cannibalism was reported by Christopher Columbus in the Caribbean islands, and the Caribs were greatly feared because of their supposed practice of it. Queen Isabel of Castile had forbidden the Spaniards to enslave the indigenous, unless they were "guilty" of cannibalism. The accusation of cannibalism became a pretext for attacks on indigenous groups and justification for the Spanish conquest. In Yucatán, the shipwrecked Spaniard Jerónimo de Aguilar, who later became a translator for Hernán Cortés, reported having witnessed fellow Spaniards sacrificed and eaten; he himself escaped from captivity, where he was being fattened for sacrifice. The Florentine Codex (1576), compiled by the Franciscan Bernardino de Sahagún from information provided by indigenous eyewitnesses, contains questionable evidence of Mexica (Aztec) cannibalism. The Franciscan friar Diego de Landa reported instances in Yucatán. In early Brazil, there are reports of cannibalism among the Tupinamba. It is recorded about the natives of the captaincy of Sergipe in Brazil: "They eat human flesh when they can get it, and if a woman miscarries devour the abortive immediately. If she goes her time out, she herself cuts the navel-string with a shell, which she boils along with the secondine [i.e. placenta], and eats them both." (see human placentophagy). The 1913 Handbook of Indians of Canada (reprinting 1907 material from the Bureau of American Ethnology) claims that North American natives practising cannibalism included "... the Montagnais, and some of the tribes of Maine; the Algonkin, Armouchiquois, Iroquois, and Micmac; farther west the Assiniboine, Cree, Foxes, Chippewa, Miami, Ottawa, Kickapoo, Illinois, Sioux, and Winnebago; in the south the people who built the mounds in Florida, and the Tonkawa, Attacapa, Karankawa, Caddo, and Comanche; in the northwest and west, portions of the continent, the Thlingchadinneh and other Athapascan tribes, the Tlingit, Heiltsuk, Kwakiutl, Tsimshian, Nootka, Siksika, some of the Californian tribes, and the Ute. There is also a tradition of the practice among the Hopi, and mentions of the custom among other tribes of New Mexico and Arizona. The Mohawk, and the Attacapa, Tonkawa, and other Texas tribes were known to their neighbours as 'man-eaters.'" The forms of cannibalism described included both resorting to human flesh during famines and ritual cannibalism, the latter usually consisting of eating a small portion of an enemy warrior.
According to another source, Hans Egede, when the Inuit killed a woman accused of witchcraft, they ate a portion of her heart. As with most lurid tales of native cannibalism, these stories are treated with a great deal of scrutiny, as accusations of cannibalism were often used as justifications for the subjugation or destruction of "savages". The historian Patrick Brantlinger suggests that Indigenous peoples that were colonized were being dehumanized as part of the justification for the atrocities. Among settlers, sailors, and explorers This period was also rife with instances of explorers and seafarers resorting to cannibalism for survival. There is archaeological and written evidence for English settlers' cannibalism in 1609 in the Jamestown Colony under famine conditions, during a period which became known as the Starving Time. Sailors shipwrecked or lost at sea repeatedly resorted to cannibalism to stave off starvation. The survivors of the sinking of the French ship Méduse in 1816 resorted to cannibalism after four days adrift on a raft. Their plight was made famous by Théodore Géricault's painting Raft of the Medusa. After a whale sank the Essex of Nantucket on November 20, 1820, the survivors, in three small boats, resorted, by common consent, to cannibalism in order for some to survive. This event became an important source of inspiration for Herman Melville's Moby-Dick. R v Dudley and Stephens (1884) is an English criminal case which dealt with four crew members of an English yacht, the Mignonette, who were cast away in a storm some distance from the Cape of Good Hope. After several days, one of the crew, a seventeen-year-old cabin boy, fell unconscious due to a combination of starvation and drinking seawater. The others (one possibly objecting) decided to kill him and eat him. They were picked up four days later. Two of the three survivors were found guilty of murder. A significant outcome of this case was that necessity in English criminal law was determined to be no defence against a charge of murder. This was a break with the traditional understanding among sailors, which had been that selecting a victim for killing and consumption was acceptable in a starvation situation as long as lots were drawn so that all faced an equal risk of being killed. On land, travellers through sparsely inhabited regions and explorers of unknown areas sometimes ate human flesh after running out of other provisions. In a famous example from the 1840s, the members of the Donner Party found themselves stranded by snow in the Donner Pass, a high mountain pass in California, without adequate supplies during the Mexican–American War, leading to several instances of cannibalism, including the murder of two young Native American men for food. Sir John Franklin's lost polar expedition, which took place at approximately the same time, is another example of cannibalism out of desperation. In frontier situations where there was no strong authority, some individuals got used to killing and eating others even in situations where other food would have been available. One notorious case was the mountain man Boone Helm, who became known as "The Kentucky Cannibal" for eating several of his fellow travellers, from 1850 until his eventual hanging in 1864. West Africa The Leopard Society was a cannibalistic secret society that existed until the mid-1900s and was active mostly in regions that today belong to Sierra Leone, Liberia and Ivory Coast.
The Leopard men would dress in leopard skins and waylay travellers with sharp claw-like weapons in the form of leopards' claws and teeth. The victims' flesh would be cut from their bodies and distributed to members of the society. Central Africa Cannibalism was practised widely in some parts of the Congo Basin, though it was by no means universal. Some peoples, such as the Bakongo, rejected the practice altogether. In some other regions human flesh was eaten "only occasionally to mark a particularly significant ritual occasion, but in other societies in the Congo, perhaps even a majority by the late nineteenth century, people ate human flesh whenever they could, saying that it was far tastier than other meat", notes the anthropologist Robert B. Edgerton. Many people not only freely admitted eating human flesh, but were surprised when they heard that Europeans did not eat it. Emil Torday observed: "They are not ashamed of cannibalism, and openly admit that they practise it because of their liking for human flesh", with the primary reason for cannibalism being a "gastronomic" preference for such dishes. Torday once received "a portion of a human thigh" sent as a well-intended gift, and other Europeans were offered pieces of human flesh in gestures of hospitality. People expected to be rewarded with fresh human flesh for services well performed and were disappointed when they received something else instead. In addition to enemies killed or captured in war, slaves were frequent victims. Many "healthy children" had to die "to provide a feast for their owners". Young slave children were at particular risk since they were in low demand for other purposes and since their flesh was widely praised as especially delicious, "just as many modern meat eaters prefer lamb over mutton and veal over beef". Such acts were not considered controversial – people did not understand why Europeans objected to the killing of slaves, while themselves killing and eating goats; they argued that both were the "property" of their owners, to be used as it pleased them. A third group of victims were persons from other ethnic groups, who in some areas were "hunt[ed] for food" just like animals. Many of the victims, who were usually killed with poisoned arrows or with clubs, were "women and children ... who had ventured too far from home while gathering firewood or fetching drinking water" and who were targeted "because they were easier to overpower" and also considered tastier than adult men. In some regions there was a regular trade in slaves destined to be eaten, and the flesh of recently butchered slaves was available for purchase as well. Some people fattened slave children to sell them for consumption; if such a child became ill and lost too much weight, their owner drowned them in the nearest river instead of wasting further food on them, as a French missionary once witnessed. Human flesh not sold the same day was smoked, so it could be "sold at leisure" during subsequent weeks. Europeans were often hesitant to buy smoked meat since they knew that the "smoking of human flesh to preserve it was ... widespread", but once meat was smoked, its origin was hard to determine. Instead of being killed quickly, "persons to be eaten often had both of their arms and legs broken and were made to sit up to their necks in a stream for [up to] three days, a practice said to make their flesh more tender, before they were killed and cooked."
Both adults and children, and also animals such as birds and monkeys, were routinely subjected to this treatment prior to being slaughtered. Various reports indicate that living slaves were displayed in marketplaces, so that purchasers could choose which body parts to buy before the victim was butchered and the flesh distributed. This custom, reported around both the central Congo River and the Ubangi in the north, seems to have been motivated by a desire to get fresh rather than smoked flesh, since without refrigeration there was no other way to keep flesh from spoiling quickly. Killed or captured enemies made up another category of victims, even during wars fought by the colonial state. During the 1892–1894 war between the Congo Free State and the Swahili–Arab city-states of Nyangwe and Kasongo in Eastern Congo, there were reports of widespread cannibalization of the bodies of defeated combatants by the Batetela allies of the Belgian commander Francis Dhanis. In April 1892, 10,000 Batetela, under the command of Gongo Lutete, joined forces with Dhanis in a campaign against the Swahili–Arab leaders Sefu and Mohara. After one early skirmish in the campaign, Dhanis's medical officer, Captain Sidney Langford Hinde, "noticed that the bodies of both the killed and wounded had vanished." When fighting broke out again, Hinde saw his Batetela allies drop human arms, legs and heads on the road; now he had to accept that they had really "carried them off for food", which he had initially doubted. According to Hinde, the conquest of Nyangwe was followed by "days of cannibal feasting" during which hundreds were eaten, with only their heads being kept as mementos. During this time, Lutete "hid himself in his quarters, appalled by the sight of thousands of men smoking human hands and human chops on their camp fires, enough to feed his army for many days." Hinde also noted that the Batetela town Ngandu had "at least 2,000 polished human skulls" as a "solid white pavement in front" of its gates, with human skulls crowning every post of the stockade. Soon after, Nyangwe's surviving population rose in a rebellion, during whose brutal suppression a thousand rioters were killed by the new government. One young Belgian officer wrote home: "Happily Gongo's men ... ate them up [in a few hours]. It's horrible but exceedingly useful and hygienic.... I should have been horrified at the idea in Europe! but it seems quite natural to me here. Don't show this letter to anyone indiscreet". Hinde too commented approvingly on the thoroughness with which the cannibals "disposed of all the dead, leaving nothing even for the jackals, and thus sav[ing] us, no doubt, from many an epidemic." Generally the Free State administration seems to have done little to suppress cannibal customs, sometimes even tolerating or facilitating them among its own auxiliary troops and allies. In August 1903, the UK diplomat Roger Casement wrote from Lake Tumba to a consular colleague: "The people round here are all cannibals.... There are also dwarfs (called Batwas) in the forest who are even worse cannibals than the taller human environment. They eat man flesh raw! It's a fact." He added that assailants would "bring down a dwarf on the way home, for the marital cooking pot.... The Dwarfs, as I say, dispense with cooking pots and eat and drink their human prey fresh cut on the battlefield while the blood is still warm and running. These are not fairy tales ..., but actual gruesome reality in the heart of this poor, benighted savage land."
The origins of Congolese cannibalism are lost in time. The oldest known references to it can be found in Filippo Pigafetta's Report of the Kingdom of Congo, published in the late 16th century based on the memories of Duarte Lopez, a Portuguese trader who had lived for several years in the Kingdom of Kongo. Lopez reported that farther up the Congo River, there lived a people who ate both killed enemies and those of their slaves which they could not sell for a "good price". Oral records indicate that, already at a time when slavery was not widespread in the Congo Basin, people assumed that anyone sold as a slave would likely be eaten, "because cannibalism was common, and slaves were purchased especially for such purposes". In the 19th century, warfare and slave raids increased in the Congo Basin as a result of the international demand for slaves, who could no longer be so easily captured nearer to the coasts. As a result, the consumption of slaves increased as well, since most of those sold in the Atlantic slave trade were young and healthy individuals aged from 14 to 30, and similar preferences existed in the Arab–Swahili slave trade. However, many of the captives were younger, older, or otherwise considered less saleable, and such victims were often eaten by the slave raiders or sold to cannibals who purchased them as "meat". Most of the accounts of cannibalism in the Congo are from the late 19th century, when the Atlantic slave trade had come to a halt, but slavery still existed in Africa and the Arab world. Various reports indicate that around the Ubangi River, slaves were frequently exchanged against ivory, which was then exported to Europe or the Americas, while the slaves were eaten. Some European traders seem to have directly and knowingly taken part in these deadly transactions, while others turned a blind eye. The local elephant hunters preferred the flesh especially of young human beings – four to sixteen was the preferred age range, according to one trader – "because it was not only more tender, but also much quicker to cook" than the meat of elephants or other large animals. While sceptics such as William Arens sometimes claim that there are no credible eyewitness accounts of cannibal acts, there are numerous such accounts from the Congo. David Livingstone "saw human parts being cooked with bananas, and many other Europeans" – among them Hinde – "reported seeing cooked human remains lying around abandoned fires." Soldiers of the German explorer Hermann Wissmann saw how people captured and wounded in a slave raid were shot by a Swahili–Arab leader and then handed over "to his auxiliary troops, who ... cut them in pieces and dragged them to the fire to serve as their supper". Visiting a village near the Aruwimi River, the British artist Herbert Ward saw a man "carrying four large lumps of human flesh, with the skin still clinging to it, on a stick", and soon afterwards "a party of men squatting round a fire, before which this ghastly flesh, exposed on spits, was cooking"; he was told that the flesh came from a man who had been killed a few hours before. Another time, when "camping for the night with a party of Arab raiders and their followers", he and his companions felt "compelled to change the position of our tent owing to the offensive smell of human flesh, which was being cooked on all sides of us." 
The Belgian colonial officer Camille Coquilhat saw "the remaining half of [a] steamed man" – a slave who had been purchased for consumption and slaughtered a few hours earlier – "in an enormous pot" and discussed the matter with the slave's owner, who at first thought that Coquilhat was joking when he objected to his cannibalistic customs. Near the Ubangi River, which formed the border between the Belgian and the French colonial enterprises, the French traveller saw local auxiliaries of the French troops killing "some women and some children" after a punitive expedition, then cooking their flesh in pots and "enjoy[ing]" it. Among the Mangbetu people in the north-east, Georg A. Schweinfurth saw a human arm being smoked over a fire. On another occasion, he watched a group of young women using boiling water for "scalding the hair off the lower half of a human body" in preparation for cooking it. A few years later, Gaetano Casati saw how the roasted leg of a slave woman was served at the court of the Mangbetu king. More eyewitness accounts could be added. Europe From the 16th century on, an unusual form of medical cannibalism became widespread in several European countries, for which thousands of Egyptian mummies were ground up and sold as medicine. Powdered human mummy – called mummia – was thought to stop internal bleeding and to have other healing properties. The practice developed into a widespread business that flourished until the early 18th century. The demand was much higher than the supply of ancient mummies, leading to much of the offered "mummia" being counterfeit, made from recent Egyptian or European corpses – often from the gallows – instead. In a few cases, mummia was still offered in medical catalogues in the early 20th century. Australia Hundreds of accounts exist of cannibalism among Aboriginal Australians in all parts of Australia, with the possible exception of Tasmania, dating from the first European settlement to the 1930s and later. While it is generally accepted that some forms of cannibalism were practised in Australia in certain circumstances, the prevalence and meaning of such acts in pre-colonial Aboriginal societies are disputed. Before colonization, Aboriginal Australians were predominantly nomadic hunter-gatherers at times lacking in protein sources. Reported cases of cannibalism include killing and eating small children (infanticide was widely practised as a means of population control and because mothers had trouble carrying two young children not yet able to walk) and enemy warriors slain in battle. In the late 1920s, the anthropologist Géza Róheim heard from Aboriginals that infanticidal cannibalism had been practised especially during droughts. "Years ago it had been custom for every second child to be eaten" – the baby was roasted and consumed not only by the mother, but also by the older siblings, who benefited from this meat during times of food scarcity. One woman told him that her little sister had been roasted, but denied having eaten of her. Another "admitted having killed and eaten her small daughter", and several other people he talked to remembered having "eaten one of their brothers". The consumption of infants took two different forms, depending on where it was practised: usually only babies who had not yet received a name (which happened around the first birthday) were consumed, but in times of severe hunger, older children (up to four years or so) could be killed and eaten too, though people tended to have bad feelings about this.
Babies were killed by their mothers, while a bigger child "would be killed by the father by being beaten on the head". But cases of women killing older children are on record too. In 1904 a parish priest in Broome, Western Australia, stated that infanticide was very common, including one case where a four-year-old was "killed and eaten by its mother", who later became a Christian. The journalist and anthropologist Daisy Bates, who spent a long time among Aboriginals and was well acquainted with their customs, knew an Aboriginal woman who one day left her village to give birth a mile away, taking only her daughter with her. She then "killed and ate the baby, sharing the food with the little daughter." After her return, Bates found the place and saw "the ashes of a fire" with the baby's "broken skull, and one or two charred bones" in them. She states that "baby cannibalism was rife among these central-western peoples, as it is west of the border in Central Australia." The Norwegian ethnographer Carl Sofus Lumholtz confirms that infants were commonly killed and eaten especially in times of food scarcity. He notes that people spoke of such acts "as an everyday occurrence, and not at all as anything remarkable." Some have interpreted the consumption of infants as a religious practice: "In parts of New South Wales ..., it was customary long ago for the first-born of every lubra [Aboriginal woman] to be eaten by the tribe, as part of a religious ceremony." However, there seems to be no direct evidence that such acts actually had a religious meaning, and the Australian anthropologist Alfred William Howitt rejects the idea that the eaten were human sacrifices as "absolutely without foundation", arguing that religious sacrifices of any kind were unknown in Australia. Another frequently reported practice was funerary endocannibalism, the cooking and consumption of the deceased as a funerary rite. According to Bates, exocannibalism was also practised in many regions. Foreigners and members of different ethnic groups were hunted and eaten much like animals. She met "fine sturdy fellows" who "frankly admitted the hunting and sharing of kangaroo and human meat as frequently as that of kangaroo and emu." The bodies of the killed were roasted whole in "a deep hole in the sand". There were also "killing vendettas", in which a hostile settlement was attacked and as many persons as possible killed, whose flesh was then shared according to well-defined rules: "The older men ate the soft and virile parts, and the brain; swift runners were given the thighs; hands, arms or shoulders went to the best spear-throwers, and so on." Referring to the coast of the Great Australian Bight, Bates writes: "Cannibalism had been rife for centuries in these regions and for a thousand miles north and east of them." Human flesh was eaten not for spiritual reasons, nor only due to hunger; rather, it was considered a "favourite food". Lumholtz similarly notes that "the greatest delicacy known to the Australian native is human flesh", even adding that the "appetite for human flesh" was the primary motive for killing. Unrelated individuals and isolated families were attacked just to be eaten, and any stranger was at risk of being "pursued like a wild beast and slain and eaten". Acquiring human flesh in this manner was something to be proud of, not a reason for shame. He stresses that such flesh was nevertheless by no means a "daily food", since opportunities to capture victims were relatively rare. 
One specific instance of kidnapping for cannibal purposes was recorded in the 1840s by the English immigrant George French Angas, who stated that several children were kidnapped, butchered, and eaten near Lake Alexandrina in South Australia shortly before he arrived there. Polynesia and Melanesia The first encounter between Europeans and Māori may have involved cannibalism of a Dutch sailor. In June 1772, the French explorer Marion du Fresne and 26 members of his crew were killed and eaten in the Bay of Islands. In an 1809 incident known as the Boyd massacre, about 66 passengers and crew of the Boyd were killed and eaten by Māori on the Whangaroa peninsula, Northland. Cannibalism was already a regular practice in Māori wars. In another instance, on July 11, 1821, warriors from the Ngapuhi tribe killed 2,000 enemies and remained on the battlefield "eating the vanquished until they were driven off by the smell of decaying bodies". Māori warriors fighting the New Zealand government in Titokowaru's War in New Zealand's North Island in 1868–69 revived ancient rites of cannibalism as part of the radical Hauhau movement of the Pai Marire religion. In parts of Melanesia, cannibalism was still practised in the early 20th century, for a variety of reasons – including retaliation, to insult an enemy people, or to absorb the dead person's qualities. One tribal chief, Ratu Udre Udre in Rakiraki, Fiji, is said to have consumed 872 people and to have made a pile of stones to record his achievement. Fiji was nicknamed the "Cannibal Isles" by European sailors, who avoided disembarking there. The dense population of the Marquesas Islands, in what is now French Polynesia, was concentrated in narrow valleys, and consisted of warring tribes, who sometimes practised cannibalism on their enemies. Human flesh was called "long pig". W. D. Rubinstein wrote: Early 20th century to present After World War I, cannibalism continued to occur as a ritual practice and in times of drought or famine. Occasional cannibal acts committed by individual criminals are documented as well throughout the 20th and 21st centuries. World War II Many instances of cannibalism by necessity were recorded during World War II. For example, during the 872-day siege of Leningrad, reports of cannibalism began to appear in the winter of 1941–1942, after all birds, rats, and pets were eaten by survivors. Leningrad police even formed a special division to combat cannibalism. Some 2.8 million Soviet POWs died in Nazi custody in less than eight months during 1941–42. According to the USHMM, by the winter of 1941, "starvation and disease resulted in mass death of unimaginable proportions". This deliberate starvation led to many incidents of cannibalism. Following the Soviet victory at Stalingrad it was found that some German soldiers in the besieged city, cut off from supplies, resorted to cannibalism. Later, following the German surrender in January 1943, roughly 100,000 German soldiers were taken prisoner of war (POW). Almost all of them were sent to POW camps in Siberia or Central Asia where, due to being chronically underfed by their Soviet captors, many resorted to cannibalism. Fewer than 5,000 of the prisoners taken at Stalingrad survived captivity. Cannibalism took place in the concentration and death camps in the Independent State of Croatia (NDH), a Nazi German puppet state which was governed by the fascist Ustasha organization, who committed the Genocide of Serbs and the Holocaust in NDH. 
Some survivors testified that some of the Ustashas drank the blood from the slashed throats of the victims. The Australian War Crimes Section of the Tokyo tribunal, led by prosecutor William Webb (the future Judge-in-Chief), collected numerous written reports and testimonies that documented Japanese soldiers' acts of cannibalism among their own troops, on enemy dead, as well as on Allied prisoners of war in many parts of the Greater East Asia Co-Prosperity Sphere. In September 1942, Japanese daily rations on New Guinea consisted of 800 grams of rice and tinned meat. However, by December, this had fallen to 50 grams. According to historian Yuki Tanaka, "cannibalism was often a systematic activity conducted by whole squads and under the command of officers". In some cases, flesh was cut from living people. A prisoner of war from the British Indian Army, Lance Naik Hatam Ali, testified that in New Guinea: "the Japanese started selecting prisoners and every day one prisoner was taken out and killed and eaten by the soldiers. I personally saw this happen and about 100 prisoners were eaten at this place by the Japanese. The remainder of us were taken to another spot away where 10 prisoners died of sickness. At this place, the Japanese again started selecting prisoners to eat. Those selected were taken to a hut where their flesh was cut from their bodies while they were alive and they were thrown into a ditch where they later died." Another well-documented case occurred in Chichi-jima in February 1945, when Japanese soldiers killed and consumed five American airmen. This case was investigated in 1947 in a war crimes trial, and of 30 Japanese soldiers prosecuted, five (Maj. Matoba, Gen. Tachibana, Adm. Mori, Capt. Yoshii, and Dr. Teraki) were found guilty and hanged. In his book Flyboys: A True Story of Courage, James Bradley details several instances of cannibalism of World War II Allied prisoners by their Japanese captors. The author claims that this included not only ritual cannibalization of the livers of freshly killed prisoners, but also the cannibalization-for-sustenance of living prisoners over the course of several days, amputating limbs only as needed to keep the meat fresh. There are more than 100 documented cases in Australia's government archives of Japanese soldiers practising cannibalism on enemy soldiers and civilians in New Guinea during the war. For instance, from an archived case, an Australian lieutenant describes how he discovered a scene with cannibalized bodies, including one "consisting only of a head which had been scalped and a spinal column" and that "in all cases, the condition of the remains were such that there can be no doubt that the bodies had been dismembered and portions of the flesh cooked". In another archived case, a Pakistani corporal (who was captured in Singapore and transported to New Guinea by the Japanese) testified that Japanese soldiers cannibalized a prisoner (some were still alive) per day for about 100 days. There was also an archived memo, in which a Japanese general stated that eating anyone except enemy soldiers was punishable by death. Toshiyuki Tanaka, a Japanese scholar in Australia, mentions that it was done "to consolidate the group feeling of the troops" rather than due to food shortage in many of the cases. Tanaka also states that the Japanese committed the cannibalism under supervision of their senior officers and to serve as a power projection tool. 
Jemadar Abdul Latif (VCO of the 4/9 Jat Regiment of the British Indian Army and POW rescued by the Australians at Sepik Bay in 1945) stated that the Japanese soldiers ate both Indian POWs and local New Guinean people. At the camp for Indian POWs in Wewak, where many died and 19 POWs were eaten, the Japanese doctor and lieutenant Tumisa would send an Indian out of the camp after which a Japanese party would kill and eat flesh from the body as well as cut off and cook certain body parts (liver, buttock muscles, thighs, legs, and arms), according to Captain R. U. Pirzai in a report in The Courier-Mail of August 25, 1945. South America When Uruguayan Air Force Flight 571 crashed on a glacier in the Andes on October 13, 1972, many survivors resorted to eating the deceased during their 72 days in the mountains. The experiences and memories of the survivors became the source of several books and films. In an account of the accident and aftermath, survivor Roberto Canessa described the decision to eat the pilots and their dead friends and family members: North America In 1991, Jeffrey Dahmer of Milwaukee, Wisconsin, was arrested after one of his intended victims managed to escape. Found in Dahmer's apartment were two human hearts, an entire torso, a bag full of human organs from his victims, and a portion of arm muscle. He stated that he planned to consume all of the body parts over the next few weeks. West Africa In the 1980s, Médecins Sans Frontières, the international medical charity, supplied photographic and other documentary evidence of ritualized cannibal feasts among the participants in Liberia's internecine strife preceding the First Liberian Civil War to representatives of Amnesty International. Amnesty International declined to publicize this material; the Secretary-General of the organization, Pierre Sane, said at the time in an internal communication that "what they do with the bodies after human rights violations are committed is not part of our mandate or concern". The existence of cannibalism on a wide scale in Liberia was subsequently verified. A few years later, reports of cannibal acts committed during the Second Liberian Civil War and the Sierra Leone Civil War emerged. Central Africa Reports from the Belgian Congo indicate that cannibalism was still widely practised in some regions in the 1920s. Hermann Norden, an American who visited the Kasai region in 1923, found that "cannibalism was commonplace". People were afraid of walking outside of populated places because there was a risk of being attacked, killed, and eaten. Norden talked with a Belgian who "admitted that it was quite likely he had occasionally been served human flesh without knowing what he was eating" – it was simply a dish that appeared on the tables from time to time. Other travellers heard persistent rumours that there was still a certain underground trade in slaves, some of whom (adults and children alike) were regularly killed and then "cut up and cooked as ordinary meat", around both the Kasai and the Ubangi River. The colonial state seems to have done little to discourage or punish such acts. There are also reports that human flesh was sometimes sold at markets in both Kinshasa and Brazzaville, "right in the middle of European life." Norden observed that cannibalism was so common that people talked about it quite "casual[ly]": "No stress was put upon it, nor horror shown. This person had died of fever; that one had been eaten. It was all a matter of the way one's luck held." 
The culinary use of human flesh continued in some cases even after World War II. In 1950, a Belgian administrator ate a "remarkably delicious" dish, learning after he had finished "that the meat came from a young girl." A few years later, a Danish traveller was served a piece of the "soft and tender" flesh of a butchered woman. During the Congo Crisis, which followed the country's independence in 1960, body parts of killed enemies were eaten and the flesh of war victims was sometimes sold for consumption. In Luluabourg (today Kananga), an American journalist saw a truck smeared with blood. A police commissioner investigating the scene told her that "sixteen women and children" had been lured into the truck in a nearby village, kidnapped, and "butchered ... for meat." She also talked with a Presbyterian missionary, who excused this act as due to "protein need.... The bodies of their enemies are the only source of protein available." In conflict situations, cannibalism persisted into the 21st century. During the first decade of the new century, cannibal acts were reported from the Second Congo War and the Ituri conflict in the northeast of the Democratic Republic of the Congo. According to UN investigators, fighters belonging to several factions "grilled" human bodies "on a barbecue"; young girls were boiled "alive in ... big pots filled with boiling water and oil" or "cut into small pieces ... and then eaten." A UN human rights expert reported in July 2007 that sexual atrocities committed by rebel groups as well as by armed forces and national police against Congolese women go "far beyond rape" and include sexual slavery, forced incest, and cannibalism. In the Ituri region, much of the violence, which included "widespread cannibalism", was consciously directed against pygmies, who were believed to be relatively helpless and even considered subhuman by some other Congolese. UN investigators also collected eyewitness accounts of cannibalism during a violent conflict that shook the Kasai region in 2016/2017. Various parts of killed enemies and beheaded captives were cooked and eaten, including their heads, thighs, and penises. Cannibalism has also been reported from the Central African Republic, north of the Congo Basin. Jean-Bédel Bokassa ruled the country from 1966 to 1979 as dictator and finally as self-declared emperor. Tenacious rumours that he liked to dine on the flesh of opponents and political prisoners were substantiated by several testimonies during his eventual trial in 1986/1987. Bokassa's successor David Dacko stated that he had seen photographs of butchered bodies hanging in the cold-storage rooms of Bokassa's palace immediately after taking power in 1979. These or similar photos, said to show a walk-in freezer containing the bodies of schoolchildren arrested in April 1979 during protests and beaten to death in the 1979 Ngaragba Prison massacre, were also published in Paris Match magazine. During the trial, Bokassa's former chef testified that he had repeatedly cooked human flesh from the palace's freezers for his boss's table. While Bokassa was found guilty of murder in at least twenty cases, the charge of cannibalism was nevertheless not taken into account for the final verdict, since the consumption of human remains is considered a misdemeanor under CAR law and all previously committed misdemeanors had been forgiven by a general amnesty declared in 1981. 
Further acts of cannibalism were reported to have targeted the Muslim minority during the Central African Republic Civil War which started in 2012. East Africa In the 1970s the Ugandan dictator Idi Amin was reputed to practice cannibalism. More recently, the Lord's Resistance Army has been accused of routinely engaging in ritual or magical cannibalism. It is also reported by some that witch doctors in the country sometimes use the body parts of children in their medicine. During the South Sudanese Civil War, cannibalism and forced cannibalism have been reported from South Sudan. Central and Western Europe Before 1931, The New York Times reporter William Seabrook, apparently disappointed that he had been unable to taste human flesh in West Africa, obtained from a hospital intern at the Sorbonne a chunk of this meat from the body of a healthy man killed in an accident, then cooked and ate it. He reported, Karl Denke, possible Carl Großmann and Fritz Haarmann, as well as Joachim Kroll were German murderers and cannibals active between the early 20th century and the 1970s. Armin Meiwes is a former computer repair technician who achieved international notoriety for killing and eating a voluntary victim in 2001, whom he had found via the Internet. After Meiwes and the victim jointly attempted to eat the victim's severed penis, Meiwes killed his victim and proceeded to eat a large amount of his flesh. He was arrested in December 2002. In January 2004, Meiwes was convicted of manslaughter and sentenced to eight years and six months in prison. Despite the victim's undisputed consent, the prosecutors successfully appealed this decision, and in a retrial that ended in May 2006, Meiwes was convicted of murder and sentenced to life imprisonment. On July 23, 1988, Rick Gibson ate the flesh of another person in public. Because England does not have a specific law against cannibalism, he legally ate a canapé of donated human tonsils in Walthamstow High Street, London. A year later, on April 15, 1989, he publicly ate a slice of human testicle. When he tried to eat another slice of human testicle as "hors d'oeuvre" at the Pitt International Galleries in Vancouver on July 14, 1989, the police confiscated the testicle. However, the charge of publicly exhibiting a disgusting object was dropped, and two months later he finally ate the piece of human testicle on the steps of the Vancouver court house. In 2008, a British model called Anthony Morley was imprisoned for the killing, dismemberment and partial cannibalisation of his lover, magazine executive Damian Oldfield. Eastern Europe and the Soviet Union In his book, The Gulag Archipelago, Soviet writer Aleksandr Solzhenitsyn described cases of cannibalism in 20th-century Soviet Union. Of the famine in Povolzhie (1921–1922) he wrote: "That horrible famine was up to cannibalism, up to consuming children by their own parents – the famine, which Russia had never known even in the Time of Troubles [in 1601–1603]". The historian Orlando Figes observes that "thousands of cases" of cannibalism were reported, while the number of cases that were never reported was doubtless even higher. In Pugachyov, "it was dangerous for children to go out after dark since there were known to be bands of cannibals and traders who killed them to eat or sell their tender flesh." An inhabitant of a nearby village stated: "There are several cafeterias in the village – and all of them serve up young children." 
This was no exception – Figes estimates "that a considerable proportion of the meat in Soviet factories in the Volga area ... was human flesh." Various gangs specialized in "capturing children, murdering them and selling the human flesh as horse meat or beef", with the buyers happy to have found a source of meat in a situation of extreme shortage and often willing not to "ask too many questions". Cannibalism was also widespread during the Holodomor, a man-made famine in Soviet Ukraine between 1932 and 1933. Survival was a moral as well as a physical struggle. A woman doctor wrote to a friend in June 1933 that she had not yet become a cannibal, but was "not sure that I shall not be one by the time my letter reaches you". The good people died first. Those who refused to steal or to prostitute themselves died. Those who gave food to others died. Those who refused to eat corpses died. Those who refused to kill their fellow man died. ... At least 2,505 people were sentenced for cannibalism in the years 1932 and 1933 in Ukraine, though the actual number of cases was certainly much higher. Most cases of cannibalism were "necrophagy, the consumption of corpses of people who had died of starvation". But the murder of children for food was common as well. Many survivors told of neighbours who had killed and eaten their own children. One woman, asked why she had done this, "answered that her children would not survive anyway, but this way she would". She was arrested by the police. The police also documented cases of children being kidnapped, killed, and eaten, and "stories of children being hunted down as food" circulated in many areas. A man who lived through the famine in his youth later remembered that "the availability of human flesh at market[s] was an open and acknowledged secret. People were glad" if they could buy it since "there was no other means to survive." In March 1933 the secret police in Kiev Oblast collected "ten or more reports of cannibalism every day" but concluded that "in reality there are many more such incidents", most of which went unreported. Those found guilty of cannibalism were often "imprisoned, executed, or lynched". But while the authorities were well informed about the extent of cannibalism, they also tried to suppress this information from becoming widely known, the chief of the secret police warning "that written notes on the subject do not circulate among the officials where they might cause rumours". The Holodomor was part of the Soviet famine of 1930–1933, which devastated also other parts of the Soviet Union in the early 1930s. Multiple cases of cannibalism were also reported from Kazakhstan. A few years later, starving people again resorted to cannibalism during the siege of Leningrad (1941–1944). About this time, Solzhenitsyn writes: "Those who consumed human flesh, or dealt with the human liver trading from dissecting rooms ... were accounted as the political criminals". Of the building of Northern Railway Labor Camp ("Sevzheldorlag") Solzhenitsyn reports, "An ordinary hard working political prisoner almost could not survive at that penal camp. In the camp Sevzheldorlag (chief: colonel Klyuchkin) in 1946–47 there were many cases of cannibalism: they cut human bodies, cooked and ate." The Soviet journalist Yevgenia Ginzburg was a long-term political prisoner who spent time in the Soviet prisons, Gulag camps and settlements from 1938 to 1955. 
In her memoir, Harsh Route (or Steep Route), she described a case in which she was directly involved during the late 1940s, after she had been moved to the prisoners' hospital. The chief warder shows me the black smoked pot, filled with some food: "I need your medical expertise regarding this meat." I look into the pot, and can hardly keep from vomiting. The fibres of that meat are very small, and don't remind me of anything I have seen before. The skin on some pieces bristles with black hair ... Kulesh, a former smith from Poltava, worked together with Centurashvili. At this time, Centurashvili was only one month away from being discharged from the camp ... And suddenly he surprisingly disappeared ... The wardens searched for two more days, and then assumed that it was an escape case, though they wondered why, since his imprisonment period was almost over ... But a crime had indeed taken place. Approaching the fireplace, Kulesh killed Centurashvili with an axe, burned his clothes, then dismembered him and hid the pieces in snow, in different places, putting specific marks on each burial place. ... Just yesterday, one body part was found under two crossed logs. India The Aghori are Indian ascetics who believe that eating human flesh confers spiritual and physical benefits, such as prevention of ageing. They claim to only eat those who have voluntarily granted their body to the sect upon their death, but an Indian TV crew witnessed one Aghori feasting on a corpse discovered floating in the Ganges, and a member of the Dom caste reports that Aghori often take bodies from cremation ghats (or funeral pyres). China Cannibalism is documented to have occurred in rural China during the severe famine that resulted from the Great Leap Forward (1958–1962). During Mao Zedong's Cultural Revolution (1966–1976), local governments' documents revealed hundreds of incidents of cannibalism for ideological reasons, including large-scale cannibalism during the Guangxi Massacre. Cannibal acts occurred at public events organized by local Communist Party officials, with people taking part in them in order to prove their revolutionary passion. The writer Zheng Yi documented many of these incidents, especially those in Guangxi, in his 1993 book, Scarlet Memorial. Pills made of human flesh were said to be used by some Tibetan Buddhists, motivated by a belief that mystical powers were bestowed upon those who consumed Brahmin flesh. Southeast Asia In Joshua Oppenheimer's film The Look of Silence, several of the anti-Communist militias active in the Indonesian mass killings of 1965–66 claim that drinking blood from their victims prevented them from going mad. East Asia Reports of widespread cannibalism began to emerge from North Korea during the famine of the 1990s and subsequent ongoing starvation. Kim Jong-il was reported to have ordered a crackdown on cannibalism in 1996, but Chinese travellers reported in 1998 that cannibalism had occurred. Three people in North Korea were reported to have been executed for selling or eating human flesh in 2006. Further reports of cannibalism emerged in early 2013, including reports of a man executed for killing his two children for food. There are conflicting claims about how widespread cannibalism was in North Korea. While refugees reported that it was widespread, Barbara Demick wrote in her book, Nothing to Envy: Ordinary Lives in North Korea (2010), that it did not seem to be. 
Melanesia The Korowai tribe of south-eastern Papua could be one of the last surviving tribes in the world engaging in cannibalism. A local cannibal cult killed and ate victims as late as 2012. As in some other Papuan societies, the Urapmin people engaged in cannibalism in war. Notably, the Urapmin also had a system of food taboos wherein dogs could not be eaten and they had to be kept from breathing on food, unlike humans who could be eaten and with whom food could be shared. See also Alexander Pearce, alleged Irish cannibal Alferd Packer, an American prospector, accused but not convicted of cannibalism Androphagi, an ancient nation of cannibals Asmat people, a Papua group with a reputation of cannibalism Cannibal film Cannibalism in literature Cannibalism in popular culture Cannibalism in poultry Chijon family, a Korean gang that killed and ate rich people Child cannibalism for children as victims of cannibalism (in myth and reality) Custom of the sea, the practice of shipwrecked survivors drawing lots to see who would be killed and eaten so that the others might survive Homo antecessor, an extinct human species providing some of the earliest known evidence for human cannibalism Human fat has been applied in European pharmacopeia between the 16th and the 19th centuries. Human placentophagy, the consumption of the placenta (afterbirth) Idi Amin, Ugandan dictator who is alleged to have consumed humans Issei Sagawa, a Japanese man who became a minor celebrity after killing and eating another student List of incidents of cannibalism Manifesto Antropófago (Cannibal Manifesto in English), a Brazilian poem Medical cannibalism, the consumption of human body parts to treat or prevent diseases Mummia, medicine made from human mummies Noida serial murders, a widely publicized instance of alleged cannibalism in India Placentophagy, the act of mammals eating the placenta of their young after childbirth Pleistocene human diet, the eating habits of human ancestors in the Pleistocene R v Dudley and Stephens, an important trial of two men accused of shipwreck cannibalism Self-cannibalism, the practice of eating oneself (also called autocannibalism) Traditional Chinese medicines derived from the human body Transmissible spongiform encephalopathy, a progressive condition that affect the brain and nervous system of many animals, including humans Vorarephilia, a sexual fetish and paraphilia where arousal results from the idea of devouring others or being devoured Wari’ people, an Amerindian tribe that practised cannibalism References Further reading Berdan, Frances F. The Aztecs of Central Mexico: An Imperial Society. New York 1982. Earle, Rebecca. The Body of the Conquistador: Food, Race, and the Colonial Experience in Spanish America, 1492–1700. New York: Cambridge University Press 2012. Jáuregui, Carlos. Canibalia: Canibalismo, calibanismo, antropofagía cultural y consumo en América Latina. Madrid: Vervuert 2008. Lestringant, Frank. Cannibals: The Discovery and Representation of the Cannibal from Columbus to Jules Verne. Berkeley and Los Angeles: University of California Press 1997. Ortiz de Montellano, Bernard R. Aztec Medicine, Health, and Nutrition. New Brunswick 1990. Read, Kay A. Time and Sacrifice in the Aztec Cosmos. Bloomington 1998. Sahlins, Marshall. "Cannibalism: An Exchange." New York Review of Books 26, no. 4 (March 22, 1979). Schutt, Bill. Cannibalism: A Perfectly Natural History. Chapel Hill: Algonquin Books 2017. External links Is there a relation between cannibalism and amyloidosis? 
All about Cannibalism: The Ancient Taboo in Modern Times (Cannibalism Psychology) at CrimeLibrary.com Cannibalism, Víctor Montoya The Straight Dope Notes arguing that routine cannibalism is myth Did a mob of angry Dutch kill and eat their prime minister? (from The Straight Dope) Harry J. Brown, 'Hans Staden among the Tupinambas.'
5659
https://en.wikipedia.org/wiki/Chemical%20element
Chemical element
A chemical element is a chemical substance that cannot be broken down into other substances. The basic particle that constitutes a chemical element is the atom, and each chemical element is distinguished by the number of protons in the nuclei of its atoms, known as its atomic number. For example, oxygen has an atomic number of 8, meaning that each oxygen atom has 8 protons in its nucleus. This is in contrast to chemical compounds and mixtures, which contain atoms with more than one atomic number. Almost all of the baryonic matter of the universe is composed of chemical elements (among rare exceptions are neutron stars). When different elements undergo chemical reactions, atoms are rearranged into new compounds held together by chemical bonds. Only a minority of elements, such as silver and gold, are found uncombined as relatively pure native element minerals. Nearly all other naturally occurring elements occur in the Earth as compounds or mixtures. Air is primarily a mixture of the elements nitrogen, oxygen, and argon, though it does contain compounds including carbon dioxide and water. The history of the discovery and use of the elements began with primitive human societies that discovered native minerals like carbon, sulfur, copper and gold (though the concept of a chemical element was not yet understood). Attempts to classify materials such as these resulted in the concepts of classical elements, alchemy, and various similar theories throughout human history. Much of the modern understanding of elements developed from the work of Dmitri Mendeleev, a Russian chemist who published the first recognizable periodic table in 1869. This table organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The periodic table summarizes various properties of the elements, allowing chemists to derive relationships between them and to make predictions about compounds and potential new ones. By November 2016, the International Union of Pure and Applied Chemistry had recognized a total of 118 elements. The first 94 occur naturally on Earth, and the remaining 24 are synthetic elements produced in nuclear reactions. Save for unstable radioactive elements (radionuclides) which decay quickly, nearly all of the elements are available industrially in varying amounts. The discovery and synthesis of further new elements is an ongoing area of scientific study. Description The lightest chemical elements are hydrogen and helium, both created by Big Bang nucleosynthesis during the first 20 minutes of the universe in a ratio of around 3:1 by mass (or 12:1 by number of atoms), along with tiny traces of the next two elements, lithium and beryllium. Almost all other elements found in nature were made by various natural methods of nucleosynthesis. On Earth, small amounts of new atoms are naturally produced in nucleogenic reactions, or in cosmogenic processes, such as cosmic ray spallation. New atoms are also naturally produced on Earth as radiogenic daughter isotopes of ongoing radioactive decay processes such as alpha decay, beta decay, spontaneous fission, cluster decay, and other rarer modes of decay. Of the 94 naturally occurring elements, those with atomic numbers 1 through 82 each have at least one stable isotope (except for technetium, element 43 and promethium, element 61, which have no stable isotopes). Isotopes considered stable are those for which no radioactive decay has yet been observed. 
Elements with atomic numbers 83 through 94 are unstable to the point that radioactive decay of all isotopes can be detected. Some of these elements, notably bismuth (atomic number 83), thorium (atomic number 90), and uranium (atomic number 92), have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy metals before the formation of our Solar System. At over 1.9×10^19 years, over a billion times longer than the current estimated age of the universe, bismuth-209 (atomic number 83) has the longest known alpha decay half-life of any naturally occurring element, and is almost always considered on par with the 80 stable elements. The very heaviest elements (those beyond plutonium, element 94) undergo radioactive decay with half-lives so short that they are not found in nature and must be synthesized. There are now 118 known elements. In this context, "known" means observed well enough, even from just a few decay products, to have been differentiated from other elements. Most recently, the synthesis of element 118 (since named oganesson) was reported in October 2006, and the synthesis of element 117 (tennessine) was reported in April 2010. Of these 118 elements, 94 occur naturally on Earth. Six of these occur in extreme trace quantities: technetium, atomic number 43; promethium, number 61; astatine, number 85; francium, number 87; neptunium, number 93; and plutonium, number 94. These 94 elements have been detected in the universe at large, in the spectra of stars and also supernovae, where short-lived radioactive elements are newly being made. The first 94 elements have been detected directly on Earth as primordial nuclides present from the formation of the Solar System, or as naturally occurring fission or transmutation products of uranium and thorium. The remaining 24 heavier elements, not found today either on Earth or in astronomical spectra, have been produced artificially: these are all radioactive, with very short half-lives; if any atoms of these elements were present at the formation of Earth, they are extremely likely, to the point of certainty, to have already decayed, and if present in novae have been in quantities too small to have been noted. Technetium was the first purportedly non-naturally occurring element synthesized, in 1937, although trace amounts of technetium have since been found in nature (and also the element may have been discovered naturally in 1925). This pattern of artificial production and later natural discovery has been repeated with several other radioactive naturally occurring rare elements. Lists of the elements are available by name, atomic number, density, melting point, boiling point and by symbol, as well as ionization energies of the elements. The nuclides of stable and radioactive elements are also available as a list of nuclides, sorted by length of half-life for those that are unstable. One of the most convenient, and certainly the most traditional, presentations of the elements is in the form of the periodic table, which groups together elements with similar chemical properties (and usually also similar electronic structures). Atomic number The atomic number of an element is equal to the number of protons in each atom, and defines the element. For example, all carbon atoms contain 6 protons in their atomic nucleus; so the atomic number of carbon is 6. 
Carbon atoms may have different numbers of neutrons; atoms of the same element having different numbers of neutrons are known as isotopes of the element. The number of protons in the atomic nucleus also determines its electric charge, which in turn determines the number of electrons of the atom in its non-ionized state. The electrons are placed into atomic orbitals that determine the atom's various chemical properties. The number of neutrons in a nucleus usually has very little effect on an element's chemical properties (except in the case of hydrogen and deuterium). Thus, all carbon isotopes have nearly identical chemical properties because they all have six protons and six electrons, even though carbon atoms may, for example, have 6 or 8 neutrons. That is why the atomic number, rather than mass number or atomic weight, is considered the identifying characteristic of a chemical element. The symbol for atomic number is Z. Isotopes Isotopes are atoms of the same element (that is, with the same number of protons in their atomic nucleus), but having different numbers of neutrons. Thus, for example, there are three main isotopes of carbon. All carbon atoms have 6 protons in the nucleus, but they can have either 6, 7, or 8 neutrons. Since the mass numbers of these are 12, 13 and 14 respectively, the three isotopes of carbon are known as carbon-12, carbon-13, and carbon-14, often abbreviated to 12C, 13C, and 14C. Carbon in everyday life and in chemistry is a mixture of 12C (about 98.9%), 13C (about 1.1%) and about 1 atom per trillion of 14C. Most (54 of 94) naturally occurring elements have more than one stable isotope. Except for the isotopes of hydrogen (which differ greatly from each other in relative mass—enough to cause chemical effects), the isotopes of a given element are chemically nearly indistinguishable. All of the elements have some isotopes that are radioactive (radioisotopes), although not all of these radioisotopes occur naturally. The radioisotopes typically decay into other elements upon radiating an alpha or beta particle. If an element has isotopes that are not radioactive, these are termed "stable" isotopes. All of the known stable isotopes occur naturally (see primordial isotope). The many radioisotopes that are not found in nature have been characterized after being artificially made. Certain elements have no stable isotopes and are composed only of radioactive isotopes: specifically the elements without any stable isotopes are technetium (atomic number 43), promethium (atomic number 61), and all observed elements with atomic numbers greater than 82. Of the 80 elements with at least one stable isotope, 26 have only a single stable isotope. The mean number of stable isotopes for the 80 stable elements is 3.1 stable isotopes per element. The largest number of stable isotopes that occur for a single element is 10 (for tin, element 50). Isotopic mass and atomic mass The mass number of an element, A, is the number of nucleons (protons and neutrons) in the atomic nucleus. Different isotopes of a given element are distinguished by their mass numbers, which are conventionally written as a superscript on the left hand side of the atomic symbol (e.g. 238U). The mass number is always a whole number and has units of "nucleons". For example, magnesium-24 (24 is the mass number) is an atom with 24 nucleons (12 protons and 12 neutrons). 
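A minimal worked illustration of the relationship just described, using only the figures quoted above: the neutron count N of a nuclide follows from its mass number A and atomic number Z as
\[
N = A - Z, \qquad \text{e.g. } {}^{14}\mathrm{C}: \; N = 14 - 6 = 8, \qquad {}^{24}\mathrm{Mg}: \; N = 24 - 12 = 12.
\]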
Whereas the mass number simply counts the total number of neutrons and protons and is thus a natural (or whole) number, the atomic mass of a particular isotope (or "nuclide") of the element is the mass of a single atom of that isotope, and is typically expressed in daltons (symbol: Da), or unified atomic mass units (symbol: u). Its relative atomic mass is a dimensionless number equal to the atomic mass divided by the atomic mass constant, which equals 1 Da. In general, the mass number of a given nuclide differs in value slightly from its relative atomic mass, since the mass of each proton and neutron is not exactly 1 Da; since the electrons contribute a lesser share to the atomic mass as neutron number exceeds proton number; and because of the nuclear binding energy and the electron binding energy. For example, the atomic mass of chlorine-35 to five significant digits is 34.969 Da and that of chlorine-37 is 36.966 Da. However, the relative atomic mass of each isotope is quite close to its mass number (always within 1%). The only isotope whose atomic mass is exactly a natural number is 12C, which has a mass of 12 Da because the dalton is defined as 1/12 of the mass of a free neutral carbon-12 atom in the ground state. The standard atomic weight (commonly called "atomic weight") of an element is the average of the atomic masses of all the chemical element's isotopes as found in a particular environment, weighted by isotopic abundance, relative to the atomic mass unit. This number may be a fraction that is not close to a whole number. For example, the relative atomic mass of chlorine is 35.453 u, which differs greatly from a whole number as it is an average of about 76% chlorine-35 and 24% chlorine-37. Whenever a relative atomic mass value differs by more than 1% from a whole number, it is due to this averaging effect, as significant amounts of more than one isotope are naturally present in a sample of that element. Chemically pure and isotopically pure Chemists and nuclear scientists have different definitions of a pure element. In chemistry, a pure element means a substance whose atoms all (or in practice almost all) have the same atomic number, or number of protons. Nuclear scientists, however, define a pure element as one that consists of only one stable isotope. For example, a copper wire is 99.99% chemically pure if 99.99% of its atoms are copper, with 29 protons each. However, it is not isotopically pure since ordinary copper consists of two stable isotopes, 69% 63Cu and 31% 65Cu, with different numbers of neutrons. However, a pure gold ingot would be both chemically and isotopically pure, since ordinary gold consists only of one isotope, 197Au. Allotropes Atoms of chemically pure elements may bond to each other chemically in more than one way, allowing the pure element to exist in multiple chemical structures (spatial arrangements of atoms), known as allotropes, which differ in their properties. For example, carbon can be found as diamond, which has a tetrahedral structure around each carbon atom; graphite, which has layers of carbon atoms with a hexagonal structure stacked on top of each other; graphene, which is a single layer of graphite that is very strong; fullerenes, which have nearly spherical shapes; and carbon nanotubes, which are tubes with a hexagonal structure (even these may differ from each other in electrical properties). The ability of an element to exist in one of many structural forms is known as 'allotropy'. 
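As a brief worked sketch of the weighted-average calculation described under "Isotopic mass and atomic mass" above, using the rounded chlorine abundances quoted there (the recommended abundances are slightly more precise):
\[
\bar{m}(\mathrm{Cl}) \approx 0.76 \times 34.969\ \mathrm{Da} + 0.24 \times 36.966\ \mathrm{Da} \approx 35.45\ \mathrm{Da},
\]
in agreement with the tabulated standard atomic weight of about 35.453.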
The reference state of an element is defined by convention, usually as the thermodynamically most stable allotrope and physical state at a pressure of 1 bar and a given temperature (typically at 298.15K). However, for phosphorus, the reference state is white phosphorus even though it is not the most stable allotrope. In thermochemistry, an element is defined to have an enthalpy of formation of zero in its reference state. For example, the reference state for carbon is graphite, because the structure of graphite is more stable than that of the other allotropes. Properties Several kinds of descriptive categorizations can be applied broadly to the elements, including consideration of their general physical and chemical properties, their states of matter under familiar conditions, their melting and boiling points, their densities, their crystal structures as solids, and their origins. General properties Several terms are commonly used to characterize the general physical and chemical properties of the chemical elements. A first distinction is between metals, which readily conduct electricity, nonmetals, which do not, and a small group, (the metalloids), having intermediate properties and often behaving as semiconductors. A more refined classification is often shown in colored presentations of the periodic table. This system restricts the terms "metal" and "nonmetal" to only certain of the more broadly defined metals and nonmetals, adding additional terms for certain sets of the more broadly viewed metals and nonmetals. The version of this classification used in the periodic tables presented here includes: actinides, alkali metals, alkaline earth metals, halogens, lanthanides, transition metals, post-transition metals, metalloids, reactive nonmetals, and noble gases. In this system, the alkali metals, alkaline earth metals, and transition metals, as well as the lanthanides and the actinides, are special groups of the metals viewed in a broader sense. Similarly, the reactive nonmetals and the noble gases are nonmetals viewed in the broader sense. In some presentations, the halogens are not distinguished, with astatine identified as a metalloid and the others identified as nonmetals. States of matter Another commonly used basic distinction among the elements is their state of matter (phase), whether solid, liquid, or gas, at a selected standard temperature and pressure (STP). Most of the elements are solids at conventional temperatures and atmospheric pressure, while several are gases. Only bromine and mercury are liquids at 0 degrees Celsius (32 degrees Fahrenheit) and normal atmospheric pressure; caesium and gallium are solids at that temperature, but melt at 28.4 °C (83.2 °F) and 29.8 °C (85.6 °F), respectively. Melting and boiling points Melting and boiling points, typically expressed in degrees Celsius at a pressure of one atmosphere, are commonly used in characterizing the various elements. While known for most elements, either or both of these measurements is still undetermined for some of the radioactive elements available in only tiny quantities. Since helium remains a liquid even at absolute zero at atmospheric pressure, it has only a boiling point, and not a melting point, in conventional presentations. Densities The density at selected standard temperature and pressure (STP) is frequently used in characterizing the elements. Density is often expressed in grams per cubic centimeter (g/cm3). 
Since several elements are gases at commonly encountered temperatures, their densities are usually stated for their gaseous forms; when liquefied or solidified, the gaseous elements have densities similar to those of the other elements. When an element has allotropes with different densities, one representative allotrope is typically selected in summary presentations, while densities for each allotrope can be stated where more detail is provided. For example, the three familiar allotropes of carbon (amorphous carbon, graphite, and diamond) have densities of 1.8–2.1, 2.267, and 3.515 g/cm3, respectively. Crystal structures The elements studied to date as solid samples have eight kinds of crystal structures: cubic, body-centered cubic, face-centered cubic, hexagonal, monoclinic, orthorhombic, rhombohedral, and tetragonal. For some of the synthetically produced transuranic elements, available samples have been too small to determine crystal structures. Occurrence and origin on Earth Chemical elements may also be categorized by their origin on Earth, with the first 94 considered naturally occurring, while those with atomic numbers beyond 94 have only been produced artificially as the synthetic products of human-made nuclear reactions. Of the 94 naturally occurring elements, 83 are considered primordial and either stable or weakly radioactive. The remaining 11 naturally occurring elements possess half-lives too short for them to have been present at the beginning of the Solar System, and are therefore considered transient elements. Of these 11 transient elements, 5 (polonium, radon, radium, actinium, and protactinium) are relatively common decay products of thorium and uranium. The remaining 6 transient elements (technetium, promethium, astatine, francium, neptunium, and plutonium) occur only rarely, as products of rare decay modes or nuclear reaction processes involving uranium or other heavy elements. No radioactive decay has been observed for elements with atomic numbers 1 through 82, except 43 (technetium) and 61 (promethium). Observationally stable isotopes of some elements (such as tungsten and lead), however, are predicted to be slightly radioactive with very long half-lives: for example, the half-lives predicted for the observationally stable lead isotopes range from 10^35 to 10^189 years. Elements with atomic numbers 43, 61, and 83 through 94 are unstable enough that their radioactive decay can readily be detected. Three of these elements, bismuth (element 83), thorium (element 90), and uranium (element 92) have one or more isotopes with half-lives long enough to survive as remnants of the explosive stellar nucleosynthesis that produced the heavy elements before the formation of the Solar System. For example, at over 1.9×10^19 years, over a billion times longer than the current estimated age of the universe, bismuth-209 has the longest known alpha decay half-life of any naturally occurring element. The very heaviest 24 elements (those beyond plutonium, element 94) undergo radioactive decay with short half-lives and cannot be produced as daughters of longer-lived elements, and thus are not known to occur in nature at all. Periodic table The properties of the chemical elements are often summarized using the periodic table, which powerfully and elegantly organizes the elements by increasing atomic number into rows ("periods") in which the columns ("groups") share recurring ("periodic") physical and chemical properties. The current standard table contains 118 confirmed elements as of 2021. 
Although earlier precursors to this presentation exist, its invention is generally credited to the Russian chemist Dmitri Mendeleev in 1869, who intended the table to illustrate recurring trends in the properties of the elements. The layout of the table has been refined and extended over time as new elements have been discovered and new theoretical models have been developed to explain chemical behavior. Use of the periodic table is now ubiquitous within the academic discipline of chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior. The table has also found wide application in physics, geology, biology, materials science, engineering, agriculture, medicine, nutrition, environmental health, and astronomy. Its principles are especially important in chemical engineering. Nomenclature and symbols The various chemical elements are formally identified by their unique atomic numbers, by their accepted names, and by their symbols. Atomic numbers The known elements have atomic numbers from 1 through 118, conventionally presented as Arabic numerals. Since the elements can be uniquely sequenced by atomic number, conventionally from lowest to highest (as in a periodic table), sets of elements are sometimes specified by such notation as "through", "beyond", or "from ... through", as in "through iron", "beyond uranium", or "from lanthanum through lutetium". The terms "light" and "heavy" are sometimes also used informally to indicate relative atomic numbers (not densities), as in "lighter than carbon" or "heavier than lead", although technically the weight or mass of atoms of an element (their atomic weights or atomic masses) do not always increase monotonically with their atomic numbers. Element names The naming of various substances now known as elements precedes the atomic theory of matter, as names were given locally by various cultures to various minerals, metals, compounds, alloys, mixtures, and other materials, although at the time it was not known which chemicals were elements and which compounds. As they were identified as elements, the existing names for anciently known elements (e.g., gold, mercury, iron) were kept in most countries. National differences emerged over the names of elements either for convenience, linguistic niceties, or nationalism. For a few illustrative examples: German speakers use "Wasserstoff" (water substance) for "hydrogen", "Sauerstoff" (acid substance) for "oxygen" and "Stickstoff" (smothering substance) for "nitrogen", while English and some romance languages use "sodium" for "natrium" and "potassium" for "kalium", and the French, Italians, Greeks, Portuguese and Poles prefer "azote/azot/azoto" (from roots meaning "no life") for "nitrogen". For purposes of international communication and trade, the official names of the chemical elements both ancient and more recently recognized are decided by the International Union of Pure and Applied Chemistry (IUPAC), which has decided on a sort of international English language, drawing on traditional English names even when an element's chemical symbol is based on a Latin or other traditional word, for example adopting "gold" rather than "aurum" as the name for the 79th element (Au). IUPAC prefers the British spellings "aluminium" and "caesium" over the U.S. spellings "aluminum" and "cesium", and the U.S. "sulfur" over the British "sulphur". 
However, elements that are practical to sell in bulk in many countries often still have locally used national names, and countries whose national language does not use the Latin alphabet are likely to use the IUPAC element names. According to IUPAC, chemical elements are not proper nouns in English; consequently, the full name of an element is not routinely capitalized in English, even if derived from a proper noun, as in californium and einsteinium. Isotope names of chemical elements are also uncapitalized if written out, e.g., carbon-12 or uranium-235. Chemical element symbols (such as Cf for californium and Es for einsteinium), are always capitalized (see below). In the second half of the twentieth century, physics laboratories became able to produce nuclei of chemical elements with half-lives too short for an appreciable amount of them to exist at any time. These are also named by IUPAC, which generally adopts the name chosen by the discoverer. This practice can lead to the controversial question of which research group actually discovered an element, a question that delayed the naming of elements with atomic number of 104 and higher for a considerable amount of time. (See element naming controversy). Precursors of such controversies involved the nationalistic namings of elements in the late 19th century. For example, lutetium was named in reference to Paris, France. The Germans were reluctant to relinquish naming rights to the French, often calling it cassiopeium. Similarly, the British discoverer of niobium originally named it columbium, in reference to the New World. It was used extensively as such by American publications before the international standardization (in 1950). Chemical symbols Specific chemical elements Before chemistry became a science, alchemists had designed arcane symbols for both metals and common compounds. These were however used as abbreviations in diagrams or procedures; there was no concept of atoms combining to form molecules. With his advances in the atomic theory of matter, John Dalton devised his own simpler symbols, based on circles, to depict molecules. The current system of chemical notation was invented by Berzelius. In this typographical system, chemical symbols are not mere abbreviations—though each consists of letters of the Latin alphabet. They are intended as universal symbols for people of all languages and alphabets. The first of these symbols were intended to be fully universal. Since Latin was the common language of science at that time, they were abbreviations based on the Latin names of metals. Cu comes from cuprum, Fe comes from ferrum, Ag from argentum. The symbols were not followed by a period (full stop) as with abbreviations. Later chemical elements were also assigned unique chemical symbols, based on the name of the element, but not necessarily in English. For example, sodium has the chemical symbol 'Na' after the Latin natrium. The same applies to "Fe" (ferrum) for iron, "Hg" (hydrargyrum) for mercury, "Sn" (stannum) for tin, "Au" (aurum) for gold, "Ag" (argentum) for silver, "Pb" (plumbum) for lead, "Cu" (cuprum) for copper, and "Sb" (stibium) for antimony. "W" (wolfram) for tungsten ultimately derives from German, "K" (kalium) for potassium ultimately from Arabic. Chemical symbols are understood internationally when element names might require translation. There have sometimes been differences in the past. For example, Germans in the past have used "J" (for the alternate name Jod) for iodine, but now use "I" and "Iod". 
The first letter of a chemical symbol is always capitalized, as in the preceding examples, and the subsequent letters, if any, are always lower case (small letters). Thus, the symbols for californium and einsteinium are Cf and Es. General chemical symbols There are also symbols in chemical equations for groups of chemical elements, for example in comparative formulas. These are often a single capital letter, and the letters are reserved and not used for names of specific elements. For example, an "X" indicates a variable group (usually a halogen) in a class of compounds, while "R" is a radical, meaning a compound structure such as a hydrocarbon chain. The letter "Q" is reserved for "heat" in a chemical reaction. "Y" is also often used as a general chemical symbol, although it is also the symbol of yttrium. "Z" is also frequently used as a general variable group. "E" is used in organic chemistry to denote an electron-withdrawing group or an electrophile; similarly "Nu" denotes a nucleophile. "L" is used to represent a general ligand in inorganic and organometallic chemistry. "M" is also often used in place of a general metal. At least two additional, two-letter generic chemical symbols are also in informal usage, "Ln" for any lanthanide element and "An" for any actinide element. "Rg" was formerly used for any rare gas element, but the group of rare gases has now been renamed noble gases and the symbol "Rg" has now been assigned to the element roentgenium. Isotope symbols Isotopes are distinguished by the atomic mass number (total protons and neutrons) for a particular isotope of an element, with this number combined with the pertinent element's symbol. IUPAC prefers that isotope symbols be written in superscript notation when practical, for example 12C and 235U. However, other notations, such as carbon-12 and uranium-235, or C-12 and U-235, are also used. As a special case, the three naturally occurring isotopes of the element hydrogen are often specified as H for 1H (protium), D for 2H (deuterium), and T for 3H (tritium). This convention is easier to use in chemical equations, replacing the need to write out the mass number for each atom. For example, the formula for heavy water may be written D2O instead of 2H2O. Origin of the elements Only about 4% of the total mass of the universe is made of atoms or ions, and thus represented by chemical elements. This fraction is about 15% of the total matter, with the remainder of the matter (85%) being dark matter. The nature of dark matter is unknown, but it is not composed of atoms of chemical elements because it contains no protons, neutrons, or electrons. (The remaining non-matter part of the mass of the universe is composed of the even less well understood dark energy). The 94 naturally occurring chemical elements were produced by at least four classes of astrophysical process. Most of the hydrogen, helium and a very small quantity of lithium were produced in the first few minutes of the Big Bang. This Big Bang nucleosynthesis happened only once; the other processes are ongoing. Nuclear fusion inside stars produces elements through stellar nucleosynthesis, including all elements from carbon to iron in atomic number. Elements higher in atomic number than iron, including heavy elements like uranium and plutonium, are produced by various forms of explosive nucleosynthesis in supernovae and neutron star mergers. 
The light elements lithium, beryllium, and boron are produced mostly through cosmic ray spallation (fragmentation induced by cosmic rays) of carbon, nitrogen, and oxygen. During the early phases of the Big Bang, nucleosynthesis of hydrogen nuclei resulted in the production of hydrogen-1 (protium, 1H) and helium-4 (4He), as well as a smaller amount of deuterium (2H) and minuscule amounts (on the order of 10⁻¹⁰) of lithium and beryllium. Even smaller amounts of boron may have been produced in the Big Bang, since it has been observed in some very old stars, while carbon has not. No elements heavier than boron were produced in the Big Bang. As a result, the primordial abundance of atoms (or ions) consisted of roughly 75% 1H, 25% 4He, and 0.01% deuterium, with only tiny traces of lithium, beryllium, and perhaps boron. Subsequent enrichment of galactic halos occurred due to stellar nucleosynthesis and supernova nucleosynthesis. However, the element abundance in intergalactic space can still closely resemble primordial conditions, unless it has been enriched by some means. On Earth (and elsewhere), trace amounts of various elements continue to be produced from other elements as products of nuclear transmutation processes. These include some produced by cosmic rays or other nuclear reactions (see cosmogenic and nucleogenic nuclides), and others produced as decay products of long-lived primordial nuclides. For example, trace (but detectable) amounts of carbon-14 (14C) are continually produced in the atmosphere by cosmic rays impacting nitrogen atoms, and argon-40 (40Ar) is continually produced by the decay of primordially occurring but unstable potassium-40 (40K). Also, three primordially occurring but radioactive actinides, thorium, uranium, and plutonium, decay through a series of recurrently produced but unstable radioactive elements such as radium and radon, which are transiently present in any sample of these metals or their ores or compounds. Three other radioactive elements, technetium, promethium, and neptunium, occur only incidentally in natural materials, produced as individual atoms by nuclear fission of the nuclei of various heavy elements or in other rare nuclear processes. In addition to the 94 naturally occurring elements, several artificial elements have been produced by human nuclear physics technology. These experiments have produced all elements up to atomic number 118. Abundance The following graph (note log scale) shows the abundance of elements in our Solar System. The table shows the twelve most common elements in our galaxy (estimated spectroscopically), as measured in parts per million, by mass. Nearby galaxies that have evolved along similar lines have a corresponding enrichment of elements heavier than hydrogen and helium. The more distant galaxies are being viewed as they appeared in the past, so their abundances of elements appear closer to the primordial mixture. As physical laws and processes appear common throughout the visible universe, however, scientists expect that these galaxies evolved elements in similar abundance. The abundance of elements in the Solar System is in keeping with their origin from nucleosynthesis in the Big Bang and a number of progenitor supernova stars.
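The mass fractions quoted above (roughly 75% 1H and 25% 4He) can also be read as number fractions of nuclei, since a helium-4 nucleus is about four times as massive as a hydrogen-1 nucleus. The short Python sketch below is illustrative arithmetic only, not drawn from the article's sources, and it ignores the trace deuterium, lithium, and beryllium.

# Convert the primordial mass fractions quoted above into number fractions.
# Rounded figures; traces of deuterium, lithium, and beryllium are ignored.
mass_fraction = {"1H": 0.75, "4He": 0.25}
mass_number = {"1H": 1, "4He": 4}

relative_number = {k: mass_fraction[k] / mass_number[k] for k in mass_fraction}
total = sum(relative_number.values())
for nuclide, n in relative_number.items():
    print(f"{nuclide}: {n / total:.0%} of nuclei")  # 1H: 92%, 4He: 8%

In other words, by number of atoms the early universe was roughly 92% hydrogen and 8% helium, even though helium carried about a quarter of the mass.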
Very abundant hydrogen and helium are products of the Big Bang, but the next three elements are rare since they had little time to form in the Big Bang and are not made in stars (they are, however, produced in small quantities by the breakup of heavier elements in interstellar dust, as a result of impact by cosmic rays). Beginning with carbon, elements are produced in stars by buildup from alpha particles (helium nuclei), resulting in an alternating pattern of abundances in which elements with even atomic numbers are more abundant (these are also more stable). In general, such elements up to iron are made in large stars in the process of becoming supernovae. Iron-56 is particularly common, since it is the most stable nuclide that can easily be made from alpha particles (being a product of decay of radioactive nickel-56, ultimately made from 14 helium nuclei). Elements heavier than iron are made in energy-absorbing processes in large stars, and their abundance in the universe (and on Earth) generally decreases with their atomic number. The abundance of the chemical elements on Earth varies from air to crust to ocean, and in various types of life. The abundance of elements in Earth's crust differs from that in the Solar System (as seen in the Sun and heavy planets like Jupiter) mainly in the selective loss of the very lightest elements (hydrogen and helium) and also volatile neon, carbon (as hydrocarbons), nitrogen, and sulfur, as a result of solar heating in the early formation of the solar system. Oxygen, the most abundant Earth element by mass, is retained on Earth by combination with silicon. Aluminium, at 8% by mass, is more common in the Earth's crust than in the universe and solar system, but the composition of the far more bulky mantle, which has magnesium and iron in place of aluminium (which occurs there only at 2% of mass), more closely mirrors the elemental composition of the solar system, save for the noted loss of volatile elements to space and the loss of iron, which has migrated to the Earth's core. The composition of the human body, by contrast, more closely follows the composition of seawater—save that the human body has additional stores of carbon and nitrogen necessary to form the proteins and nucleic acids, together with phosphorus in the nucleic acids and the energy-transfer molecule adenosine triphosphate (ATP) that occurs in the cells of all living organisms. Certain kinds of organisms require particular additional elements, for example the magnesium in chlorophyll in green plants, the calcium in mollusc shells, or the iron in the hemoglobin in vertebrate animals' red blood cells. History Evolving definitions The concept of an "element" as an indivisible substance has developed through three major historical phases: classical definitions (such as those of the ancient Greeks), chemical definitions, and atomic definitions. Classical definitions Ancient philosophy posited a set of classical elements to explain observed patterns in nature. These elements originally referred to earth, water, air, and fire rather than the chemical elements of modern science. The term 'elements' (stoicheia) was first used by the Greek philosopher Plato in about 360 BCE in his dialogue Timaeus, which includes a discussion of the composition of inorganic and organic bodies and is a speculative treatise on chemistry. Plato believed the elements introduced a century earlier by Empedocles were composed of small polyhedral forms: tetrahedron (fire), octahedron (air), icosahedron (water), and cube (earth).
Aristotle also used the term stoicheia and added a fifth element called aether, which formed the heavens. Aristotle defined an element as: Chemical definitions In 1661, Robert Boyle proposed his theory of corpuscularism, which favoured the analysis of matter as constituted by irreducible units of matter (atoms) and, choosing to side with neither Aristotle's view of the four elements nor Paracelsus' view of three fundamental elements, left open the question of the number of elements. The first modern list of chemical elements was given in Antoine Lavoisier's 1789 Elements of Chemistry, which contained thirty-three elements, including light and caloric. By 1818, Jöns Jakob Berzelius had determined atomic weights for forty-five of the forty-nine then-accepted elements. Dmitri Mendeleev had sixty-six elements in his periodic table of 1869. From Boyle until the early 20th century, an element was defined as a pure substance that could not be decomposed into any simpler substance. Put another way, a chemical element cannot be transformed into other chemical elements by chemical processes. Elements during this time were generally distinguished by their atomic weights, a property measurable with fair accuracy by available analytical techniques. Atomic definitions The 1913 discovery by English physicist Henry Moseley that the nuclear charge is the physical basis for an atom's atomic number, further refined when the nature of protons and neutrons became appreciated, eventually led to the current definition of an element based on atomic number (number of protons per atomic nucleus). The use of atomic numbers, rather than atomic weights, to distinguish elements has greater predictive value (since these numbers are integers), and also resolves some ambiguities in the chemistry-based view due to varying properties of isotopes and allotropes within the same element. Currently, IUPAC defines an element to exist if it has isotopes with a lifetime longer than the 10⁻¹⁴ seconds it takes the nucleus to form an electronic cloud. By 1914, seventy-two elements were known, all naturally occurring. The remaining naturally occurring elements were discovered or isolated in subsequent decades, and various additional elements have also been produced synthetically, with much of that work pioneered by Glenn T. Seaborg. In 1955, element 101 was discovered and named mendelevium in honor of D.I. Mendeleev, the first to arrange the elements in a periodic manner. Discovery and recognition of various elements Ten materials familiar to various prehistoric cultures are now known to be chemical elements: Carbon, copper, gold, iron, lead, mercury, silver, sulfur, tin, and zinc. Three additional materials now accepted as elements, arsenic, antimony, and bismuth, were recognized as distinct substances prior to 1500 AD. Phosphorus, cobalt, and platinum were isolated before 1750. Most of the remaining naturally occurring chemical elements were identified and characterized by 1900, including: Such now-familiar industrial materials as aluminium, silicon, nickel, chromium, magnesium, and tungsten Reactive metals such as lithium, sodium, potassium, and calcium The halogens fluorine, chlorine, bromine, and iodine Gases such as hydrogen, oxygen, nitrogen, helium, argon, and neon Most of the rare-earth elements, including cerium, lanthanum, gadolinium, and neodymium.
The more common radioactive elements, including uranium, thorium, radium, and radon Elements isolated or produced since 1900 include: The three remaining undiscovered regularly occurring stable natural elements: hafnium, lutetium, and rhenium Plutonium, which was first produced synthetically in 1940 by Glenn T. Seaborg, but is now also known from a few long-persisting natural occurrences The three incidentally occurring natural elements (neptunium, promethium, and technetium), which were all first produced synthetically but later discovered in trace amounts in certain geological samples Four scarce decay products of uranium or thorium (astatine, francium, actinium, and protactinium), and Various synthetic transuranic elements, beginning with americium and curium Recently discovered elements The first transuranium element (element with atomic number greater than 92) discovered was neptunium in 1940. Since 1999, claims for the discovery of new elements have been considered by the IUPAC/IUPAP Joint Working Party. As of January 2016, all 118 elements have been confirmed by IUPAC as being discovered. The discovery of element 112 was acknowledged in 2009, and the name copernicium and the atomic symbol Cn were suggested for it. The name and symbol were officially endorsed by IUPAC on 19 February 2010. The heaviest element believed to have been synthesized to date is element 118, oganesson, reported on 9 October 2006 by the Flerov Laboratory of Nuclear Reactions in Dubna, Russia. Tennessine, element 117, was the latest element claimed to be discovered, in 2009. On 28 November 2016, IUPAC officially recognized the names for the four newest chemical elements, with atomic numbers 113, 115, 117, and 118. List of the 118 known chemical elements The following sortable table shows the 118 known chemical elements. Atomic number, Element, and Symbol all serve independently as unique identifiers. Element names are those accepted by IUPAC. Block indicates the periodic table block for each element: red = s-block, yellow = p-block, blue = d-block, green = f-block. Group and period refer to an element's position in the periodic table. Group numbers here show the currently accepted numbering; for older numberings, see Group (periodic table). See also Biological roles of the elements Chemical database Discovery of the chemical elements Element collecting Fictional element Goldschmidt classification Island of stability List of nuclides List of the elements' densities Mineral (nutrient) Periodic Systems of Small Molecules Prices of chemical elements Systematic element name Table of nuclides Timeline of chemical element discoveries Roles of chemical elements References External links Videos for each element by the University of Nottingham "Chemical Elements", In Our Time, BBC Radio 4 discussion with Paul Strathern, Mary Archer and John Murrell (25 May 2000) Chemistry
5661
https://en.wikipedia.org/wiki/Centime
Centime
Centime is French for "cent", and is used in English as the name of the fractional currency unit in several Francophone countries (including Switzerland, Algeria, Belgium, Morocco and France). In France, the usage of centime goes back to the introduction of the decimal monetary system under Napoleon. This system aimed at replacing non-decimal fractions of older coins. A five-centime coin was known as a sou, i.e. a solidus or shilling. In Francophone Canada, one hundredth of a Canadian dollar is officially known as a cent (pronounced /sɛnt/) in both English and French. However, in practice, the form cenne (pronounced /sɛn/) has completely replaced the official cent. Spoken and written use of the official form cent in Francophone Canada is exceptionally uncommon. In the Canadian French vernacular, sou, sou noir (noir means "black" in French), cenne, and cenne noire are all widely known, used, and accepted monikers when referring to either one hundredth of a Canadian dollar or the 1¢ coin (colloquially known as a "penny" in North American English). Subdivision of euro: cent or centime? In the European Community, cent is the official name for one hundredth of a euro. However, in French-speaking countries, the word centime is the preferred term. The Superior Council of the French language of Belgium recommended in 2001 the use of centime, since cent is also the French word for "hundred". An analogous decision was published in the Journal officiel in France (2 December 1997). In Morocco, dirhams are divided into 100 centimes and one may find prices in the country quoted in centimes rather than in dirhams. Sometimes centimes are known as francs or, in former Spanish areas, pesetas. Usage A centime is one-hundredth of the following basic monetary units: Current Algerian dinar Burundian franc CFP franc CFA franc Comorian franc Congolese franc Djiboutian franc Ethiopian birr (as santim) Guinean franc Haitian gourde Moroccan dirham Rwandan franc Swiss franc (by French and English speakers only; Italian speakers use centesimo. See Rappen) Obsolete Algerian franc Belgian franc Cambodian franc French Camerounian franc French Guianan franc French franc Guadeloupe franc Katangese franc Latvian lats (Latvian: santīms) Luxembourgish franc Malagasy franc Malian franc Martinique franc Monegasque franc Moroccan franc New Hebrides franc Réunion franc Spanish Peseta Tunisian franc Westphalian frank References Marianne (personification)
5662
https://en.wikipedia.org/wiki/Calendar%20year
Calendar year
Generally speaking, a calendar year begins on the New Year's Day of the given calendar system and ends on the day before the following New Year's Day, and thus consists of a whole number of days. A year can also be measured by starting on any other named day of the calendar, and ending on the day before this named day in the following year. This may be termed a "year's time", but not a "calendar year". To reconcile the calendar year with the astronomical cycle (which has a fractional number of days), certain years contain extra days ("leap days" or "intercalary days"). The Gregorian year, which is in use in most of the world, begins on January 1 and ends on December 31. It has a length of 365 days in an ordinary year, with 8760 hours, 525,600 minutes, or 31,536,000 seconds; but 366 days in a leap year, with 8784 hours, 527,040 minutes, or 31,622,400 seconds. With 97 leap years every 400 years, the year has an average length of 365.2425 days. Other formula-based calendars can have lengths which are further out of step with the solar cycle: for example, the Julian calendar has an average length of 365.25 days, and the Hebrew calendar has an average length of 365.2468 days. The Lunar Hijri calendar is a lunar calendar consisting of 12 months in a year of 354 or 355 days. The astronomer's mean tropical year, which is averaged over equinoxes and solstices, is currently 365.24219 days, slightly shorter than the average length of the year in most calendars. Quarters The calendar year can be divided into four quarters, often abbreviated as Q1, Q2, Q3, and Q4. In the Gregorian calendar: First quarter, Q1: 1 January – 31 March (90 days, or 91 days in leap years) Second quarter, Q2: 1 April – 30 June (91 days) Third quarter, Q3: 1 July – 30 September (92 days) Fourth quarter, Q4: 1 October – 31 December (92 days) In the Chinese calendar, by contrast, the quarters are traditionally associated with the four seasons of the year: Spring: 1st to 3rd month Summer: 4th to 6th month Autumn: 7th to 9th month Winter: 10th to 12th month See also Academic term Calendar reform Common year Fiscal year ISO 8601 ISO week date Leap year Model year Tropical year Seasonal year References Year Units of time Types of year
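The Gregorian figures quoted in the entry above can be checked with a few lines of arithmetic. The Python sketch below is a minimal illustration, not part of the original article; it recomputes the hour, minute, and second counts and the 365.2425-day average from the standard leap-year rule.

# Recompute the Gregorian figures quoted above (illustrative only).
print(365 * 24, 365 * 24 * 60, 365 * 24 * 60 * 60)  # 8760 525600 31536000
print(366 * 24, 366 * 24 * 60, 366 * 24 * 60 * 60)  # 8784 527040 31622400

def is_gregorian_leap_year(year):
    # Divisible by 4, except century years that are not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

leap_years = sum(is_gregorian_leap_year(y) for y in range(2000, 2400))
print(leap_years)              # 97 leap years per 400-year cycle
print(365 + leap_years / 400)  # 365.2425 days on average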
5663
https://en.wikipedia.org/wiki/CFA%20franc
CFA franc
The CFA franc (, , Franc of the Financial Community of Africa, originally Franc of the French Colonies in Africa, or colloquially ; abbreviation: F.CFA) is the name of two currencies, the West African CFA franc, used in eight West African countries, and the Central African CFA franc, used in six Central African countries. Although separate, the two CFA franc currencies have always been at parity and are effectively interchangeable. The ISO currency codes are XAF for the Central African CFA franc and XOF for the West African CFA franc. On 22 December 2019, it was announced that the West African currency would be reformed and replaced by an independent currency to be called Eco. Both CFA francs have a fixed exchange rate (peg) to the euro: €1 = F.CFA 655.957 exactly, and member countries deposited half of their foreign exchange reserves with the French Treasury. The currency has been criticized for restricting the sovereignty of the African member states, effectively putting their monetary policy in the hands of the European Central Bank. Others argue that the CFA "helps stabilize the national currencies of Franc Zone member-countries and greatly facilitates the flow of exports and imports between France and the member-countries". In May 2020, the French National Assembly agreed to end the French engagement in the West African CFA franc, including the foreign reserve deposit requirements. The West African CFA franc is expected to be renamed as the "Eco" in the near future. Usage CFA francs are used in fourteen countries: twelve nations formerly ruled by France in West and Central Africa (excluding Guinea and Mauritania, which withdrew), plus Guinea-Bissau (a former Portuguese colony), and Equatorial Guinea (a former Spanish colony). These fourteen countries have a combined population of 193.1 million people (as of 2021), and a combined GDP of US$283.0 billion (as of 2021). Name Between 1945 and 1958, CFA stood for ("French colonies of Africa"); then for ("French Community of Africa") between 1958 (establishment of the French Fifth Republic) and the independence of these African countries at the beginning of the 1960s. Since independence, CFA is taken to mean (African Financial Community) or Coopération financière en Afrique centrale (see Institutions below). History Creation The CFA franc was created on 26 December 1945, along with the CFP franc. The reason for their creation was the weakness of the French franc immediately after World War II. When France ratified the Bretton Woods Agreement in December 1945, the French franc was devalued in order to set a fixed exchange rate with the US dollar. New currencies were created in the French colonies to spare them the strong devaluation, thereby making it easier for them to import goods from France (and simultaneously making it harder for them to export goods to France). French officials presented the decision as an act of generosity. René Pleven, the French Minister of Finance, was quoted as saying: Exchange rate The CFA franc was created with a fixed exchange rate versus the French franc. This exchange rate was changed only twice, in 1948 and in 1994 (besides nominal adaptation to the new French franc in 1960 and the Euro in 1999). Exchange rate: 26 December 1945 to 16 October 1948 – F.CFA 1 = 1.70 French franc. This 70 centime premium is the consequence of the creation of the CFA franc, which spared the French African colonies the devaluation of December 1945 (before December 1945, 1 local franc in these colonies was worth 1 French franc). 
17 October 1948 to 31 December 1959 – F.CFA 1 = 2 French francs (the CFA franc had followed the French franc's devaluation versus the US dollar in January 1948, but on 18 October 1948, the French franc devalued again and this time the CFA franc was revalued against the French franc to offset almost all of this new devaluation of the French franc; after October 1948, the CFA followed all the successive devaluations of the French franc) 1 January 1960 to 11 January 1994– F.CFA 1 = NF 0.02 (1 January 1960: the French franc redenominated, with 100 old francs becoming 1 new franc) 12 January 1994 to 31 December 1998– F.CFA 1 = F 0.01. An overnight 50% devaluation. 1 January 1999 onwards – F.CFA 100 = €0.152449 or €1 euro = F.CFA 655.957. (1 January 1999: the euro replaced FRF at the rate of 6.55957 FRF for 1 euro) The 1960 and 1999 events merely reflect changes of currency in use in France: the actual relative value of the CFA franc versus the French franc/euro only changed in 1948 and 1994. Changes in countries using the franc Over time, the number of countries and territories using the CFA franc has changed as some countries began introducing their own separate currencies. A couple of nations in West Africa have also chosen to adopt the CFA franc since its introduction, despite the fact that they had never been French colonies. 1960: Guinea leaves and begins issuing Guinean francs. 1962: Mali leaves and begins issuing Malian francs. 1973: Madagascar leaves (in 1972, according to another source) and begins issuing its own francs, the Malagasy franc, which ran concurrently with the Malagasy ariary (1 ariary = 5 Malagasy francs). 1973: Mauritania leaves, replacing the franc with the Mauritanian ouguiya (1 ouguiya = 5 CFA francs). 1974: Saint-Pierre and Miquelon leaves for French franc, which changed later to the Euro 1975: Réunion leaves for French franc, which changed later to the Euro 1976: Mayotte leaves for French franc, which changed later to the Euro 1984: Mali rejoins (1 CFA franc = 2 Malian francs). 
1985: Equatorial Guinea joins (1 franc = 4 bipkwele) 1997: Guinea-Bissau joins (1 franc = 65 pesos) European Monetary Union In 1998, in anticipation of Economic and Monetary Union of the European Union, the Council of the European Union addressed the monetary agreements France had with the CFA Zone and Comoros and ruled that: The agreements are unlikely to have any material effect on the monetary and exchange rate policy of the Eurozone In their present forms and states of implementation, the agreements are unlikely to present any obstacle to a smooth functioning of economic and monetary union Nothing in the agreements can be construed as implying an obligation for the European Central Bank (ECB) or any national central bank to support the convertibility of the CFA and Comorian francs Modifications to the existing agreements will not lead to any obligations for the European Central or any national central bank The French Treasury will guarantee the free convertibility at a fixed parity between the euro and the CFA and Comorian francs The competent French authorities shall keep the European Commission, the European Central Bank and the Economic and Financial Committee informed about the implementation of the agreements and inform the Committee prior to changes of the parity between the euro and the CFA and Comorian francs Any change to the nature or scope of the agreements would require Council approval on the basis of a Commission recommendation and ECB consultation Criticism and replacement in West Africa The currency has been criticized for making national monetary policy for the developing countries of French West Africa all but impossible, since the CFA's value is pegged to the euro (whose monetary policy is set by the European Central Bank). Others disagree and argue that the CFA "helps stabilize the national currencies of Franc Zone member-countries and greatly facilitates the flow of exports and imports between France and the member-countries". The European Union's 2008 assessment of the CFA's link to the euro noted that "benefits from economic integration within each of the two monetary unions of the CFA franc zone, and even more so between them, remained remarkably low" but that "the peg to the French franc and, since 1999, to the euro as exchange rate anchor is usually found to have had favourable effects in the region in terms of macroeconomic stability". Critics point out that the currency is controlled by the French treasury, and in turn African countries channel more money to France than they receive in aid and have no sovereignty over their monetary policies. In January 2019, Italian ministers accused France of impoverishing Africa through the CFA franc, and criticism continued from various African organizations. On 21 December 2019, President Alassane Ouattara of the Ivory Coast and President Emmanuel Macron of France announced an initiative to replace the West African CFA Franc with the Eco. Subsequently, a reform of the West African CFA franc was initiated. In May 2020, the French National Assembly agreed to end the French engagement in the West African CFA franc. The countries using the currency will no longer have to deposit half of their foreign exchange reserves with the French Treasury. The broader Economic Community of West African States (ECOWAS), which includes the members of UEMOA, plans to introduce its own common currency for its member states by 2027, for which they have also formally adopted the name Eco. 
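As an illustration of the fixed parity mentioned above (1 euro = 655.957 CFA francs, for both the West African and Central African currencies), converting between the two units is a single multiplication or division. The Python sketch below is a minimal example; the function names are illustrative rather than part of any official API, and it ignores the fees and rounding rules that apply to real transactions.

# Fixed peg quoted above: 1 euro = 655.957 F.CFA (both XOF and XAF).
CFA_PER_EUR = 655.957

def eur_to_cfa(amount_eur):
    return amount_eur * CFA_PER_EUR

def cfa_to_eur(amount_cfa):
    return amount_cfa / CFA_PER_EUR

print(eur_to_cfa(1))                # 655.957 F.CFA
print(round(cfa_to_eur(10000), 4))  # 15.2449 euros, matching F.CFA 100 = 0.152449 euro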
Debate on ending the Central African CFA On April 25, 2023, the subject of the CFA franc was discussed at the ministerial meeting of the Economic and Monetary Community of Central Africa (CEMAC) and France. The French perceive the guarantee provided to the CFA franc, and the assurance of its convertibility, as a pillar of economic stability for the region. France remains “open” and “available” to CEMAC proposals to reform monetary cooperation in Central Africa, as has happened in West Africa. Institutions There are two different currencies called the CFA franc: the West African CFA franc (ISO 4217 currency code XOF), and the Central Africa CFA franc (ISO 4217 currency code XAF). They are distinguished in French by the meaning of the abbreviation CFA. These two CFA francs have the same exchange rate with the euro (1 euro = 655.957 XOF = 655.957 XAF), and they are both guaranteed by the French treasury (), but the two currencies are only legal tender in their respective member countries. West African The West African CFA franc (XOF) is known in French as the , where CFA stands for ('Financial Community of Africa') or ("African Financial Community"). It is issued by the BCEAO (, i.e., "Central Bank of the West African States"), located in Dakar, Senegal, for the eight countries of the UEMOA (, i.e., "West African Economic and Monetary Union"): These eight countries have a combined population of 134.7 million people (as of 2021), and a combined GDP of US$179.7 billion (as of 2021). Central African The Central Africa CFA franc (XAF) is known in French as the , where CFA stands for ("Financial Cooperation in Central Africa"). It is issued by the BEAC (, i.e., "Bank of the Central African States"), located in Yaoundé, Cameroon, for the six countries of the CEMAC (, i.e., "Economic and Monetary Community of Central Africa"): These six countries have a combined population of 58.4 million people (as of 2021), and a combined GDP of US$103.3 billion (as of 2021). In 1975, Central African CFA banknotes were issued with an obverse unique to each participating country, and common reverse, in a fashion similar to euro coins. Equatorial Guinea, the only former Spanish colony in the zone, adopted the CFA in 1984. Gallery See also AM-Franc Comorian franc Currencies related to the euro CFP franc Réunion franc Reichmark References External links History of the CFA franc Franc zone information at Banque de France (in French, but more extensive than the English version) Decision of the Council of Europe on 23 November 1998 regarding the CFA and Comorian francs "For better or worse: the euro and the CFA franc", Africa Recovery, Department of Public Information, United Nations (April 1999) Other Central Bank of Madagascar The CFA franc zone and the EMU Aubin Nzaou-Kongo, International Law and Monetary Sovereignty, African Review of Law, 2020 Economy of Benin Economy of Chad Currencies introduced in 1945 Fixed exchange rate French West Africa Currencies of Cameroon
5664
https://en.wikipedia.org/wiki/Consciousness
Consciousness
Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations and debate by philosophers, theologians, and all of science. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition. Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not. The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked. Examples of the range of descriptions, definitions or explanations are: simple wakefulness, one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event or mental process of the brain. Etymology In the late 20th century, philosophers like Hamlyn, Rorty, and Wilkes have disagreed with Kahn, Hardie and Modrak as to whether Aristotle even had a concept of consciousness. Aristotle does not use any single word or terminology to name the phenomenon; it is used only much later, especially by John Locke. Caston contends that for Aristotle, perceptual awareness was somewhat the same as what modern philosophers call consciousness. The origin of the modern concept of consciousness is often attributed to Locke's Essay Concerning Human Understanding, published in 1690. Locke defined consciousness as "the perception of what passes in a man's own mind". His essay influenced the 18th-century view of consciousness, and his definition appeared in Samuel Johnson's celebrated Dictionary (1755). "Consciousness" (French: conscience) is also defined in the 1753 volume of Diderot and d'Alembert's Encyclopédie, as "the opinion or internal feeling that we ourselves have from what we do". The earliest English language uses of "conscious" and "consciousness" date back, however, to the 1500s. The English word "conscious" originally derived from the Latin conscius (con- "together" and scio "to know"), but the Latin word did not have the same meaning as the English word—it meant "knowing with", in other words, "having joint or common knowledge with another". There were, however, many occurrences in Latin writings of the phrase conscius sibi, which translates literally as "knowing with oneself", or in other words "sharing knowledge with oneself about something". This phrase had the figurative meaning of "knowing that one knows", as the modern English word "conscious" does. In its earliest uses in the 1500s, the English word "conscious" retained the meaning of the Latin conscius. For example, Thomas Hobbes in Leviathan wrote: "Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another." The Latin phrase conscius sibi, whose meaning was more closely related to the current concept of consciousness, was rendered in English as "conscious to oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of "being so conscious unto myself of my great weakness". Locke's definition from 1690 illustrates that a gradual shift in meaning had taken place. A related word was conscientia, which primarily means moral conscience. 
In the literal sense, "conscientia" means knowledge-with, that is, shared knowledge. The word first appears in Latin juridical texts by writers such as Cicero. Here, conscientia is the knowledge that a witness has of the deed of someone else. René Descartes (1596–1650) is generally taken to be the first philosopher to use conscientia in a way that does not fit this traditional meaning. Descartes used conscientia the way modern speakers would use "conscience". In Search after Truth (, Amsterdam 1701) he says "conscience or internal testimony" (conscientiâ, vel interno testimonio). The problem of definition The dictionary definitions of the word consciousness extend through several centuries and reflect a range of seemingly related meanings, with some differences that have been controversial, such as the distinction between 'inward awareness' and 'perception' of the physical world, or the distinction between 'conscious' and 'unconscious', or the notion of a "mental entity" or "mental activity" that is not physical. The common usage definitions of consciousness in Webster's Third New International Dictionary (1966 edition, Volume 1, page 482) are as follows: awareness or perception of an inward psychological or spiritual fact; intuitively perceived knowledge of something in one's inner self inward awareness of an external object, state, or fact concerned awareness; INTEREST, CONCERN—often used with an attributive noun [e.g. class consciousness] the state or activity that is characterized by sensation, emotion, volition, or thought; mind in the broadest possible sense; something in nature that is distinguished from the physical the totality in psychology of sensations, perceptions, ideas, attitudes, and feelings of which an individual or a group is aware at any given time or within a particular time span—compare STREAM OF CONSCIOUSNESS waking life (as that to which one returns after sleep, trance, fever) wherein all one's mental powers have returned . . . the part of mental life or psychic content in psychoanalysis that is immediately available to the ego—compare PRECONSCIOUS, UNCONSCIOUS The Cambridge Dictionary defines consciousness as "the state of understanding and realizing something." The Oxford Living Dictionary defines consciousness as "The state of being aware of and responsive to one's surroundings.", "A person's awareness or perception of something." and "The fact of awareness by the mind of itself and the world." Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The Routledge Encyclopedia of Philosophy in 1998 defines consciousness as follows: Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness. In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland expressed a skeptical attitude more than a definition: A partisan definition such as Sutherland's can hugely affect researchers' assumptions and the direction of their work: Many philosophers have argued that consciousness is a unitary concept that is understood by the majority of people despite the difficulty philosophers have had defining it. 
Others, though, have argued that the level of disagreement about the meaning of the word indicates that it either means different things to different people (for instance, the objective versus subjective aspects of consciousness), that it encompasses a variety of distinct meanings with no simple element in common, or that we should eliminate this concept from our understanding of the mind, a position known as consciousness semanticism. Inter-disciplinary perspectives Western philosophers since the time of Descartes and Locke have struggled to comprehend the nature of consciousness and how it fits into a larger picture of the world. These questions remain central to both continental and analytic philosophy, in phenomenology and the philosophy of mind, respectively. Consciousness has also become a significant topic of interdisciplinary research in cognitive science, involving fields such as psychology, linguistics, anthropology, neuropsychology and neuroscience. The primary focus is on understanding what it means biologically and psychologically for information to be present in consciousness—that is, on determining the neural and psychological correlates of consciousness. In medicine, consciousness is assessed by observing a patient's arousal and responsiveness, and can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli. Issues of practical concern include how the presence of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted. The degree of consciousness is measured by standardized behavior observation scales such as the Glasgow Coma Scale. Philosophy of mind Most writers on the philosophy of consciousness have been concerned with defending a particular point of view, and have organized their material accordingly. For surveys, the most common approach is to follow a historical path by associating stances with the philosophers who are most strongly associated with them, for example, Descartes, Locke, Kant, etc. An alternative is to organize philosophical stances according to basic issues. Coherence of the concept Philosophers differ from non-philosophers in their intuitions about what consciousness is. While most people have a strong intuition for the existence of what they refer to as consciousness, skeptics argue that this intuition is false, either because the concept of consciousness is intrinsically incoherent, or because our intuitions about it are based in illusions. Gilbert Ryle, for example, argued that traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of individuals, or persons, acting in the world. Thus, by speaking of "consciousness" we end up misleading ourselves by thinking that there is any sort of thing as consciousness separated from behavioral and linguistic understandings. Types Ned Block argued that discussions on consciousness often failed to properly distinguish phenomenal (P-consciousness) from access (A-consciousness), though these terms had been used before Block. P-consciousness, according to Block, is raw experience: it is moving, colored forms, sounds, sensations, emotions and feelings with our bodies and responses at the center. 
These experiences, considered independently of any impact on behavior, are called qualia. A-consciousness, on the other hand, is the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior. So, when we perceive, information about what we perceive is access conscious; when we introspect, information about our thoughts is access conscious; when we remember, information about the past is access conscious, and so on. Although some philosophers, such as Daniel Dennett, have disputed the validity of this distinction, others have broadly accepted it. David Chalmers has argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of consciousness. Some philosophers believe that Block's two types of consciousness are not the end of the story. William Lycan, for example, argued in his book Consciousness and Experience that at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousness of; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness)—and that even this list omits several more obscure forms. There is also debate over whether or not A-consciousness and P-consciousness always coexist or if they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a "zombie" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical concluding "I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility." Distinguishing consciousness from its contents Sam Harris observes: "At the level of your experience, you are not a body of cells, organelles, and atoms; you are consciousness and its ever-changing contents". Seen in this way, consciousness is a subjectively experienced, ever-present field in which things (the contents of consciousness) come and go. Christopher Tricker argues that this field of consciousness is symbolized by the mythical bird that opens the Daoist classic the Zhuangzi. This bird’s name is Of a Flock (peng 鵬), yet its back is countless thousands of miles across and its wings are like clouds arcing across the heavens. "Like Of a Flock, whose wings arc across the heavens, the wings of your consciousness span to the horizon. At the same time, the wings of every other being’s consciousness span to the horizon. You are of a flock, one bird among kin." Mind–body problem Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated, however the specific nature of the connection is unknown. The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as Cartesian dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension). He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland. 
Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed. However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes's rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics) and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought. Since the dawn of Newtonian science with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine). His arguments, however, were very abstract. The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch, have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness. A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness. Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories have been confirmed by experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H. Briegel could falsify proposals such as those of Hameroff, which rely on quantum entanglement in protein. 
At the present time many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing. Apart from the general question of the "hard problem" of consciousness (which is, roughly speaking, the question of how mental experience can arise from a physical basis), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum. Problem of other minds Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. But if consciousness is subjective and not visible from the outside, why do the vast majority of people believe that other people are conscious, but rocks and trees are not? This is called the problem of other minds. It is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness. Related issues have also been studied extensively by Greg Littmann of the University of Illinois, and by Colin Allen (a professor at the University of Pittsburgh) regarding the literature and research studying artificial intelligence in androids. The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do. There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe. Some philosophers, such as Daniel Dennett in a research paper titled "The Unimagined Preposterousness of Zombies", argue that people who give this explanation do not really understand what they are saying. More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences. Scientific study For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods. In 1975 George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel and extensive unconscious ones. The Science and Religion Forum 1984 annual conference, 'From Artificial Intelligence to Human Consciousness' identified the nature of consciousness as a matter for investigation; Donald Michie was a keynote speaker. 
Starting in the 1980s, an expanding community of neuroscientists and psychologists have associated themselves with a field called Consciousness Studies, giving rise to a stream of experimental work published in books, journals such as Consciousness and Cognition, Frontiers in Consciousness Research, Psyche, and the Journal of Consciousness Studies, along with regular conferences organized by groups such as the Association for the Scientific Study of Consciousness and the Society for Consciousness Studies. Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation of priming effects using subliminal stimuli), and on case studies of alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired. In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it. Measurement Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness. For example, subjects who stare continuously at a Necker cube usually report that they experience it "flipping" between two 3D configurations, even though the stimulus itself remains the same. The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique of response priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness, and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification (double dissociation). Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues. For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected. Daniel Dennett has argued for an approach he calls heterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted. Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. 
As a third issue, philosophers who dispute the validity of the Turing test may feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness. Although verbal report is in practice the "gold standard" for ascribing consciousness, it is not the only possible criterion. In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent. The scientific literature regarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness. Studies of the neuroscience of free will have also shown that the experiences that people report when they behave purposefully sometimes do not correspond to their actual behaviors or to the patterns of electrical activity recorded from their brains. Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. The test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves. Humans (older than 18 months) and other great apes, bottlenose dolphins, orcas, pigeons, European magpies and elephants have all been observed to pass this test. Neural correlates A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find that activity in a particular part of the brain, or a particular pattern of global brain activity, which will be strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies. Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience. Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations. 
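The gamma-band proposals above are typically assessed by estimating oscillatory power in roughly the 30-100 Hz range of an EEG or local field potential recording. The sketch below is a minimal illustration of such a band-power estimate, not the pipeline of any study cited here; the sampling rate, band edges, and synthetic test signal are all assumptions made for the example.

```python
# Illustrative sketch: estimate gamma-band (~30-100 Hz) power from a 1-D signal.
# The sampling rate, band edges, and synthetic trace below are assumptions for
# the example, not parameters taken from any study cited in the text.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band=(30.0, 100.0)):
    """Integrate the Welch power spectral density over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

if __name__ == "__main__":
    fs = 500.0                                  # assumed sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    # Synthetic trace: a 40 Hz ("gamma") component buried in broadband noise.
    x = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(scale=1.0, size=t.size)
    print(f"gamma-band power: {band_power(x, fs):.4f}")
    print(f"alpha-band power: {band_power(x, fs, band=(8.0, 12.0)):.4f}")
```

A measure like this only quantifies how much gamma-band activity is present; whether elevated gamma power or synchrony actually indexes conscious binding, as the proposals above suggest, remains an open empirical question.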
A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as the primary visual cortex (V1) show clear electrical responses to a stimulus. Higher brain areas are seen as more promising, especially the prefrontal cortex, which is involved in a range of higher cognitive functions collectively known as executive functions. There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a "bottom-up" flow of activity. The prefrontal cortex is not the only candidate area, however: studies by Nikos Logothetis and his colleagues have shown, for example, that visually responsive neurons in parts of the temporal lobe reflect the visual perception in the situation when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry). Furthermore, top-down feedback from higher to lower visual brain areas may be weaker or absent in the peripheral visual field, as suggested by some experimental data and theoretical arguments; nevertheless humans can perceive visual inputs in the peripheral visual field arising from bottom-up V1 neural activities. Meanwhile, bottom-up V1 activities for the central visual fields can be vetoed, and thus made invisible to perception, by the top-down feedback, when these bottom-up signals are inconsistent with the brain's internal model of the visual world. Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in the subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony. An fMRI investigation suggested that these findings were strictly limited to the primary visual areas. This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some type of qualia. In 2013, the perturbational complexity index (PCI) was proposed, a measure of the algorithmic complexity of the electrophysiological response of the cortex to transcranial magnetic stimulation. This measure was shown to be higher in individuals that are awake, in REM sleep or in a locked-in state than in those who are in deep sleep or in a vegetative state, making it potentially useful as a quantitative assessment of consciousness states. Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious—a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds—there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain. So how similar are they? 
What homologs can be identified? The general conclusion from the study by Butler, et al., is that some of the major theories for the mammalian brain also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories of Crick and Koch, Edelman and Tononi, and Cotterill seem to be compatible with the assumption that birds are conscious. Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity. Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role, seems difficult to apply to the avian brain, since the avian homologs have a different morphology. Likewise, the theory of Eccles seems incompatible, since a structural homolog/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists. Joaquin Fuster of UCLA has advocated the position of the importance of the prefrontal cortex in humans, along with the areas of Wernicke and Broca, as being of particular importance to the development of human language capacities neuro-anatomically necessary for the emergence of higher-order consciousness in humans. A study in 2016 looked at lesions in specific areas of the brainstem that were associated with coma and vegetative states. A small region of the rostral dorsolateral pontine tegmentum in the brainstem was suggested to drive consciousness through functional connectivity with two cortical regions, the left ventral anterior insular cortex, and the pregenual anterior cingulate cortex. These three regions may work together as a triad to maintain consciousness. Models A wide range of empirical theories of consciousness have been proposed. Adrian Doerig and colleagues list 13 notable theories, while Anil Seth and Tim Bayne list 22 notable theories. Global workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988. Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage. This theater integrates inputs from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit "audience"). The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene and Lionel Naccache. Integrated information theory (IIT) postulates that consciousness resides in the information being processed and arises once the information reaches a certain level of complexity. Proponents of this model suggest that it may provide a physical grounding for consciousness in neurons, as they provide the mechanism by which information is integrated. Orchestrated objective reduction (Orch OR) postulates that consciousness originates at the quantum level inside neurons. 
The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules. However the details of the mechanism would go beyond current quantum theory. In 2011, Graziano and Kastner proposed the "attention schema" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself. Damage to these cortical regions can lead to deficits in consciousness such as hemispatial neglect. In the attention schema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing. Attention is a style of information processing in which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X. The entropic brain is a theory of conscious states informed by neuroimaging research with psychedelic drugs. The theory suggests that the brain in primary states such as rapid eye movement (REM) sleep, early psychosis and under the influence of psychedelic drugs, is in a disordered state; normal waking consciousness constrains some of this freedom and makes possible metacognitive functions such as internal self-administered reality testing and self-awareness. Criticism has included questioning whether the theory has been adequately tested. In 2017, work by David Rudrauf and colleagues, including Karl Friston, applied the active inference paradigm to consciousness, a model of how sensory data is integrated with priors in a process of projective transformation. The authors argue that, while their model identifies a key relationship between computation and phenomenology, it does not completely solve the hard problem of consciousness or completely close the explanatory gap. Biological function and evolution Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has any survival value. Some argue that consciousness is a byproduct of evolution. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Each of these scenarios raises the question of the possible survival value of consciousness. Thomas Henry Huxley defends in an essay titled On the Hypothesis that Animals are Automata, and its History an epiphenomenalist theory of consciousness according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". To this William James objects in his essay Are We Automata? 
by stating an evolutionary argument for mind-brain interaction, implying that if the preservation and development of consciousness in biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes but has had a survival value itself; and it could only have had this if it had been efficacious. Karl Popper develops a similar evolutionary argument in the book The Self and Its Brain. Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent. This has been called the integration consensus. Another example is Gerald Edelman's dynamic core hypothesis, which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner. Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness, which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Obviously not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.) and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect. Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article by Ezequiel Morsella. As noted earlier, even among writers who consider consciousness to be well-defined, there is widespread dispute about which animals other than humans can be said to possess it. Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness. In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics"). 
Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms. Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality. This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them for both cooperative and devious ends. Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes. No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide. As a result, an exaptive explanation of consciousness has gained favor with some theorists that posit consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments such as increases in brain size or cortical rearrangement. Consciousness in this sense has been compared to the blind spot in the retina where it is not an adaption of the retina, but instead just a by-product of the way the retinal axons were wired. Several scholars including Pinker, Chomsky, Edelman, and Luria have indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness (see Neural correlates section above). Altered states There are some brain states in which consciousness seems to be absent, including dreamless sleep or coma. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage. Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alternations in body image and changes in meaning or significance. The two most widely accepted altered states are sleep and dreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions. Thought processes during the dream state frequently show a high level of irrationality. 
Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed. Research conducted on the effects of partial epileptic seizures on consciousness found that patients who have partial epileptic seizures experience altered states of consciousness. In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies found that when measuring the qualitative features during partial epileptic seizures, patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention. A variety of psychoactive drugs, including alcohol, have notable effects on consciousness. These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA ("Ecstasy"), or most notably by the class of drugs known as psychedelics. LSD, mescaline, psilocybin, dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol, but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role. There has been some research into physiological changes in yogis and people who practise various techniques of meditation. Some research with brain waves during meditation has reported differences between those corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness. The most extensive study of the characteristics of altered states of consciousness was made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment. Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness: experience of unity; spiritual experience; blissful state; insightfulness; disembodiment; impaired control and cognition; anxiety; complex imagery; elementary imagery; audio-visual synesthesia; and changed meaning of percepts. Medical aspects The medical approach to consciousness is scientifically oriented. It derives from a need to treat people whose brain function has been impaired as a result of disease, brain damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to the degree that they can help to guide treatments. 
The medical approach focuses mostly on the amount of consciousness a person has: in medicine, consciousness is assessed as a "level" ranging from coma and brain death at the low end, to full alertness and purposeful responsiveness at the high end. Consciousness is of concern to patients and physicians, especially neurologists and anesthesiologists. Patients may have disorders of consciousness or may need to be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administering general anesthesia, or inducing medical coma. Also, bioethicists may be concerned with the ethical implications of consciousness in medical cases of patients such as the Karen Ann Quinlan case, while neuroscientists may study patients with impaired consciousness in hopes of gaining information about how the brain works. Assessment In medicine, consciousness is examined using a set of procedures known as neuropsychological assessment. There are two commonly used methods for assessing the level of consciousness of a patient: a simple procedure that requires minimal training, and a more complex procedure that requires substantial expertise. The simple procedure begins by asking whether the patient is able to move and react to physical stimuli. If so, the next question is whether the patient can respond in a meaningful way to questions and commands. If so, the patient is asked for name, current location, and current day and time. A patient who can answer all of these questions is said to be "alert and oriented times four" (sometimes denoted "A&Ox4" on a medical chart), and is usually considered fully conscious. The more complex procedure is known as a neurological examination, and is usually carried out by a neurologist in a hospital setting. A formal neurological examination runs through a precisely delineated series of tests, beginning with tests for basic sensorimotor reflexes, and culminating with tests for sophisticated use of language. The outcome may be summarized using the Glasgow Coma Scale, which yields a number in the range 3–15, with a score of 3 to 8 indicating coma, and 15 indicating full consciousness. The Glasgow Coma Scale has three subscales, measuring the best motor response (ranging from "no motor response" to "obeys commands"), the best eye response (ranging from "no eye opening" to "eyes opening spontaneously") and the best verbal response (ranging from "no verbal response" to "fully oriented"). There is also a simpler pediatric version of the scale, for children too young to be able to use language. In 2013, an experimental procedure was developed to measure degrees of consciousness, the procedure involving stimulating the brain with a magnetic pulse, measuring resulting waves of electrical activity, and developing a consciousness score based on the complexity of the brain activity. Disorders Medical conditions that inhibit consciousness are considered disorders of consciousness. This category generally includes minimally conscious state and persistent vegetative state, but sometimes also includes the less severe locked-in syndrome and more severe chronic coma. Differential diagnosis of these disorders is an active area of biomedical research. Finally, brain death results in possible irreversible disruption of consciousness. 
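As a concrete illustration of the Glasgow Coma Scale arithmetic described above, the sketch below sums the three subscales (eye response 1-4, verbal response 1-5, motor response 1-6) into a total between 3 and 15. The function names and the coarse interpretation bands are conventions chosen for the example; only the subscale ranges and the reading of 3-8 as coma and 15 as full consciousness come from the text.

```python
# Illustrative sketch of Glasgow Coma Scale arithmetic: three subscales summed
# to a total of 3-15. Names are illustrative; the score ranges (eye 1-4,
# verbal 1-5, motor 1-6) follow the scale described in the text.
def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("subscale score out of range")
    return eye + verbal + motor

def interpret(total: int) -> str:
    # Coarse bands as characterized in the text: 3-8 is commonly read as coma,
    # 15 as full consciousness; intermediate values indicate impairment.
    if total <= 8:
        return "severe impairment (commonly read as coma)"
    if total == 15:
        return "fully conscious"
    return "impaired consciousness"

if __name__ == "__main__":
    total = glasgow_coma_scale(eye=4, verbal=5, motor=6)
    print(total, interpret(total))   # 15 fully conscious
```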
While other conditions may cause a moderate deterioration (e.g., dementia and delirium) or transient interruption (e.g., grand mal and petit mal seizures) of consciousness, they are not included in this category. Medical experts increasingly view anosognosia as a disorder of consciousness. Anosognosia is a Greek-derived term meaning "unawareness of disease". This is a condition in which patients are disabled in some way, most commonly as a result of a stroke, but either misunderstand the nature of the problem or deny that there is anything wrong with them. The most frequently occurring form is seen in people who have experienced a stroke damaging the parietal lobe in the right hemisphere of the brain, giving rise to a syndrome known as hemispatial neglect, characterized by an inability to direct action or attention toward objects located to the left with respect to their bodies. Patients with hemispatial neglect are often paralyzed on the left side of the body, but sometimes deny being unable to move. When questioned about the obvious problem, the patient may avoid giving a direct answer, or may give an explanation that does not make sense. Patients with hemispatial neglect may also fail to recognize paralyzed parts of their bodies: one frequently mentioned case is of a man who repeatedly tried to throw his own paralyzed right leg out of the bed he was lying in, and when asked what he was doing, complained that somebody had put a dead leg into the bed with him. An even more striking type of anosognosia is Anton–Babinski syndrome, a rarely occurring condition in which patients become blind but claim to be able to see normally, and persist in this claim in spite of all evidence to the contrary. Outside human adults In children Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth. Psychologist and educator William Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven, children lack the Lockean consciousness that Lycan had labeled "introspective consciousness" and that Foulkes labels "self-reflection." In a 2020 paper, Katherine Nelson and Robyn Fivush use "autobiographical consciousness" to label essentially the same faculty, and agree with Foulkes on the timing of this faculty's acquisition. Nelson and Fivush contend that "language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness." Julian Jaynes had staked out these positions decades earlier. Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of "theory of mind," calling theory of mind "necessary for autobiographical consciousness" and defining it as "understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts." They write, "The hallmark of theory of mind, the understanding of false belief, occurs ... at five to six years of age." In animals The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences. 
Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind. Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed. Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled What Is it Like to Be a Bat?. He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself. Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence. On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge to celebrate the Francis Crick Memorial Conference, which deals with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed in the presence of Stephen Hawking, the 'Cambridge Declaration on Consciousness', which summarizes the most important findings of the survey: "We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society." "Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors." In artificial intelligence The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay. However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote: One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence. 
Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test. To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious. A third group of scholars have argued that, as technology advances, once machines begin to display any substantial signs of human-like behavior the dichotomy (of human consciousness compared to human-like consciousness) becomes passé and issues of machine autonomy begin to prevail, as can already be observed in nascent form within contemporary industry and technology. Jürgen Schmidhuber argues that consciousness is the result of compression: as an agent sees representations of itself recurring in the environment, the compression of this representation can be called consciousness. In a lively exchange over what has come to be referred to as "the Chinese room argument", John Searle sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a book that designates a combination of Chinese symbols to be output paired with Chinese symbol input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he would be conscious of what he is doing only when speaking English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that syntax cannot lead to semantic meaning in the way strong AI advocates hoped. In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated. 
Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but the robotic version, which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition. In 2014, Victor Argonov has suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, but not refute the existence of consciousness. A positive result proves that a machine is conscious but a negative result proves nothing. For example, absence of philosophical judgments may be caused by lack of the machine's intellect, not by absence of consciousness. Stream of consciousness William James is usually credited with popularizing the idea that human consciousness flows like a stream, in his Principles of Psychology of 1890. According to James, the "stream of thought" is governed by five characteristics: Every thought tends to be part of a personal consciousness. Within each personal consciousness thought is always changing. Within each personal consciousness thought is sensibly continuous. It always appears to deal with objects independent of itself. It is interested in some parts of these objects to the exclusion of others. A similar concept appears in Buddhist philosophy, expressed by the Sanskrit term Citta-saṃtāna, which is usually translated as mindstream or "mental continuum". Buddhist teachings describe that consciousness manifests moment to moment as sense impressions and mental phenomena that are continuously changing. The teachings list six triggers that can result in the generation of different mental events. These triggers are input from the five senses (seeing, hearing, smelling, tasting or touch sensations), or a thought (relating to the past, present or the future) that happen to arise in the mind. The mental events generated as a result of these triggers are: feelings, perceptions and intentions/behaviour. The moment-by-moment manifestation of the mind-stream is said to happen in every person all the time. It even happens in a scientist who analyzes various phenomena in the world, or analyzes the material body including the organ brain. The manifestation of the mindstream is also described as being influenced by physical laws, biological laws, psychological laws, volitional laws, and universal laws. The purpose of the Buddhist practice of mindfulness is to understand the inherent nature of the consciousness and its characteristics. 
Narrative form In the West, the primary impact of the idea has been on literature rather than science: "stream of consciousness as a narrative mode" means writing in a way that attempts to portray the moment-to-moment thoughts and experiences of a character. This technique perhaps had its beginnings in the monologs of Shakespeare's plays and reached its fullest development in the novels of James Joyce and Virginia Woolf, although it has also been used by many other noted writers. Here, for example, is a passage from Joyce's Ulysses about the thoughts of Molly Bloom: Spiritual approaches To most philosophers, the word "consciousness" connotes the relationship between the mind and the world. To writers on spiritual or religious topics, it frequently connotes the relationship between the mind and God, or the relationship between the mind and deeper truths that are thought to be more fundamental than the physical world. The mystical psychiatrist Richard Maurice Bucke, author of the 1901 book Cosmic Consciousness: A Study in the Evolution of the Human Mind, distinguished between three types of consciousness: 'Simple Consciousness', awareness of the body, possessed by many animals; 'Self Consciousness', awareness of being aware, possessed only by humans; and 'Cosmic Consciousness', awareness of the life and order of the universe, possessed only by humans who are enlightened. Many more examples could be given, such as the various levels of spiritual consciousness presented by Prem Saran Satsangi and Stuart Hameroff. Another thorough account of the spiritual approach is Ken Wilber's 1977 book The Spectrum of Consciousness, a comparison of western and eastern ways of thinking about the mind. Wilber described consciousness as a spectrum with ordinary awareness at one end, and more profound types of awareness at higher levels. See also Chaitanya (consciousness): Pure consciousness in Hindu philosophy. Models of consciousness: Ideas for a scientific mechanism underlying consciousness. Plant perception (paranormal): A pseudoscientific theory. Sakshi (Witness): Pure awareness in Hindu philosophy. Vertiginous question: On the uniqueness of a person's consciousness. Reality References Further reading External links Cognitive neuroscience Cognitive psychology Concepts in epistemology Metaphysical properties Concepts in the philosophy of mind Concepts in the philosophy of science Emergence Mental processes Metaphysics of mind Neuropsychological assessment Ontology Phenomenology Theory of mind
5665
https://en.wikipedia.org/wiki/Currency
Currency
A currency is a standardization of money in any form, in use or circulation as a medium of exchange, for example banknotes and coins. A more general definition is that a currency is a system of money in common use within a specific environment over time, especially for people in a nation state. Under this definition, the British Pound Sterling (£), euros (€), Japanese yen (¥), and U.S. dollars (US$) are examples of (government-issued) fiat currencies. Currencies may act as stores of value and be traded between nations in foreign exchange markets, which determine the relative values of the different currencies. Currencies in this sense are either chosen by users or decreed by governments, and each type has limited boundaries of acceptance; i.e., legal tender laws may require a particular unit of account for payments to government agencies. Other definitions of the term "currency" appear in the respective synonymous articles: banknote, coin, and money. This article uses the definition which focuses on the currency systems of countries. One can classify currencies into three monetary systems: fiat money, commodity money, and representative money, depending on what guarantees a currency's value (the economy at large vs. the government's physical metal reserves). Some currencies function as legal tender in certain jurisdictions, or for specific purposes, such as payment to a government (taxes), or government agencies (fees, fines). Others simply get traded for their economic value. The concept of digital currencies has arisen in recent years. Whether government-backed digital notes and coins (such as the digital renminbi in China, for example) will be successfully developed and implemented remains unknown. Digital currencies that are not issued by a government monetary authority, such as cryptocurrencies like Bitcoin, are different because their value is market-dependent and has no safety net. Various countries have expressed concern about the opportunities that cryptocurrencies create for illegal activities such as scams, ransomware (extortion), money laundering and terrorism. In 2014, the United States IRS advised that virtual currency is treated as property for Federal income-tax purposes, and it provides examples of how long-standing tax principles applicable to transactions involving property apply to virtual currency. History Early currency Originally, currency was a form of receipt, representing grain stored in temple granaries in Sumer in ancient Mesopotamia and in Ancient Egypt. In this first stage of currency, metals were used as symbols to represent value stored in the form of commodities. This formed the basis of trade in the Fertile Crescent for over 1500 years. However, the collapse of the Near Eastern trading system pointed to a flaw: in an era where there was no place that was safe to store value, the value of a circulating medium could only be as sound as the forces that defended that store. A trade could only reach as far as the credibility of that military. By the late Bronze Age, however, a series of treaties had established safe passage for merchants around the Eastern Mediterranean, spreading from Minoan Crete and Mycenae in the northwest to Elam and Bahrain in the southeast. It is not known what was used as a currency for these exchanges, but it is thought that oxhide-shaped ingots of copper, produced in Cyprus, may have functioned as a currency. 
It is thought that the increase in piracy and raiding associated with the Bronze Age collapse, possibly produced by the Peoples of the Sea, brought the trading system of oxhide ingots to an end. It was only the recovery of Phoenician trade in the 10th and 9th centuries BC that led to a return to prosperity, and the appearance of real coinage, possibly first in Anatolia with Croesus of Lydia and subsequently with the Greeks and Persians. In Africa, many forms of value store have been used, including beads, ingots, ivory, various forms of weapons, livestock, the manilla currency, and ochre and other earth oxides. The manilla rings of West Africa were one of the currencies used from the 15th century onwards to sell slaves. African currency is still notable for its variety, and in many places, various forms of barter still apply. Coinage The prevalence of metal coins possibly led to the metal itself being the store of value: first copper, then both silver and gold, and at one point also bronze. Today other non-precious metals are used for coins. Metals were mined, weighed, and stamped into coins. This was to assure the individual accepting the coin that he was getting a certain known weight of precious metal. Coins could be counterfeited, but the existence of standard coins also created a new unit of account, which helped lead to banking. Archimedes' principle provided the next link: coins could now be easily tested for their fine weight of the metal, and thus the value of a coin could be determined, even if it had been shaved, debased or otherwise tampered with (see Numismatics). Most major economies using coinage had several tiers of coins of different values, made of copper, silver, and gold. Gold coins were the most valuable and were used for large purchases, payment of the military, and backing of state activities. Units of account were often defined as the value of a particular type of gold coin. Silver coins were used for midsized transactions, and sometimes also defined a unit of account, while coins of copper or silver, or some mixture of them (see debasement), might be used for everyday transactions. This system had been used in ancient India since the time of the Mahajanapadas. The exact ratios between the values of the three metals varied greatly between different eras and places; for example, the opening of silver mines in the Harz mountains of central Europe made silver relatively less valuable, as did the flood of New World silver after the Spanish conquests. However, the rarity of gold consistently made it more valuable than silver, and likewise silver was consistently worth more than copper. Paper money In premodern China, the need for lending and for a medium of exchange that was less physically cumbersome than large numbers of copper coins led to the introduction of paper money, i.e. banknotes. Their introduction was a gradual process that lasted from the late Tang dynasty (618–907) into the Song dynasty (960–1279). It began as a means for merchants to exchange heavy coinage for receipts of deposit issued as promissory notes by wholesalers' shops. These notes were valid for temporary use in a small regional territory. In the 10th century, the Song dynasty government began to circulate these notes amongst the traders in its monopolized salt industry. The Song government granted several shops the right to issue banknotes, and in the early 12th century the government finally took over these shops to produce state-issued currency. 
Yet the banknotes issued were still only locally and temporarily valid: it was not until the mid 13th century that a standard and uniform government issue of paper money became an acceptable nationwide currency. The already widespread methods of woodblock printing and then Bi Sheng's movable type printing by the 11th century were the impetus for the mass production of paper money in premodern China. At around the same time in the medieval Islamic world, a vigorous monetary economy was created during the 7th–12th centuries on the basis of the expanding levels of circulation of a stable high-value currency (the dinar). Innovations introduced by Muslim economists, traders and merchants include the earliest uses of credit, cheques, promissory notes, savings accounts, transaction accounts, loaning, trusts, exchange rates, the transfer of credit and debt, and banking institutions for loans and deposits. In Europe, paper currency was first introduced on a regular basis in Sweden in 1661 (although Washington Irving records an earlier emergency use of it, by the Spanish in a siege during the Conquest of Granada). As Sweden was rich in copper, many copper coins were in circulation, but its relatively low value necessitated extraordinarily big coins, often weighing several kilograms. The advantages of paper currency were numerous: it reduced the need to transport gold and silver, which was risky; it facilitated loans of gold or silver at interest, since the underlying specie (money in the form of gold or silver coins rather than notes) never left the possession of the lender until someone else redeemed the note; and it allowed a division of currency into credit- and specie-backed forms. It enabled the sale of investment in joint-stock companies and the redemption of those shares in a paper. But there were also disadvantages. First, since a note has no intrinsic value, there was nothing to stop issuing authorities from printing more notes than they had specie to back them with. Second, because this increased the money supply, it increased inflationary pressures, a fact observed by David Hume in the 18th century. Thus paper money would often lead to an inflationary bubble, which could collapse if people began demanding hard money, causing the demand for paper notes to fall to zero. The printing of paper money was also associated with wars, and financing of wars, and therefore regarded as part of maintaining a standing army. For these reasons, paper currency was held in suspicion and hostility in Europe and America. It was also addictive since the speculative profits of trade and capital creation were quite large. Major nations established mints to print money and mint coins, and branches of their treasury to collect taxes and hold gold and silver stock. At that time, both silver and gold were considered a legal tender and accepted by governments for taxes. However, the instability in the exchange rate between the two grew over the course of the 19th century, with the increases both in the supply of these metals, particularly silver, and in trade. The parallel use of both metals is called bimetallism, and the attempt to create a bimetallic standard where both gold and silver backed currency remained in circulation occupied the efforts of inflationists. Governments at this point could use currency as an instrument of policy, printing paper currency such as the United States greenback, to pay for military expenditures. 
They could also set the terms at which they would redeem notes for specie, by limiting the amount of purchase, or the minimum amount that could be redeemed. By 1900, most of the industrializing nations were on some form of gold standard, with paper notes and silver coins constituting the circulating medium. Private banks and governments across the world followed Gresham's law: keeping the gold and silver they received but paying out in notes. This did not happen all around the world at the same time, but occurred sporadically, generally in times of war or financial crisis, beginning in the early 20th century and continuing across the world until the late 20th century, when the regime of floating fiat currencies came into force. One of the last countries to break away from the gold standard was the United States in 1971, an action known as the Nixon shock. Today no country has an enforceable gold standard or silver standard currency system. Banknote era A banknote (or bill) is a type of currency commonly used as legal tender in many jurisdictions. Together with coins, banknotes make up the cash form of a currency. Banknotes were initially mostly paper, but Australia's Commonwealth Scientific and Industrial Research Organisation developed a polymer currency in the 1980s; it went into circulation on the nation's bicentenary in 1988. Polymer banknotes had already been introduced in the Isle of Man in 1983. Polymer currency is now used in over 20 countries (over 40 if counting commemorative issues); it dramatically increases the life span of banknotes and reduces counterfeiting. Modern currencies The currency in use is based on the concept of lex monetae: that a sovereign state decides which currency it shall use. (See Fiat currency.) Currency codes and currency symbols In 1978 the International Organization for Standardization published a system of three-letter alphabetic codes (ISO 4217) to denote currencies. These codes are based on two initial letters allocated to a specific country and a final letter denoting a specific monetary unit of account. Many currencies use a currency symbol. These are not subject to international standards and are not unique: the dollar sign in particular has many uses. Alternative currencies Distinct from centrally controlled government-issued currencies, private decentralized trust-reduced networks support alternative currencies (such as Bitcoin and Ethereum's ether), which are classified as cryptocurrencies since transference transactions are assured through cryptographic signatures validated by all users. With few exceptions, these currencies are not asset backed. The U.S. Commodity Futures Trading Commission has declared Bitcoin (and, by extension, similar products) to be a commodity under the Commodity Exchange Act. There are also branded currencies, for example 'obligation'-based stores of value, such as the quasi-regulated BarterCard, loyalty points (credit cards, airlines) or game credits (MMO games), which are based on the reputation of commercial products. Historically, pseudo-currencies have also included company scrip, a form of wages that could only be exchanged in company stores owned by the employers. Modern token money, such as the tokens operated by local exchange trading systems (LETS), is a form of barter rather than a true currency. A currency may also be Internet-based and digital: Bitcoin, for instance, is not tied to any specific country, and the IMF's SDR is based on a basket of currencies (and assets held). 
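The ISO 4217 scheme described above, two initial letters identifying the country and a final letter abbreviating the monetary unit, can be illustrated with a few well-known codes. The lookup table below is a small hand-picked sample assumed for the example, not an excerpt from the standard; supranational units such as the IMF's SDR use the reserved "X" prefix (XDR) rather than a country prefix.

```python
# Illustrative sample of ISO 4217 three-letter currency codes: the first two
# letters are a country prefix, the third usually abbreviates the unit.
# The table is a hand-picked sample for the example, not the full standard.
ISO_4217_SAMPLE = {
    "USD": ("US", "dollar"),
    "GBP": ("GB", "pound sterling"),
    "JPY": ("JP", "yen"),
    "CHF": ("CH", "franc"),
    "INR": ("IN", "rupee"),
}

def describe(code: str) -> str:
    country, unit = ISO_4217_SAMPLE[code]
    return f"{code}: country prefix {country!r}, unit {unit!r}"

for code in ISO_4217_SAMPLE:
    print(describe(code))
```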
Possession and sale of alternative forms of currencies is often outlawed by governments in order to preserve the legitimacy of the constitutional currency for the benefit of all citizens. For example, Article I, section 8, clause 5 of the United States Constitution delegates to Congress the power to coin money and to regulate the value thereof. This power was delegated to Congress in order to establish and preserve a uniform standard of value and to ensure a singular monetary system for all purchases and debts in the United States, public and private. Along with the power to coin money, the United States Congress has the concurrent power to restrain the circulation of money which is not issued under its own authority in order to protect and preserve the constitutional currency. It is a violation of federal law for individuals or organizations to create private coin or currency systems to compete with the official coinage and currency of the United States. Control and production In most cases, a central bank has the exclusive power to issue all forms of currency, including coins and banknotes (fiat money), and to restrain the circulation of alternative currencies within its own area of circulation (a country or group of countries); it regulates the production of currency by banks (credit) through monetary policy. An exchange rate is the price at which two currencies can be exchanged against each other. This is used for trade between the two currency zones. Exchange rates can be classified as either floating or fixed. In the former, day-to-day movements in exchange rates are determined by the market; in the latter, governments intervene in the market to buy or sell their currency to balance supply and demand at a static exchange rate. In cases where a country has control of its own currency, that control is exercised either by a central bank or by a Ministry of Finance. The institution that has control of monetary policy is referred to as the monetary authority. Monetary authorities have varying degrees of autonomy from the governments that create them. A monetary authority is created and supported by its sponsoring government, so independence can be reduced by the legislative or executive authority that creates it. Several countries can use the same name for their own separate currencies (for example, a dollar in Australia, Canada, and the United States). By contrast, several countries can also use the same currency (for example, the euro or the CFA franc), or one country can declare the currency of another country to be legal tender. For example, Panama and El Salvador have declared US currency to be legal tender, and from 1791 to 1857, Spanish dollars were legal tender in the United States. At various times countries have either re-stamped foreign coins or used currency boards, issuing one note of currency for each note of a foreign government held, as Ecuador currently does. Each currency typically has a main currency unit (the dollar, for example, or the euro) and a fractional unit, often defined as 1/100 of the main unit: 100 cents = 1 dollar, 100 centimes = 1 franc, 100 pence = 1 pound, although units of 1/10 or 1/1000 occasionally also occur. Some currencies do not have any smaller units at all, such as the Icelandic króna and the Japanese yen. 
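As a brief numerical illustration of how main units, fractional units and exchange rates interact (the exchange rate used here is a hypothetical figure, not a quoted market rate): an amount of 250 euro cents is 250 / 100 = 2.50 euros, and at an assumed rate of 1 euro = 1.10 US dollars this converts to 2.50 × 1.10 = 2.75 dollars, or 275 cents; a currency with no fractional unit, such as the Japanese yen, is simply quoted and converted in whole main units.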
Mauritania and Madagascar are the only remaining countries that have theoretical fractional units not based on the decimal system; instead, the Mauritanian ouguiya is in theory divided into 5 khoums, while the Malagasy ariary is theoretically divided into 5 iraimbilanja. In these countries, words like dollar or pound "were simply names for given weights of gold". Due to inflation, khoums and iraimbilanja have in practice fallen into disuse. (See non-decimal currencies for other historic currencies with non-decimal divisions.) Currency convertibility Subject to variation around the world, a local currency can be converted to another currency and vice versa, with or without central bank or government intervention. Such conversions take place in the foreign exchange market. Based on such restrictions, or on how freely and readily they can be converted, currencies are classified as: Fully convertible When there are no restrictions or limitations on the amount of currency that can be traded on the international market, and the government does not artificially impose a fixed value or minimum value on the currency in international trade. The US dollar is one of the main fully convertible currencies. Partially convertible Central banks control international investments flowing into and out of a country. While most domestic transactions are handled without any special requirements, there are significant restrictions on international investing, and special approval is often required in order to convert into other currencies. The Indian rupee and the renminbi are examples of partially convertible currencies. Nonconvertible A government neither participates in the international currency market nor allows the conversion of its currency by individuals or companies. These currencies are also known as blocked currencies, e.g. the North Korean won and the Cuban peso. Across three aspects (trade in goods and services, capital flows, and national policies), the supply and demand relationship between different currencies determines the exchange ratio between them. Trade in goods and services Through cost transfer, goods and services circulating within a country (such as hotels, tourism, catering, advertising and household services) indirectly affect the cost of internationally traded goods and services and the price of exports. Goods and services directly involved in international trade are therefore not the only factors affecting the exchange rate. The large number of international tourists and overseas students also results in flows of services and goods between home and abroad, and the competitiveness of a country's goods and services on the global market directly affects changes in international exchange rates. Capital flows National currencies will be traded on international markets for investment purposes. Investment opportunities in each country attract investment from other countries, so that these foreign currencies become reserves of each country's central bank. The exchange rate mechanism, in which currencies are quoted continuously between countries, is based on foreign exchange markets in which currencies are invested in by individuals and traded and speculated on by central banks and investment institutions. In addition, changes in interest rates, capital market fluctuations and changes in investment opportunities will affect the global capital inflows and outflows of countries around the world, and exchange rates will fluctuate accordingly. 
National policies A country's foreign trade, monetary and fiscal policies affect exchange rate fluctuations. Foreign trade policy includes measures such as tariffs and import standards for traded commodities. The impact of monetary policy on the quantity of money and on its yield directly determines changes in the international exchange rate. Fiscal policies, such as transfer payments and taxation ratios, shape the profitability of capital and economic development, while the ratio of national debt issuance to the deficit determines the country's repayment capacity and credit rating. Such policies determine the mechanism linking domestic and foreign currencies and therefore have a significant impact on how exchange rates are generated. Currency convertibility is closely linked to economic development and finance. Achieving currency convertibility requires countries to meet strict conditions, but it is a good way for countries to improve their economies. The currencies of some countries or regions in the world are freely convertible, such as the US dollar, Australian dollar and Japanese yen. The requirements for currency convertibility can be roughly divided into four parts: Sound microeconomic agents With a freely convertible currency, domestic firms have to compete fiercely with their foreign counterparts, and the development of competition among them affects how well convertibility works in practice. In addition, sound microeconomic conditions are a prerequisite for sound macroeconomic conditions. A stable macroeconomic situation and policies Since currency convertibility involves the cross-border flow of goods and capital, it has an impact on the macroeconomy. This requires that the national economy be in a normal and orderly state, that is, without serious inflation or economic overheating. In addition, the government should use macroeconomic policies to make timely adjustments to deal with the impact of currency conversion on the economy. A reasonable and open economy A sustainable international balance of payments is the main indicator of a reasonable economic structure. Currency convertibility not only creates difficulties for the sustainability of the international balance of payments but also reduces the government's direct control over international economic transactions. To eliminate foreign exchange shortages, the government needs adequate international reserves. Appropriate exchange rate regime and level The level of the exchange rate is an important factor in maintaining exchange rate stability, both before and after currency convertibility. An exchange rate that is too high or too low for a freely convertible currency can easily trigger speculation and undermine the stability of the macroeconomy and financial markets. Therefore, a proper exchange rate regime is crucial to maintaining an appropriate exchange rate level. Local currency In economics, a local currency is a currency not backed by a national government and intended to trade only in a small area. Advocates such as Jane Jacobs argue that this enables an economically depressed region to pull itself up, by giving the people living there a medium of exchange that they can use to exchange services and locally produced goods (in a broader sense, this is the original purpose of all money). Opponents of this concept argue that local currency creates a barrier that can interfere with economies of scale and comparative advantage and that in some cases it can serve as a means of tax evasion. 
Local currencies can also come into being when there is economic turmoil involving the national currency. An example of this is the Argentinian economic crisis of 2002, in which IOUs issued by local governments quickly took on some of the characteristics of local currencies. One of the best examples of a local currency is the original LETS currency, founded on Vancouver Island in the early 1980s. In 1982, the Canadian central bank's lending rate ran up to 14%, which drove chartered bank lending rates as high as 19%. The resulting currency and credit scarcity left island residents with few options other than to create a local currency. List of major world payment currencies The following table gives estimates, compiled by SWIFT, of the 20 most frequently used currencies in world payments in September 2023. See also Related concepts Counterfeit money Currency band Currency transaction tax Debasement Exchange rate Fiscal localism Foreign currency exchange Foreign exchange reserves Functional currency History of banking History of money Mutilated currency Optimum currency area Slang terms for money Virtual currency World currency Accounting units Currency pair Currency symbol Currency strength European Currency Unit Fictional currency Franc Poincaré Local currencies Petrocurrency Special drawing rights Lists ISO 4217 List of alternative names for currency List of currencies List of circulating currencies List of proposed currencies List of historical currencies List of historical exchange rates List of international trade topics List of motifs on banknotes Notes References External links Foreign exchange market
5666
https://en.wikipedia.org/wiki/Central%20bank
Central bank
A central bank, reserve bank, or monetary authority is an institution that manages the currency and monetary policy of a country or monetary union. In contrast to a commercial bank, a central bank possesses a monopoly on increasing the monetary base. Many central banks also have supervisory or regulatory powers to ensure the stability of commercial banks in their jurisdiction, to prevent bank runs, and in some cases also to enforce policies on financial consumer protection and against bank fraud, money laundering, or terrorism financing. Central banks in most developed nations are institutionally independent from political interference, even though governments typically have governance rights over them and legislative bodies exercise scrutiny. Issues like central bank independence, central bank policies and rhetoric in central bank governors' discourse, or the premises of macroeconomic policies (monetary and fiscal policy) of the state are a focus of contention and criticism by some policymakers, researchers and specialized business, economics and finance media. Definition The notion of central banks as a separate category from other banks has emerged gradually, and only fully coalesced in the 20th century. In the aftermath of World War I, the leading central bankers of the United Kingdom and the United States, respectively Montagu Norman and Benjamin Strong, agreed on a definition of central banks that was both positive and normative. Since that time, central banks have been generally distinguishable from other financial institutions, except in so-called single-tier communist systems such as Hungary's between 1950 and 1987, where the Hungarian National Bank operated alongside three other major state-owned banks. For earlier periods, what institutions do or do not count as central banks is often not clear-cut. Correlatively, different scholars have held different views about the timeline of emergence of the first central banks. A widely held view in the second half of the 20th century has been that Stockholms Banco (est. 1657), as the original issuer of banknotes, counted as the oldest central bank, and that consequently its successor the Sveriges Riksbank was the oldest central bank in continuous operation, with the Bank of England as second-oldest and direct or indirect model for all subsequent central banks. That view has persisted in some early-21st-century publications. In more recent scholarship, however, the issuance of banknotes has often been viewed as just one of several techniques to provide central bank money, defined as financial money (in contrast to commodity money) of the highest quality. Under that definition, municipal banks of the late medieval and early modern periods, such as the Taula de canvi de Barcelona (est. 1401) or Bank of Amsterdam (est. 1609), issued central bank money and count as early central banks. Naming There is no universal terminology for the name of a central bank. Early central banks were often the only or principal formal financial institution in their jurisdiction, and were consequently often named "bank of" the relevant city's or country's name, e.g. the Bank of Amsterdam, Bank of Hamburg, Bank of England, or Wiener Stadtbank. Naming practices subsequently evolved as more central banks were established. They include, with references to the date when the bank acquired its current name: "Bank of [Country]": e.g. 
Bank of Spain (1782), Bank of the United States (1791), Bank of France (1800), Bank of Java (1828), Bank of Japan (1882), Bank of Italy (1893), Bank of China (1912), Bank of Mexico (1925), Bank of Canada (1934), Bank of Korea (1950). The Bank of England has kept its original name of 1694, even though the Act of Union 1707 and Acts of Union 1800 expanded its remit to the broader United Kingdom. "National Bank": e.g. National Bank of Belgium (1850), Bulgarian National Bank (1879), Swiss National Bank (1907), National Bank of Poland (1945), National Bank of Ukraine (1991). "Reserve Bank": first in the U.S. with the Federal Reserve (1913) and thereafter in British colonies or dominions, e.g. South African Reserve Bank (1921), Reserve Bank of New Zealand (1934), Reserve Bank of India (1935), Reserve Bank of Australia (1960), Reserve Bank of Fiji (1984). "Central Bank": e.g. Central Bank of China (1924), Central Bank of the Republic of Turkey (1930), Central Bank of Argentina (1935), Central Bank of Ireland (1943), Central Bank of Paraguay (1952), Central Bank of Brazil (1964), European Central Bank (1998). "State Bank": e.g. State Bank of Pakistan (1948), State Bank of Vietnam (1951); also former central banks of Communist countries, e.g. the Soviet Gosbank (1922) or the State Bank of Czechoslovakia (1950). "People's Bank", also associated with Communism, is used by the People's Bank of China. "Monetary Authority", e.g. Monetary Authority of Singapore (1971), Maldives Monetary Authority (1981), Hong Kong Monetary Authority (1993), Cayman Islands Monetary Authority (1997). The Saudi Arabian Monetary Authority (est. 1952) was renamed the Saudi Central Bank in 2020 but still uses the acronym SAMA. In some cases, the local-language name is used in English-language practice, e.g. Sveriges Riksbank (est. 1668, current name in use since 1866), De Nederlandsche Bank (est. 1814), Deutsche Bundesbank (est. 1957), or Bangko Sentral ng Pilipinas (est. 1993). Some commercial banks have names suggestive of central banks, even if they are not: examples are the State Bank of India and Central Bank of India, National Bank of Greece, Banco do Brasil, National Bank of Pakistan, Bank of China, Bank of Cyprus, or Bank of Ireland, as well as Deutsche Bank. Some but not all of these institutions had assumed central banking roles in the past. The leading executive of a central bank is usually known as the Governor, President, or Chair. History Background The use of money as a unit of account predates history. Government control of money is documented in the ancient Egyptian economy (2750–2150 BCE). The Egyptians measured the value of goods with a central unit called shat. Like many other currencies, the shat was linked to gold. The value of a shat in terms of goods was defined by government administrations. Other cultures in Asia Minor later materialized their currencies in the form of gold and silver coins. The issuance of paper currency is not to be equated with central banking, even though paper currency is a form of financial money (i.e. not commodity money). The difference is that government-issued paper currency, as present e.g. in China during the Yuan dynasty, is typically not freely convertible and thus of inferior quality, occasionally leading to hyperinflation. From the 12th century, a network of professional banks emerged primarily in Southern Europe (including Southern France, with the Cahorsins). Banks could use book money to create deposits for their customers. 
Thus, they had the possibility to issue, lend and transfer money autonomously without direct control from political authorities. Early municipal central banks The Taula de canvi de Barcelona, established in 1401, is the first example of the municipal, mostly public banks that pioneered central banking on a limited scale. It was soon emulated by the Bank of Saint George in the Republic of Genoa, first established in 1407, and significantly later by the Banco del Giro in the Republic of Venice and by a network of institutions in Naples that later consolidated into Banco di Napoli. Notable municipal central banks were established in the early 17th century in leading northwestern European commercial centers, namely the Bank of Amsterdam in 1609 and the Hamburger Bank in 1619. These institutions offered a public infrastructure for cashless international payments. They aimed to increase the efficiency of international trade and to safeguard monetary stability. These municipal public banks thus fulfilled comparable functions to modern central banks. Early national central banks The Swedish central bank, known since 1866 as Sveriges Riksbank, was founded in Stockholm in 1668 from the remains of the failed Stockholms Banco and answered to the Riksdag of the Estates, Sweden's early modern parliament. One role of the Swedish central bank was lending money to the government. The establishment of the Bank of England was devised by Charles Montagu, 1st Earl of Halifax, following a 1691 proposal by William Paterson. A royal charter was granted in 1694 through the passage of the Tonnage Act. The bank was given exclusive possession of the government's balances, and was the only limited-liability corporation allowed to issue banknotes. The early modern Bank of England, however, did not have all the functions of today's central banks, e.g. to regulate the value of the national currency, to finance the government, to be the sole authorized distributor of banknotes, or to function as a lender of last resort to banks suffering a liquidity crisis. In the early 18th century, a major experiment in national central banking failed in France with John Law's Banque Royale in 1720–1721. Later in the century, France had other attempts with the Caisse d'Escompte, first created in 1767, and King Charles III established the Bank of Spain in 1782. The Russian Assignation Bank, established in 1769 by Catherine the Great, was an outlier from the general pattern of early national central banks in that it was directly owned by the Imperial Russian government, rather than by private individual shareholders. In the nascent United States, Alexander Hamilton, as Secretary of the Treasury in the 1790s, set up the First Bank of the United States despite heavy opposition from Jeffersonian Republicans. National central banks since 1800 Central banks were established in many European countries during the 19th century. Napoleon created the Banque de France in 1800, in order to stabilize and develop the French economy and to improve the financing of his wars. The Bank of France remained the most important Continental European central bank throughout the 19th century. The Bank of Finland was founded in 1812, soon after Finland had been taken over from Sweden by Russia to become a grand duchy. Simultaneously, a quasi-central banking role was played by a small group of powerful family-run banking networks, typified by the House of Rothschild, with branches in major cities across Europe, as well as Hottinguer in Switzerland and Oppenheim in Germany. 
The theory of central banking, even though the name was not yet widely used, evolved in the 19th century. Henry Thornton, an opponent of the real bills doctrine, was a defender of the bullionist position and a significant figure in monetary theory. Thornton's process of monetary expansion anticipated the theories of Knut Wicksell regarding the "cumulative process which restates the Quantity Theory in a theoretically coherent form". As a response to a currency crisis in 1797, Thornton wrote in 1802 An Enquiry into the Nature and Effects of the Paper Credit of Great Britain, in which he argued that the increase in paper credit did not cause the crisis. The book also gives a detailed account of the British monetary system as well as a detailed examination of the ways in which the Bank of England should act to counteract fluctuations in the value of the pound. In the United Kingdom until the mid-nineteenth century, commercial banks were able to issue their own banknotes, and notes issued by provincial banking companies were commonly in circulation. Many consider the origins of the central bank to lie with the passage of the Bank Charter Act 1844. Under the 1844 Act, bullionism was institutionalized in Britain, creating a ratio between the gold reserves held by the Bank of England and the notes that the bank could issue. The Act also placed strict curbs on the issuance of notes by the country banks. The Bank of England took over a role of lender of last resort in the 1870s after criticism of its lacklustre response to the failure of Overend, Gurney and Company. The journalist Walter Bagehot wrote on the subject in Lombard Street: A Description of the Money Market, in which he advocated for the bank to officially become a lender of last resort during a credit crunch, sometimes referred to as "Bagehot's dictum". In the 19th and early 20th centuries, central banks in most of Europe and Japan developed under the international gold standard. Free banking or currency boards were common at the time. Problems with collapses of banks during downturns, however, led to wider support for central banks in those nations which did not as yet possess them, for example in Australia. In the United States, the role of a central bank had been ended in the so-called Bank War of the 1830s by President Andrew Jackson. In 1913, the U.S. created the Federal Reserve System through the passing of the Federal Reserve Act. Following World War I, the Economic and Financial Organization (EFO) of the League of Nations, influenced by the ideas of Montagu Norman and other leading policymakers and economists of the time, took an active role in promoting the independence of central banks, a key component of the economic orthodoxy the EFO fostered at the Brussels Conference (1920). The EFO thus directed the creation of the Oesterreichische Nationalbank in Austria, Hungarian National Bank, Bank of Danzig, and Bank of Greece, as well as comprehensive reforms of the Bulgarian National Bank and Bank of Estonia. Similar ideas were emulated in other newly independent European countries, e.g. for the National Bank of Czechoslovakia. By 1935, the only significant independent nation that did not possess a central bank was Brazil, which subsequently developed a precursor thereto in 1945 and the present Central Bank of Brazil twenty years later. After gaining independence, numerous African and Asian countries also established central banks or monetary unions. 
The Reserve Bank of India, which had been established during British colonial rule as a private company, was nationalized in 1949 following India's independence. By the early 21st century, most of the world's countries had a national central bank set up as a public sector institution, albeit with widely varying degrees of independence. Colonial, extraterritorial and federal central banks Before the near-generalized adoption of the model of national public-sector central banks, a number of economies relied on a central bank that was effectively or legally run from outside their territory. The first colonial central banks, such as the Bank of Java (est. 1828 in Batavia), Banque de l'Algérie (est. 1851 in Algiers), or Hongkong and Shanghai Banking Corporation (est. 1865 in Hong Kong), operated from the colony itself. Following the generalization of the transcontinental use of the electrical telegraph using submarine communications cables, however, new colonial banks were typically headquartered in the colonial metropolis; prominent examples included the Paris-based Banque de l'Indochine (est. 1875), Banque de l'Afrique Occidentale (est. 1901), and Banque de Madagascar (est. 1925). The Banque de l'Algérie's head office was relocated from Algiers to Paris in 1900. In some cases, independent countries which did not have a strong domestic base of capital accumulation and were critically reliant on foreign funding found advantage in granting a central banking role to banks that were effectively or even legally foreign. A seminal case was the Imperial Ottoman Bank established in 1863 as a French-British joint venture, and a particularly egregious one was the Paris-based National Bank of Haiti (est. 1881), which captured significant financial resources from the economically struggling albeit independent nation of Haiti. Other cases include the London-based Imperial Bank of Persia, established in 1885, and the Rome-based National Bank of Albania, established in 1925. The State Bank of Morocco was established in 1907 with international shareholding and headquarters functions distributed between Paris and Tangier, a half-decade before the country lost its independence. In other cases, there have been organized currency unions such as the Belgium–Luxembourg Economic Union established in 1921, under which Luxembourg had no central bank, but whose currency was managed by a national central bank (in that case the National Bank of Belgium) rather than a supranational one. The present-day Common Monetary Area of Southern Africa has comparable features. Yet another pattern was set in countries where federated or otherwise sub-sovereign entities had wide policy autonomy that was echoed to varying degrees in the organization of the central bank itself. These included, for example, the Austro-Hungarian Bank from 1878 to 1918, the U.S. Federal Reserve in its first two decades, the Bank deutscher Länder between 1948 and 1957, or the National Bank of Yugoslavia between 1972 and 1993. Conversely, some countries that are politically organized as federations, such as today's Canada, Mexico, or Switzerland, rely on a unitary central bank. Supranational central banks In the second half of the 20th century, the dismantling of colonial systems left some groups of countries using the same currency even though they had achieved national independence. 
In contrast to the unraveling of Austria-Hungary and the Ottoman Empire after World War I, some of these countries decided to keep using a common currency, thus forming a monetary union, and to entrust its management to a common central bank. Examples include the Eastern Caribbean Currency Authority, the Central Bank of West African States, and the Bank of Central African States. The concept of supranational central banking took a globally significant dimension with the Economic and Monetary Union of the European Union and the establishment of the European Central Bank (ECB) in 1998. In 2014, the ECB took on an additional role of banking supervision as part of the newly established policy of European banking union. Central bank mandates Price stability The primary role of central banks is usually to maintain price stability, defined as a specific level of inflation. Inflation is defined either as the devaluation of a currency or equivalently the rise of prices relative to a currency. Most central banks currently have an inflation target close to 2%. Since inflation lowers real wages, Keynesians view inflation as the solution to involuntary unemployment. However, "unanticipated" inflation leads to lender losses as the real interest rate will be lower than expected. Thus, Keynesian monetary policy aims for a steady rate of inflation. Central banks as monetary authorities in representative states are intertwined through globalized financial markets. As a regulator of one of the most widespread currencies in the global economy, the US Federal Reserve plays an outsized role in the international monetary market. Being the main supplier and rate adjuster of US dollars, the Federal Reserve implements a set of requirements to control inflation and unemployment in the US. High employment Frictional unemployment is the time period between jobs when a worker is searching for, or transitioning from, one job to another. Unemployment beyond frictional unemployment is classified as unintended unemployment. For example, structural unemployment is a form of unemployment resulting from a mismatch between demand in the labour market and the skills and locations of the workers seeking employment. Macroeconomic policy generally aims to reduce unintended unemployment. Keynes labeled any jobs that would be created by a rise in the price of wage-goods (i.e., a decrease in real wages) as involuntary unemployment: Men are involuntarily unemployed if, in the event of a small rise in the price of wage-goods relatively to the money-wage, both the aggregate supply of labour willing to work for the current money-wage and the aggregate demand for it at that wage would be greater than the existing volume of employment.— John Maynard Keynes, The General Theory of Employment, Interest and Money p. 1 Economic growth Economic growth can be enhanced by investment in capital, such as more or better machinery. A low interest rate implies that firms can borrow money to invest in their capital stock and pay less interest for it. Lowering the interest rate is therefore considered to encourage economic growth and is often used to alleviate times of low economic growth. On the other hand, raising the interest rate is often used in times of high economic growth as a contra-cyclical device to keep the economy from overheating and avoid market bubbles. Further goals of monetary policy are stability of interest rates, of the financial market, and of the foreign exchange market. Goals frequently cannot be separated from each other and often conflict. 
Costs must therefore be carefully weighed before policy implementation. Climate change In the aftermath of the Paris agreement on climate change, a debate is now underway on whether central banks should also pursue environmental goals as part of their activities. In 2017, eight central banks formed the Network for Greening the Financial System (NGFS) to evaluate the way in which central banks can use their regulatory and monetary policy tools to support climate change mitigation. Today more than 70 central banks are part of the NGFS. In January 2020, the European Central Bank announced that it would take climate considerations into account when reviewing its monetary policy framework. Proponents of "green monetary policy" are proposing that central banks include climate-related criteria in their collateral eligibility frameworks, when conducting asset purchases, and in their refinancing operations. But critics such as Jens Weidmann argue that it is not central banks' role to conduct climate policy. China's central bank is among the most advanced when it comes to green monetary policy. It has given green bonds preferential status to lower their yield and uses window guidance to direct green lending. Central bank operations The functions of a central bank may include: Monetary policy: by setting the official interest rate and controlling the money supply; Financial stability: acting as a government's banker and as the bankers' bank ("lender of last resort"); Reserve management: managing a country's foreign-exchange and gold reserves and government bonds; Banking supervision: regulating and supervising the banking industry, and currency exchange; Payments system: managing or supervising means of payments and inter-banking clearing systems; Coins and notes issuance; Other functions of central banks may include economic research, statistical collection, supervision of deposit guarantee schemes, and advice to the government on financial policy. Monetary policy Central banks implement a country's chosen monetary policy. Currency issuance At the most basic level, monetary policy involves establishing what form of currency the country may have, whether a fiat currency, gold-backed currency (disallowed for countries in the International Monetary Fund), currency board or a currency union. When a country has its own national currency, this involves the issue of some form of standardized currency, which is essentially a form of promissory note: "money" under certain circumstances. Historically, this was often a promise to exchange the money for precious metals in some fixed amount. Now, when many currencies are fiat money, the "promise to pay" consists of the promise to accept that currency to pay for taxes. A central bank may use another country's currency either directly in a currency union, or indirectly through a currency board. In the latter case, exemplified by the Bulgarian National Bank, Hong Kong and Latvia (until 2014), the local currency is backed at a fixed rate by the central bank's holdings of a foreign currency. Similar to commercial banks, central banks hold assets (government bonds, foreign exchange, gold, and other financial assets) and incur liabilities (currency outstanding). Central banks create money by issuing banknotes and loaning them to the government in exchange for interest-bearing assets such as government bonds. 
When central banks decide to increase the money supply by an amount which is greater than the amount their national governments decide to borrow, the central banks may purchase private bonds or assets denominated in foreign currencies. The European Central Bank remits its interest income to the central banks of the member countries of the European Union. The US Federal Reserve remits most of its profits to the U.S. Treasury. This income, derived from the power to issue currency, is referred to as seigniorage, and usually belongs to the national government. The state-sanctioned power to create currency is called the Right of Issuance. Throughout history, there have been disagreements over this power, since whoever controls the creation of currency controls the seigniorage income. The expression "monetary policy" may also refer more narrowly to the interest-rate targets and other active measures undertaken by the monetary authority. Monetary policy instruments The primary tools available to central banks are open market operations (including repurchase agreements), reserve requirements, interest rate policy (through control of the discount rate), and control of the money supply. A central bank affects the monetary base through open market operations, if its country has a well developed market for its government bonds. This entails managing the quantity of money in circulation through the buying and selling of various financial instruments, such as treasury bills, repurchase agreements or "repos", company bonds, or foreign currencies, in exchange for money on deposit at the central bank. Those deposits are convertible to currency, so all of these purchases or sales result in more or less base currency entering or leaving market circulation. For example, if the central bank wishes to decrease interest rates (executing expansionary monetary policy), it purchases government debt, thereby increasing the amount of cash in circulation or crediting banks' reserve accounts. Commercial banks then have more money to lend, so they reduce lending rates, making loans less expensive. Cheaper credit card interest rates increase consumer spending. Additionally, when business loans are more affordable, companies can expand to keep up with consumer demand. They ultimately hire more workers, whose incomes increase, which in its turn also increases the demand. This method is usually enough to stimulate demand and drive economic growth to a healthy rate. Usually, the short-term goal of open market operations is to achieve a specific short-term interest rate target. In other instances, monetary policy might instead entail the targeting of a specific exchange rate relative to some foreign currency or else relative to gold. For example, in the case of the United States the Federal Reserve targets the federal funds rate, the rate at which member banks lend to one another overnight; however, the monetary policy of China (since 2014) is to target the exchange rate between the Chinese renminbi and a basket of foreign currencies. If the open market operations do not lead to the desired effects, a second tool can be used: the central bank can increase or decrease the interest rate it charges on discounts or overdrafts (loans from the central bank to commercial banks, see discount window). 
If the interest rate on such transactions is sufficiently low, commercial banks can borrow from the central bank to meet reserve requirements and use the additional liquidity to expand their balance sheets, increasing the credit available to the economy. A third alternative is to change the reserve requirements. The reserve requirement refers to the proportion of total liabilities that banks must keep on hand overnight, either in their vaults or at the central bank. Banks only maintain a small portion of their assets as cash available for immediate withdrawal; the rest is invested in illiquid assets like mortgages and loans. Lowering the reserve requirement frees up funds for banks to increase loans or buy other profitable assets. This is expansionary because it creates credit. However, even though this tool immediately increases liquidity, central banks rarely change the reserve requirement because doing so frequently adds uncertainty to banks' planning. The use of open market operations is therefore preferred. Unconventional monetary policy Other forms of monetary policy, particularly used when interest rates are at or near 0% and there are concerns about deflation or deflation is occurring, are referred to as unconventional monetary policy. These include credit easing, quantitative easing, forward guidance, and signalling. In credit easing, a central bank purchases private sector assets to improve liquidity and improve access to credit. Signalling can be used to lower market expectations for future interest rates. For example, during the credit crisis of 2008, the US Federal Reserve indicated rates would be low for an "extended period", and the Bank of Canada made a "conditional commitment" to keep rates at the lower bound of 25 basis points (0.25%) until the end of the second quarter of 2010. Some have envisaged the use of what Milton Friedman once called "helicopter money" whereby the central bank would make direct transfers to citizens in order to lift inflation up to the central bank's intended target. Such a policy option could be particularly effective at the zero lower bound. Central Bank Digital Currencies Since 2017, the prospect of implementing a central bank digital currency (CBDC) has been under discussion. As of the end of 2018, at least 15 central banks were considering implementing a CBDC. Since 2014, the People's Bank of China has been working on a project to create its own digital currency and electronic payment systems. Banking supervision and other activities In some countries a central bank, through its subsidiaries, controls and monitors the banking sector. In other countries banking supervision is carried out by a government department such as the UK Treasury, or by an independent government agency, for example, the UK's Financial Conduct Authority. It examines the banks' balance sheets and behaviour and policies toward consumers. Apart from refinancing, it also provides banks with services such as transfer of funds, bank notes and coins or foreign currency. Thus it is often described as the "bank of banks". Many countries will monitor and control the banking sector through several different agencies and for different purposes. Bank regulation in the United States, for example, is highly fragmented, with three federal agencies (the Federal Deposit Insurance Corporation, the Federal Reserve Board, and the Office of the Comptroller of the Currency) and numerous others at the state and private level. 
There is usually significant cooperation between the agencies. For example, money center banks, deposit-taking institutions, and other types of financial institutions may be subject to different (and occasionally overlapping) regulation. Some types of banking regulation may be delegated to other levels of government, such as state or provincial governments. Any cartel of banks is particularly closely watched and controlled. Most countries control bank mergers and are wary of concentration in this industry due to the danger of groupthink and runaway lending bubbles based on a single point of failure, the credit culture of the few large banks. Central bank governance and independence Numerous governments have opted to make central banks independent. The economic logic behind central bank independence is that when governments delegate monetary policy to an independent central bank (with an anti-inflationary purpose) and away from elected politicians, monetary policy will not reflect the interests of the politicians. When governments control monetary policy, politicians may be tempted to boost economic activity in advance of an election to the detriment of the long-term health of the economy and the country. As a consequence, financial markets may not consider future commitments to low inflation to be credible when monetary policy is in the hands of elected officials, which increases the risk of capital flight. An alternative to central bank independence is to have fixed exchange rate regimes. Governments generally have some degree of influence over even "independent" central banks; the aim of independence is primarily to prevent short-term interference. In 1951, the Deutsche Bundesbank became the first central bank to be given full independence, leading this form of central bank to be referred to as the "Bundesbank model", as opposed, for instance, to the New Zealand model, which has a goal (i.e. inflation target) set by the government. Central bank independence is usually guaranteed by legislation and the institutional framework governing the bank's relationship with elected officials, particularly the minister of finance. Central bank legislation will enshrine specific procedures for selecting and appointing the head of the central bank. Often the minister of finance will appoint the governor in consultation with the central bank's board and its incumbent governor. In addition, the legislation will specify the bank governor's term of appointment. The most independent central banks enjoy a fixed non-renewable term for the governor in order to eliminate pressure on the governor to please the government in the hope of being re-appointed for a second term. Generally, independent central banks enjoy both goal and instrument independence. Despite their independence, central banks are usually accountable at some level to government officials, either to the finance ministry or to parliament. For example, the members of the Board of Governors of the U.S. Federal Reserve are nominated by the U.S. president and confirmed by the Senate; the Federal Reserve publishes verbatim transcripts; and its balance sheets are audited by the Government Accountability Office. In the 1990s there was a trend towards increasing the independence of central banks as a way of improving long-term economic performance. While a large volume of economic research has been done to define the relationship between central bank independence and economic performance, the results are ambiguous. 
The literature on central bank independence has defined a number of cumulative and complementary aspects: Institutional independence: The independence of the central bank is enshrined in law and shields central banks from political interference. In general terms, institutional independence means that politicians should refrain from seeking to influence monetary policy decisions, while symmetrically central banks should also avoid influencing government politics. Goal independence: The central bank has the right to set its own policy goals, whether inflation targeting, control of the money supply, or maintaining a fixed exchange rate. While this type of independence is more common, many central banks prefer to announce their policy goals in partnership with the appropriate government departments. This increases the transparency of the policy-setting process and thereby increases the credibility of the goals chosen by providing assurance that they will not be changed without notice. In addition, the setting of common goals by the central bank and the government helps to avoid situations where monetary and fiscal policy are in conflict; a policy combination that is clearly sub-optimal. Functional & operational independence: The central bank has the independence to determine the best way of achieving its policy goals, including the types of instruments used and the timing of their use. To achieve its mandate, the central bank has the authority to run its own operations (appointing staff, setting budgets, and so on) and to organize its internal structures without excessive involvement of the government. This is the most common form of central bank independence. The granting of independence to the Bank of England in 1997 was, in fact, the granting of operational independence; the inflation target continued to be announced in the Chancellor's annual budget speech to Parliament. Personal independence: The other forms of independence are not possible unless central bank heads have a high security of tenure. In practice, this means that governors should hold long mandates (at least longer than the electoral cycle) and a certain degree of legal immunity. One of the most common statistical indicators used in the literature as a proxy for central bank independence is the "turnover rate" of central bank governors. If a government is in the habit of appointing and replacing the governor frequently, it clearly has the capacity to micro-manage the central bank through its choice of governors. Financial independence: Central banks have full autonomy over their budget, and some are even prohibited from financing governments. This is meant to remove incentives from politicians to influence central banks. Legal independence: Some central banks have their own legal personality, which allows them to ratify international agreements without the government's approval (like the ECB), and to go to court. There is very strong consensus among economists that an independent central bank can run a more credible monetary policy, making market expectations more responsive to signals from the central bank. Both the Bank of England (1997) and the European Central Bank have been made independent and follow a set of published inflation targets so that markets know what to expect. 
Even the People's Bank of China has been accorded great latitude, though in China the official role of the bank remains that of a national bank rather than a central bank, underlined by the official refusal to "unpeg" the yuan or to revalue it "under pressure". The fact that the Communist Party is not elected also relieves the pressure to please people, increasing its independence. Populism can reduce de facto central bank independence. International organizations such as the World Bank, the Bank for International Settlements (BIS) and the International Monetary Fund (IMF) strongly support central bank independence. This results, in part, from a belief in the intrinsic merits of increased independence. The support for independence from the international organizations also derives partly from the connection between increased independence for the central bank and increased transparency in the policy-making process. The IMF's Financial Sector Assessment Program (FSAP) review self-assessment, for example, includes a number of questions about central bank independence in the transparency section. An independent central bank will score higher in the review than one that is not independent. Central bank independence indices Central bank independence indices allow a quantitative analysis of central bank independence for individual countries over time. One central bank independence index is the Garriga CBI, where a higher index indicates higher central bank independence, shown below for individual countries. Statistics Collectively, central banks purchase less than 500 tonnes of gold each year, on average (out of an annual global production of 2,500–3,000 tonnes). In 2018, central banks collectively held over 33,000 metric tons of gold, about a fifth of all the gold ever mined, according to Bloomberg News. In 2016, 75% of the world's central-bank assets were controlled by four centers in China, the United States, Japan and the eurozone. The central banks of Brazil, Switzerland, Saudi Arabia, the U.K., India and Russia each account for an average of 2.5 percent. The remaining 107 central banks hold less than 13 percent. According to data compiled by Bloomberg News, the top 10 largest central banks owned $21.4 trillion in assets, a 10 percent increase from 2015. See also Fractional-reserve banking Free banking Full-reserve banking National bank State bank Bank for International Settlements History of central banking in the United States List of central banks References Further reading Acocella, N., Di Bartolomeo, G., and Hughes Hallett, A. [2012], "Central banks and economic policy after the crisis: what have we learned?", ch. 5 in: Baker, H. K. and Riddick, L. A. (eds.), Survey of International Finance, Oxford University Press. External links List of central bank websites at the Bank for International Settlements International Journal of Central Banking "The Federal Reserve System: Purposes and Functions" – A publication of the U.S. Federal Reserve, describing its role in the macroeconomy – C E V Borio, Bank for International Settlements, Basel Banks Banking terms
5667
https://en.wikipedia.org/wiki/Chlorine
Chlorine
Chlorine is a chemical element with the symbol Cl and atomic number 17. The second-lightest of the halogens, it appears between fluorine and bromine in the periodic table and its properties are mostly intermediate between them. Chlorine is a yellow-green gas at room temperature. It is an extremely reactive element and a strong oxidising agent: among the elements, it has the highest electron affinity and the third-highest electronegativity on the revised Pauling scale, behind only oxygen and fluorine. Chlorine played an important role in the experiments conducted by medieval alchemists, which commonly involved the heating of chloride salts like ammonium chloride (sal ammoniac) and sodium chloride (common salt), producing various chemical substances containing chlorine such as hydrogen chloride, mercury(II) chloride (corrosive sublimate), and hydrochloric acid. However, the nature of free chlorine gas as a separate substance was only recognised around 1630 by Jan Baptist van Helmont. Carl Wilhelm Scheele wrote a description of chlorine gas in 1774, supposing it to be an oxide of a new element. In 1809, chemists suggested that the gas might be a pure element, and this was confirmed by Sir Humphry Davy in 1810, who named it after the Ancient Greek χλωρός (khlōros, "pale green") because of its colour. Because of its great reactivity, all chlorine in the Earth's crust is in the form of ionic chloride compounds, which includes table salt. It is the second-most abundant halogen (after fluorine) and twenty-first most abundant chemical element in Earth's crust. These crustal deposits are nevertheless dwarfed by the huge reserves of chloride in seawater. Elemental chlorine is commercially produced from brine by electrolysis, predominantly in the chlor-alkali process. The high oxidising potential of elemental chlorine led to the development of commercial bleaches and disinfectants, and to its use as a reagent for many processes in the chemical industry. Chlorine is used in the manufacture of a wide range of consumer products, about two-thirds of them organic chemicals such as polyvinyl chloride (PVC), many intermediates for the production of plastics, and other end products which do not contain the element. As a common disinfectant, elemental chlorine and chlorine-generating compounds are used more directly in swimming pools to keep them sanitary. Elemental chlorine at high concentration is extremely dangerous, and poisonous to most living organisms. As a chemical warfare agent, chlorine was first used in World War I as a poison gas weapon. In the form of chloride ions, chlorine is necessary to all known species of life. Other types of chlorine compounds are rare in living organisms, and artificially produced chlorinated organics range from inert to toxic. In the upper atmosphere, chlorine-containing organic molecules such as chlorofluorocarbons have been implicated in ozone depletion. Small quantities of elemental chlorine are generated by oxidation of chloride ions in neutrophils as part of an immune system response against bacteria. History The most common compound of chlorine, sodium chloride, has been known since ancient times; archaeologists have found evidence that rock salt was used as early as 3000 BC and brine as early as 6000 BC. 
Early discoveries Around 900, the authors of the Arabic writings attributed to Jabir ibn Hayyan (Latin: Geber) and the Persian physician and alchemist Abu Bakr al-Razi (c. 865–925, Latin: Rhazes) were experimenting with sal ammoniac (ammonium chloride), which when it was distilled together with vitriol (hydrated sulfates of various metals) produced hydrogen chloride. However, it appears that in these early experiments with chloride salts, the gaseous products were discarded, and hydrogen chloride may have been produced many times before it was discovered that it can be put to chemical use. One of the first such uses was the synthesis of mercury(II) chloride (corrosive sublimate), whose production from the heating of mercury either with alum and ammonium chloride or with vitriol and sodium chloride was first described in the De aluminibus et salibus ("On Alums and Salts", an eleventh- or twelfth-century Arabic text falsely attributed to Abu Bakr al-Razi and translated into Latin in the second half of the twelfth century by Gerard of Cremona, 1144–1187). Another important development was the discovery by pseudo-Geber (in the De inventione veritatis, "On the Discovery of Truth", after c. 1300) that by adding ammonium chloride to nitric acid, a strong solvent capable of dissolving gold (i.e., aqua regia) could be produced. Although aqua regia is an unstable mixture that continually gives off fumes containing free chlorine gas, this chlorine gas appears to have been ignored until c. 1630, when its nature as a separate gaseous substance was recognised by the Brabantian chemist and physician Jan Baptist van Helmont. Isolation The element was first studied in detail in 1774 by Swedish chemist Carl Wilhelm Scheele, and he is credited with the discovery. Scheele produced chlorine by reacting MnO2 (as the mineral pyrolusite) with HCl: 4 HCl + MnO2 → MnCl2 + 2 H2O + Cl2 Scheele observed several of the properties of chlorine: the bleaching effect on litmus, the deadly effect on insects, the yellow-green color, and the smell similar to aqua regia. He called it "dephlogisticated muriatic acid air" since it is a gas (then called "airs") and it came from hydrochloric acid (then known as "muriatic acid"). He failed to establish chlorine as an element. Common chemical theory at that time held that an acid is a compound that contains oxygen (remnants of this survive in the German and Dutch names of oxygen: Sauerstoff or zuurstof, both translating into English as acid substance), so a number of chemists, including Claude Berthollet, suggested that Scheele's dephlogisticated muriatic acid air must be a combination of oxygen and the yet undiscovered element, muriaticum. In 1809, Joseph Louis Gay-Lussac and Louis-Jacques Thénard tried to decompose dephlogisticated muriatic acid air by reacting it with charcoal to release the free element muriaticum (and carbon dioxide). They did not succeed and published a report in which they considered the possibility that dephlogisticated muriatic acid air is an element, but were not convinced. In 1810, Sir Humphry Davy tried the same experiment again, and concluded that the substance was an element, and not a compound. He announced his results to the Royal Society on 15 November that year. At that time, he named this new element "chlorine", from the Greek word χλωρος (chlōros, "green-yellow"), in reference to its color. The name "halogen", meaning "salt producer", was originally used for chlorine in 1811 by Johann Salomo Christoph Schweigger. 
This term was later used as a generic term to describe all the elements in the chlorine family (fluorine, bromine, iodine), after a suggestion by Jöns Jakob Berzelius in 1826. In 1823, Michael Faraday liquefied chlorine for the first time, and demonstrated that what was then known as "solid chlorine" had a structure of chlorine hydrate (Cl2·H2O). Later uses Chlorine gas was first used by French chemist Claude Berthollet to bleach textiles in 1785. Modern bleaches resulted from further work by Berthollet, who first produced sodium hypochlorite in 1789 in his laboratory in the town of Javel (now part of Paris, France), by passing chlorine gas through a solution of sodium carbonate. The resulting liquid, known as "Eau de Javel" ("Javel water"), was a weak solution of sodium hypochlorite. This process was not very efficient, and alternative production methods were sought. Scottish chemist and industrialist Charles Tennant first produced a solution of calcium hypochlorite ("chlorinated lime"), then solid calcium hypochlorite (bleaching powder). These compounds produced low levels of elemental chlorine and could be more efficiently transported than sodium hypochlorite, which remained as dilute solutions because when purified to eliminate water, it became a dangerously powerful and unstable oxidizer. Near the end of the nineteenth century, E. S. Smith patented a method of sodium hypochlorite production involving electrolysis of brine to produce sodium hydroxide and chlorine gas, which then mixed to form sodium hypochlorite. This is known as the chloralkali process, first introduced on an industrial scale in 1892, and now the source of most elemental chlorine and sodium hydroxide. In 1884 Chemischen Fabrik Griesheim of Germany developed another chloralkali process which entered commercial production in 1888. Elemental chlorine solutions dissolved in chemically basic water (sodium and calcium hypochlorite) were first used as anti-putrefaction agents and disinfectants in the 1820s, in France, long before the establishment of the germ theory of disease. This practice was pioneered by Antoine-Germain Labarraque, who adapted Berthollet's "Javel water" bleach and other chlorine preparations. Elemental chlorine has since served a continuous function in topical antisepsis (wound irrigation solutions and the like) and public sanitation, particularly in swimming and drinking water. Chlorine gas was first used as a weapon on April 22, 1915 at the Second Battle of Ypres by the German Army. The effect on the allies was devastating because the existing gas masks were difficult to deploy and had not been broadly distributed. Properties Chlorine is the second halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to fluorine, bromine, and iodine, and are largely intermediate between those of the first two. Chlorine has the electron configuration [Ne]3s23p5, with the seven electrons in the third and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between fluorine and bromine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than fluorine and more reactive than bromine. It is also a weaker oxidising agent than fluorine, but a stronger one than bromine. 
Conversely, the chloride ion is a weaker reducing agent than bromide, but a stronger one than fluoride. It is intermediate in atomic radius between fluorine and bromine, and this leads to many of its atomic properties similarly continuing the trend from iodine to bromine upward, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. (Fluorine is anomalous due to its small size.) All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of chlorine are intermediate between those of fluorine and bromine: chlorine melts at −101.0 °C and boils at −34.0 °C. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of chlorine are again intermediate between those of bromine and fluorine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: thus, while fluorine is a pale yellow gas, chlorine is distinctly yellow-green. This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as chlorine, results from the electron transition between the highest occupied antibonding πg molecular orbital and the lowest vacant antibonding σu molecular orbital. The colour fades at low temperatures, so that solid chlorine at −195 °C is almost colourless. Like solid bromine and iodine, solid chlorine crystallises in the orthorhombic crystal system, in a layered lattice of Cl2 molecules. The Cl–Cl distance is 198 pm (close to the gaseous Cl–Cl distance of 199 pm) and the Cl···Cl distance between molecules is 332 pm within a layer and 382 pm between layers (compare the van der Waals radius of chlorine, 180 pm). This structure means that chlorine is a very poor conductor of electricity, and indeed its conductivity is so low as to be practically unmeasurable. Isotopes Chlorine has two stable isotopes, 35Cl and 37Cl. These are its only two natural isotopes occurring in quantity, with 35Cl making up 76% of natural chlorine and 37Cl making up the remaining 24%. Both are synthesised in stars in the oxygen-burning and silicon-burning processes. Both have nuclear spin 3/2+ and thus may be used for nuclear magnetic resonance, although the spin magnitude being greater than 1/2 results in non-spherical nuclear charge distribution and thus resonance broadening as a result of a nonzero nuclear quadrupole moment and resultant quadrupolar relaxation. The other chlorine isotopes are all radioactive, with half-lives too short to occur in nature primordially. Of these, the most commonly used in the laboratory are 36Cl (t1/2 = 3.0×105 y) and 38Cl (t1/2 = 37.2 min), which may be produced from the neutron activation of natural chlorine. The most stable chlorine radioisotope is 36Cl. The primary decay mode of isotopes lighter than 35Cl is electron capture to isotopes of sulfur; that of isotopes heavier than 37Cl is beta decay to isotopes of argon; and 36Cl may decay by either mode to stable 36S or 36Ar. 
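As a rough check on these figures, the standard atomic weight of chlorine can be estimated as the abundance-weighted mean of the masses of its two stable isotopes. The short Python sketch below uses the rounded abundances quoted above (76% and 24%) together with the isotopic masses of 35Cl and 37Cl; treat it as an illustration rather than a precise determination.

# Estimate the standard atomic weight of chlorine from its two stable isotopes.
# Abundances are the rounded values quoted in the text; masses are in unified
# atomic mass units (u).
isotopes = {
    "35Cl": {"mass": 34.96885, "abundance": 0.76},
    "37Cl": {"mass": 36.96590, "abundance": 0.24},
}

atomic_weight = sum(iso["mass"] * iso["abundance"] for iso in isotopes.values())
print(f"Estimated atomic weight of Cl: {atomic_weight:.2f} u")  # about 35.45 u

Using the more precise natural abundances (about 75.8% and 24.2%) the same calculation reproduces the accepted standard atomic weight of roughly 35.45.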
36Cl occurs in trace quantities in nature as a cosmogenic nuclide in a ratio of about (7–10) × 10−13 to 1 with stable chlorine isotopes: it is produced in the atmosphere by spallation of 36Ar by interactions with cosmic ray protons. In the top meter of the lithosphere, 36Cl is generated primarily by thermal neutron activation of 35Cl and spallation of 39K and 40Ca. In the subsurface environment, muon capture by 40Ca becomes more important as a way to generate 36Cl. Chemistry and compounds Chlorine is intermediate in reactivity between fluorine and bromine, and is one of the most reactive elements. Chlorine is a weaker oxidising agent than fluorine but a stronger one than bromine or iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F, +2.866 V; Cl, +1.395 V; Br, +1.087 V; I, +0.615 V; At, approximately +0.3 V). However, this trend is not shown in the bond energies because fluorine is singular due to its small size, low polarisability, and inability to show hypervalence. As another difference, chlorine has a significant chemistry in positive oxidation states while fluorine does not. Chlorination often leads to higher oxidation states than bromination or iodination but lower oxidation states than fluorination. Chlorine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Cl bonds. Given that E°(O2/H2O) = +1.229 V, which is less than +1.395 V, it would be expected that chlorine should be able to oxidise water to oxygen and hydrochloric acid. However, the kinetics of this reaction are unfavorable, and there is also a bubble overpotential effect to consider, so that electrolysis of aqueous chloride solutions evolves chlorine gas and not oxygen gas, a fact that is very useful for the industrial production of chlorine. Hydrogen chloride The simplest chlorine compound is hydrogen chloride, HCl, a major chemical in industry as well as in the laboratory, both as a gas and dissolved in water as hydrochloric acid. It is often produced by burning hydrogen gas in chlorine gas, or as a byproduct of chlorinating hydrocarbons. Another approach is to treat sodium chloride with concentrated sulfuric acid to produce hydrochloric acid, also known as the "salt-cake" process: NaCl + H2SO4 → NaHSO4 + HCl; NaCl + NaHSO4 → Na2SO4 + HCl. In the laboratory, hydrogen chloride gas may be made by drying the acid with concentrated sulfuric acid. Deuterium chloride, DCl, may be produced by reacting benzoyl chloride with heavy water (D2O). At room temperature, hydrogen chloride is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the larger electronegative chlorine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen chloride at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Hydrochloric acid is a strong acid (pKa = −7) because the hydrogen bonds to chlorine are too weak to inhibit dissociation. The HCl/H2O system has many hydrates HCl·nH2O for n = 1, 2, 3, 4, and 6. Beyond a 1:1 mixture of HCl and H2O, the system separates completely into two separate liquid phases. Hydrochloric acid forms an azeotrope with boiling point 108.58 °C at 20.22 g HCl per 100 g solution; thus hydrochloric acid cannot be concentrated beyond this point by distillation. 
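The thermodynamic argument made above for the oxidation of water by chlorine can be made quantitative using only the standard potentials quoted there. The sketch below is a minimal illustration for the overall reaction 2 Cl2 + 2 H2O → 4 HCl + O2 at 25 °C (an assumed temperature); the large positive equilibrium constant confirms that the reaction is thermodynamically allowed, so its failure to occur in practice is purely a matter of kinetics.

import math

# Standard reduction potentials from the text (volts)
E_Cl2_Cl = 1.395   # Cl2 + 2 e- -> 2 Cl-        (cathode)
E_O2_H2O = 1.229   # O2 + 4 H+ + 4 e- -> 2 H2O  (reversed at the anode)

n = 4                    # electrons transferred in 2 Cl2 + 2 H2O -> 4 HCl + O2
F = 96485.0              # Faraday constant, C/mol
R, T = 8.314, 298.15     # gas constant (J/(mol K)) and assumed temperature (K)

E_cell = E_Cl2_Cl - E_O2_H2O       # +0.166 V
dG = -n * F * E_cell               # about -64 kJ/mol
K = math.exp(-dG / (R * T))        # about 1e11

print(f"E_cell = {E_cell:.3f} V, dG = {dG/1000:.0f} kJ/mol, K = {K:.1e}")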
Unlike hydrogen fluoride, anhydrous liquid hydrogen chloride is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Cl+ and HCl2− ions – the latter, in any case, are much less stable than the bifluoride ions (HF2−) due to the very weak hydrogen bonding between hydrogen and chlorine, though its salts with very large and weakly polarising cations such as Cs+ and NR4+ (R = Me, Et, Bun) may still be isolated. Anhydrous hydrogen chloride is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. It readily protonates electrophiles containing lone-pairs or π bonds. Solvolysis, ligand replacement reactions, and oxidations are well-characterised in hydrogen chloride solution: Ph3SnCl + HCl ⟶ Ph2SnCl2 + PhH (solvolysis); Ph3COH + 3 HCl ⟶ Ph3C+HCl2− + H3O+Cl− (solvolysis); Me4N+HCl2− + BCl3 ⟶ Me4N+BCl4− + HCl (ligand replacement); PCl3 + Cl2 + HCl ⟶ PCl4+HCl2− (oxidation). Other binary chlorides Nearly all elements in the periodic table form binary chlorides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the highly unstable XeCl2 and XeCl4); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than chlorine's (oxygen and fluorine) so that the resultant binary compounds are formally not chlorides but rather oxides or fluorides of chlorine. Even though the nitrogen in NCl3 bears a negative charge, the compound is usually called nitrogen trichloride. Chlorination of metals with Cl2 usually leads to a higher oxidation state than bromination with Br2 when multiple oxidation states are available, such as in MoCl5 and MoBr3. Chlorides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrochloric acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen chloride gas. These methods work best when the chloride product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative chlorination of the element with chlorine or hydrogen chloride, high-temperature chlorination of a metal oxide or other halide by chlorine, a volatile metal chloride, carbon tetrachloride, or an organic chloride. For instance, zirconium dioxide reacts with chlorine at standard conditions to produce zirconium tetrachloride, and uranium trioxide reacts with hexachloropropene when heated under reflux to give uranium tetrachloride. The second example also involves a reduction in oxidation state, which can also be achieved by reducing a higher chloride using hydrogen or a metal as a reducing agent. This may also be achieved by thermal decomposition or disproportionation as follows: 2 EuCl3 + H2 ⟶ 2 EuCl2 + 2 HCl; ReCl5 ⟶ ReCl3 + Cl2; AuCl3 ⟶ AuCl + Cl2. Most metal chlorides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular chlorides, as do metals in high oxidation states from +3 and above. Both ionic and covalent chlorides are known for metals in oxidation state +3 (e.g. scandium chloride is mostly ionic, but aluminium chloride is not). 
Silver chloride is very insoluble in water and is thus often used as a qualitative test for chloride. Polychlorine compounds Although dichlorine is a strong oxidising agent with a high first ionisation energy, it may be oxidised under extreme conditions to form the Cl2+ cation. This is very unstable and has only been characterised by its electronic band spectrum when produced in a low-pressure discharge tube. The yellow Cl3+ cation is more stable and may be produced as follows: Cl2 + ClF + AsF5 ⟶ [Cl3]+[AsF6]−. This reaction is conducted in the oxidising solvent arsenic pentafluoride. The trichloride anion, Cl3−, has also been characterised; it is analogous to triiodide. Chlorine fluorides The three fluorides of chlorine form a subset of the interhalogen compounds, all of which are diamagnetic. Some cationic and anionic derivatives are known, such as ClF2−, ClF4−, ClF2+, and Cl2F+. Some pseudohalides of chlorine are also known, such as cyanogen chloride (ClCN, linear), chlorine cyanate (ClNCO), chlorine thiocyanate (ClSCN, unlike its oxygen counterpart), and chlorine azide (ClN3). Chlorine monofluoride (ClF) is extremely thermally stable, and is sold commercially in 500-gram steel lecture bottles. It is a colourless gas that melts at −155.6 °C and boils at −100.1 °C. It may be produced by the reaction of its elements at 225 °C, though it must then be separated and purified from chlorine trifluoride and its reactants. Its properties are mostly intermediate between those of chlorine and fluorine. It will react with many metals and nonmetals from room temperature and above, fluorinating them and liberating chlorine. It will also act as a chlorofluorinating agent, adding chlorine and fluorine across a multiple bond or by oxidation: for example, it will attack carbon monoxide to form carbonyl chlorofluoride, COFCl. It will react analogously with hexafluoroacetone, (CF3)2CO, with a potassium fluoride catalyst to produce heptafluoroisopropyl hypochlorite, (CF3)2CFOCl; with nitriles RCN to produce RCF2NCl2; and with the sulfur oxides SO2 and SO3 to produce ClSO2F and ClOSO2F respectively. It will also react exothermically with compounds containing –OH and –NH groups, such as water: H2O + 2 ClF ⟶ 2 HF + Cl2O. Chlorine trifluoride (ClF3) is a volatile colourless molecular liquid which melts at −76.3 °C and boils at 11.8 °C. It may be formed by directly fluorinating gaseous chlorine or chlorine monofluoride at 200–300 °C. One of the most reactive chemical compounds known, the list of elements it sets on fire is diverse, containing hydrogen, potassium, phosphorus, arsenic, antimony, sulfur, selenium, tellurium, bromine, iodine, and powdered molybdenum, tungsten, rhodium, iridium, and iron. It will also ignite water, along with many substances which in ordinary circumstances would be considered chemically inert, such as asbestos, concrete, glass, and sand. When heated, it will even corrode noble metals such as palladium, platinum, and gold, and even the noble gases xenon and radon do not escape fluorination. An impermeable fluoride layer is formed by sodium, magnesium, aluminium, zinc, tin, and silver, which may be removed by heating. Nickel, copper, and steel containers are usually used due to their great resistance to attack by chlorine trifluoride, stemming from the formation of an unreactive layer of metal fluoride. Its reaction with hydrazine to form hydrogen fluoride, nitrogen, and chlorine gases was used in experimental rocket engines, but has problems largely stemming from its extreme hypergolicity resulting in ignition without any measurable delay. 
Today, it is mostly used in nuclear fuel processing, to oxidise uranium to uranium hexafluoride for enrichment and to separate it from plutonium, as well as in the semiconductor industry, where it is used to clean chemical vapor deposition chambers. It can act as a fluoride ion donor or acceptor (Lewis base or acid), although it does not dissociate appreciably into ClF2+ and ClF4− ions. Chlorine pentafluoride (ClF5) is made on a large scale by direct fluorination of chlorine with excess fluorine gas at 350 °C and 250 atm, and on a small scale by reacting metal chlorides with fluorine gas at 100–300 °C. It melts at −103 °C and boils at −13.1 °C. It is a very strong fluorinating agent, although it is still not as effective as chlorine trifluoride. Only a few specific stoichiometric reactions have been characterised. Arsenic pentafluoride and antimony pentafluoride form ionic adducts of the form [ClF4]+[MF6]− (M = As, Sb) and water reacts vigorously as follows: 2 H2O + ClF5 ⟶ 4 HF + FClO2. The product, chloryl fluoride, is one of the five known chlorine oxide fluorides. These range from the thermally unstable FClO to the chemically unreactive perchloryl fluoride (FClO3), the other three being FClO2, F3ClO, and F3ClO2. All five behave similarly to the chlorine fluorides, both structurally and chemically, and may act as Lewis acids or bases by gaining or losing fluoride ions respectively or as very strong oxidising and fluorinating agents. Chlorine oxides The chlorine oxides are well-studied in spite of their instability (all of them are endothermic compounds). They are important because they are produced when chlorofluorocarbons undergo photolysis in the upper atmosphere and cause the destruction of the ozone layer. None of them can be made from directly reacting the elements. Dichlorine monoxide (Cl2O) is a brownish-yellow gas (red-brown when solid or liquid) which may be obtained by reacting chlorine gas with yellow mercury(II) oxide. It is very soluble in water, in which it is in equilibrium with hypochlorous acid (HOCl), of which it is the anhydride. It is thus an effective bleach and is mostly used to make hypochlorites. It explodes on heating or sparking or in the presence of ammonia gas. Chlorine dioxide (ClO2) was the first chlorine oxide to be discovered, in 1811, by Humphry Davy. It is a yellow paramagnetic gas (deep-red as a solid or liquid), as expected from its having an odd number of electrons: it is stable towards dimerisation due to the delocalisation of the unpaired electron. It explodes above −40 °C as a liquid and under pressure as a gas and therefore must be made at low concentrations for wood-pulp bleaching and water treatment. It is usually prepared by reducing a chlorate as follows: ClO3− + Cl− + 2 H+ ⟶ ClO2 + ½ Cl2 + H2O. Its production is thus intimately linked to the redox reactions of the chlorine oxoacids. It is a strong oxidising agent, reacting with sulfur, phosphorus, phosphorus halides, and potassium borohydride. It dissolves exothermically in water to form dark-green solutions that very slowly decompose in the dark. Crystalline clathrate hydrates ClO2·nH2O (n ≈ 6–10) separate out at low temperatures. However, in the presence of light, these solutions rapidly photodecompose to form a mixture of chloric and hydrochloric acids. Photolysis of individual ClO2 molecules results in the radicals ClO and ClOO, while at room temperature mostly chlorine, oxygen, and some ClO3 and Cl2O6 are produced. 
Cl2O3 is also produced when solid chlorine dioxide is photolysed at −78 °C: it is a dark brown solid that explodes below 0 °C. The ClO radical leads to the depletion of atmospheric ozone and is thus environmentally important as follows: Cl• + O3 ⟶ ClO• + O2; ClO• + O• ⟶ Cl• + O2. Chlorine perchlorate (ClOClO3) is a pale yellow liquid that is less stable than ClO2 and decomposes at room temperature to form chlorine, oxygen, and dichlorine hexoxide (Cl2O6). Chlorine perchlorate may also be considered a chlorine derivative of perchloric acid (HOClO3), similar to the thermally unstable chlorine derivatives of other oxoacids: examples include chlorine nitrate (ClONO2, vigorously reactive and explosive) and chlorine fluorosulfate (ClOSO2F, more stable but still moisture-sensitive and highly reactive). Dichlorine hexoxide is a dark-red liquid that freezes to form a solid which turns yellow at −180 °C: it is usually made by reaction of chlorine dioxide with oxygen. Despite attempts to rationalise it as the dimer of ClO3, it reacts more as though it were chloryl perchlorate, [ClO2]+[ClO4]−, which has been confirmed to be the correct structure of the solid. It hydrolyses in water to give a mixture of chloric and perchloric acids: the analogous reaction with anhydrous hydrogen fluoride does not proceed to completion. Dichlorine heptoxide (Cl2O7) is the anhydride of perchloric acid (HClO4) and can readily be obtained from it by dehydrating it with phosphoric acid at −10 °C and then distilling the product at −35 °C and 1 mmHg. It is a shock-sensitive, colourless oily liquid. It is the least reactive of the chlorine oxides, being the only one to not set organic materials on fire at room temperature. It may be dissolved in water to regenerate perchloric acid or in aqueous alkalis to regenerate perchlorates. However, it thermally decomposes explosively by breaking one of the central Cl–O bonds, producing the radicals ClO3 and ClO4 which immediately decompose to the elements through intermediate oxides. Chlorine oxoacids and oxyanions Chlorine forms four oxoacids: hypochlorous acid (HOCl), chlorous acid (HOClO), chloric acid (HOClO2), and perchloric acid (HOClO3). As the following equilibrium constants show, chlorine is much more stable towards disproportionation in acidic solutions than in alkaline solutions: Cl2 + H2O ⇌ HOCl + H+ + Cl− (Kac = 4.2 × 10−4 mol2 l−2); Cl2 + 2 OH− ⇌ OCl− + H2O + Cl− (Kalk = 7.5 × 1015 mol−1 l). The hypochlorite ions also disproportionate further to produce chloride and chlorate (3 ClO− ⟶ 2 Cl− + ClO3−) but this reaction is quite slow at temperatures below 70 °C in spite of the very favourable equilibrium constant of 1027. The chlorate ions may themselves disproportionate to form chloride and perchlorate (4 ClO3− ⟶ Cl− + 3 ClO4−) but this is still very slow even at 100 °C despite the very favourable equilibrium constant of 1020. The rates of reaction for the chlorine oxyanions increase as the oxidation state of chlorine decreases. The strengths of the chlorine oxyacids increase very quickly as the oxidation state of chlorine increases due to the increasing delocalisation of charge over more and more oxygen atoms in their conjugate bases. Most of the chlorine oxoacids may be produced by exploiting these disproportionation reactions. Hypochlorous acid (HOCl) is highly reactive and quite unstable; its salts are mostly used for their bleaching and sterilising abilities. They are very strong oxidising agents, transferring an oxygen atom to most inorganic species. 
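To put the two equilibrium constants quoted above on a common footing, they can be converted to standard Gibbs energies via ΔG° = −RT ln K. The minimal Python sketch below does this at an assumed temperature of 25 °C, using the numerical constants exactly as given in the text.

import math

R, T = 8.314, 298.15   # J/(mol K); room temperature is an assumption

# Equilibrium constants quoted above for chlorine disproportionation
K_acid = 4.2e-4        # Cl2 + H2O   <=> HOCl + H+ + Cl-
K_alk  = 7.5e15        # Cl2 + 2 OH- <=> OCl- + H2O + Cl-

for name, K in (("acidic", K_acid), ("alkaline", K_alk)):
    dG = -R * T * math.log(K) / 1000.0   # kJ/mol
    print(f"{name:8s}: K = {K:.1e}, dG = {dG:+.1f} kJ/mol")
# acidic:   dG is about +19 kJ/mol (equilibrium lies to the left)
# alkaline: dG is about -91 kJ/mol (disproportionation strongly favoured)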
Chlorous acid (HOClO) is even more unstable and cannot be isolated or concentrated without decomposition: it is known from the decomposition of aqueous chlorine dioxide. However, sodium chlorite is a stable salt and is useful for bleaching and stripping textiles, as an oxidising agent, and as a source of chlorine dioxide. Chloric acid (HOClO2) is a strong acid that is quite stable in cold water up to 30% concentration, but on warming gives chlorine and chlorine dioxide. Evaporation under reduced pressure allows it to be concentrated further to about 40%, but then it decomposes to perchloric acid, chlorine, oxygen, water, and chlorine dioxide. Its most important salt is sodium chlorate, mostly used to make chlorine dioxide to bleach paper pulp. The decomposition of chlorate to chloride and oxygen is a common way to produce oxygen in the laboratory on a small scale. Chloride and chlorate may comproportionate to form chlorine as follows: ClO3− + 5 Cl− + 6 H+ ⟶ 3 Cl2 + 3 H2O. Perchlorates and perchloric acid (HOClO3) are the most stable oxo-compounds of chlorine, in keeping with the fact that chlorine compounds are most stable when the chlorine atom is in its lowest (−1) or highest (+7) possible oxidation states. Perchloric acid and aqueous perchlorates are vigorous and sometimes violent oxidising agents when heated, in stark contrast to their mostly inactive nature at room temperature, a consequence of the high activation energies of these reactions. Perchlorates are made by electrolytically oxidising sodium chlorate, and perchloric acid is made by reacting anhydrous sodium perchlorate or barium perchlorate with concentrated hydrochloric acid, filtering away the chloride precipitated and distilling the filtrate to concentrate it. Anhydrous perchloric acid is a colourless mobile liquid that is sensitive to shock and explodes on contact with most organic compounds, sets hydrogen iodide and thionyl chloride on fire and even oxidises silver and gold. Although perchlorate is a weak ligand, weaker than water, a few compounds involving coordinated ClO4− are known. Organochlorine compounds Like the other carbon–halogen bonds, the C–Cl bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the chloride anion. Due to the difference of electronegativity between chlorine (3.16) and carbon (2.55), the carbon in a C–Cl bond is electron-deficient and thus electrophilic. Chlorination modifies the physical properties of hydrocarbons in several ways: chlorocarbons are typically denser than water due to the higher atomic weight of chlorine versus hydrogen, and aliphatic organochlorides are alkylating agents because chloride is a leaving group. Alkanes and aryl alkanes may be chlorinated under free-radical conditions, with UV light. However, the extent of chlorination is difficult to control: the reaction is not regioselective and often results in a mixture of various isomers with different degrees of chlorination, though this may be permissible if the products are easily separated. Aryl chlorides may be prepared by the Friedel-Crafts halogenation, using chlorine and a Lewis acid catalyst. The haloform reaction, using chlorine and sodium hydroxide, is also able to generate alkyl halides from methyl ketones and related compounds. Chlorine adds to the multiple bonds on alkenes and alkynes as well, giving di- or tetrachloro compounds. 
However, due to the expense and reactivity of chlorine, organochlorine compounds are more commonly produced by using hydrogen chloride, or with chlorinating agents such as phosphorus pentachloride (PCl5) or thionyl chloride (SOCl2). The last is very convenient in the laboratory because all side products are gaseous and do not have to be distilled out. Many organochlorine compounds have been isolated from natural sources ranging from bacteria to humans. Chlorinated organic compounds are found in nearly every class of biomolecules including alkaloids, terpenes, amino acids, flavonoids, steroids, and fatty acids. Organochlorides, including dioxins, are produced in the high temperature environment of forest fires, and dioxins have been found in the preserved ashes of lightning-ignited fires that predate synthetic dioxins. In addition, a variety of simple chlorinated hydrocarbons including dichloromethane, chloroform, and carbon tetrachloride have been isolated from marine algae. A majority of the chloromethane in the environment is produced naturally by biological decomposition, forest fires, and volcanoes. Some types of organochlorides, though not all, have significant toxicity to plants or animals, including humans. Dioxins, produced when organic matter is burned in the presence of chlorine, and some insecticides, such as DDT, are persistent organic pollutants which pose dangers when they are released into the environment. For example, DDT, which was widely used to control insects in the mid 20th century, also accumulates in food chains, and causes reproductive problems (e.g., eggshell thinning) in certain bird species. Due to the ready homolytic fission of the C–Cl bond to create chlorine radicals in the upper atmosphere, chlorofluorocarbons have been phased out due to the harm they do to the ozone layer. Occurrence and production Chlorine is too reactive to occur as the free element in nature but is very abundant in the form of its chloride salts. It is the twenty-first most abundant element in Earth's crust and makes up 126 parts per million of it, through the large deposits of chloride minerals, especially sodium chloride, that have been evaporated from water bodies. All of these pale in comparison to the reserves of chloride ions in seawater: smaller amounts at higher concentrations occur in some inland seas and underground brine wells, such as the Great Salt Lake in Utah and the Dead Sea in Israel. Small batches of chlorine gas are prepared in the laboratory by combining hydrochloric acid and manganese dioxide, but the need rarely arises due to its ready availability. In industry, elemental chlorine is usually produced by the electrolysis of sodium chloride dissolved in water. This method, the chloralkali process industrialized in 1892, now provides most industrial chlorine gas. Along with chlorine, the method yields hydrogen gas and sodium hydroxide, which is the most valuable product. The process proceeds according to the following chemical equation: 2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH The electrolysis of chloride solutions all proceed according to the following equations: Cathode: 2 H2O + 2 e− → H2 + 2 OH− Anode: 2 Cl− → Cl2 + 2 e− In diaphragm cell electrolysis, an asbestos (or polymer-fiber) diaphragm separates a cathode and an anode, preventing the chlorine forming at the anode from re-mixing with the sodium hydroxide and the hydrogen formed at the cathode. 
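Because one Cl2 molecule is released at the anode for every two electrons passed, Faraday's law gives a quick estimate of how much chlorine and sodium hydroxide a cell produces. The sketch below is illustrative only: the cell current and run time are hypothetical example values, and it assumes an idealised 100% current efficiency, whereas a real plant would run somewhat below that.

# Faraday's-law estimate of chloralkali cell output (idealised, 100% current efficiency).
F = 96485.0          # C per mole of electrons
M_Cl2 = 70.90        # g/mol
M_NaOH = 40.00       # g/mol

current = 100_000.0  # A, hypothetical cell current
hours = 24.0         # hypothetical run time

charge = current * hours * 3600.0        # coulombs passed
mol_e = charge / F                       # moles of electrons
cl2_kg = mol_e / 2 * M_Cl2 / 1000.0      # 2 e- per Cl2 at the anode
naoh_kg = mol_e * M_NaOH / 1000.0        # 1 e- per NaOH (and per 1/2 H2) at the cathode

print(f"Cl2: {cl2_kg:.0f} kg, NaOH: {naoh_kg:.0f} kg per day")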
The salt solution (brine) is continuously fed to the anode compartment and flows through the diaphragm to the cathode compartment, where the caustic alkali is produced and the brine is partially depleted. Diaphragm methods produce dilute and slightly impure alkali, but they are not burdened with the problem of mercury disposal and they are more energy efficient. Membrane cell electrolysis employs a permeable membrane as an ion exchanger. Saturated sodium (or potassium) chloride solution is passed through the anode compartment, leaving it at a lower concentration. This method also produces very pure sodium (or potassium) hydroxide but has the disadvantage of requiring very pure brine at high concentrations. In the Deacon process, hydrogen chloride recovered from the production of organochlorine compounds is converted back to chlorine. The process relies on oxidation using oxygen: 4 HCl + O2 → 2 Cl2 + 2 H2O. The reaction requires a catalyst. As introduced by Deacon, early catalysts were based on copper. Commercial processes, such as the Mitsui MT-Chlorine Process, have switched to chromium and ruthenium-based catalysts. The chlorine produced is available in cylinders in sizes ranging from 450 g to 70 kg, as well as drums (865 kg), tank wagons (15 tonnes on roads; 27–90 tonnes by rail), and barges (600–1200 tonnes). Applications Sodium chloride is the most common chlorine compound, and is the main source of chlorine for the chemical industry. About 15,000 chlorine-containing compounds are commercially traded, including such diverse compounds as chlorinated methanes and ethanes, vinyl chloride, polyvinyl chloride (PVC), aluminium trichloride for catalysis, and the chlorides of magnesium, titanium, zirconium, and hafnium, which are precursors for producing the pure form of those elements. Quantitatively, of all elemental chlorine produced, about 63% is used in the manufacture of organic compounds, 18% in the manufacture of inorganic chlorine compounds, and the remaining 19% for bleaches and disinfection products. The most significant organic compounds in terms of production volume are 1,2-dichloroethane and vinyl chloride, intermediates in the production of PVC. Other particularly important organochlorines are methyl chloride, methylene chloride, chloroform, vinylidene chloride, trichloroethylene, perchloroethylene, allyl chloride, epichlorohydrin, chlorobenzene, dichlorobenzenes, and trichlorobenzenes. The major inorganic compounds include HCl, Cl2O, HOCl, NaClO3, chlorinated isocyanurates, AlCl3, SiCl4, SnCl4, PCl3, PCl5, POCl3, AsCl3, SbCl3, SbCl5, BiCl3, and ZnCl2. Sanitation, disinfection, and antisepsis Combating putrefaction In France (as elsewhere), animal intestines were processed to make musical instrument strings, goldbeater's skin and other products. This was done in "gut factories" (boyauderies), and it was an odiferous and unhealthy process. In or about 1820, the Société d'encouragement pour l'industrie nationale offered a prize for the discovery of a method, chemical or mechanical, for separating the peritoneal membrane of animal intestines without putrefaction. The prize was won by Antoine-Germain Labarraque, a 44-year-old French chemist and pharmacist who had discovered that Berthollet's chlorinated bleaching solutions ("Eau de Javel") not only destroyed the smell of putrefaction from decomposing animal tissue, but also actually retarded the decomposition. 
Labarraque's research resulted in the use of chlorides and hypochlorites of lime (calcium hypochlorite) and of sodium (sodium hypochlorite) in the boyauderies. The same chemicals were found to be useful in the routine disinfection and deodorization of latrines, sewers, markets, abattoirs, anatomical theatres, and morgues. They were successful in hospitals, lazarets, prisons, infirmaries (both on land and at sea), magnaneries, stables, cattle-sheds, etc.; and they were beneficial during exhumations, embalming, outbreaks of epidemic disease, fever, and blackleg in cattle. Disinfection Labarraque's chlorinated lime and soda solutions have been advocated since 1828 to prevent infection (called "contagious infection", presumed to be transmitted by "miasmas"), and to treat putrefaction of existing wounds, including septic wounds. In his 1828 work, Labarraque recommended that doctors breathe chlorine, wash their hands in chlorinated lime, and even sprinkle chlorinated lime about the patients' beds in cases of "contagious infection". In 1828, the contagion of infections was well known, even though the agency of the microbe was not discovered until more than half a century later. During the Paris cholera outbreak of 1832, large quantities of so-called chloride of lime were used to disinfect the capital. This was not simply modern calcium chloride, but chlorine gas dissolved in lime-water (dilute calcium hydroxide) to form calcium hypochlorite (chlorinated lime). Labarraque's discovery helped to remove the terrible stench of decay from hospitals and dissecting rooms, and by doing so, effectively deodorised the Latin Quarter of Paris. These "putrid miasmas" were thought by many to cause the spread of "contagion" and "infection" – both words used before the germ theory of infection. Chloride of lime was used for destroying odors and "putrid matter". One source claims chloride of lime was used by Dr. John Snow to disinfect water from the cholera-contaminated well that was feeding the Broad Street pump in 1854 London, though three other reputable sources that describe that famous cholera epidemic do not mention the incident. One reference makes it clear that chloride of lime was used to disinfect the offal and filth in the streets surrounding the Broad Street pump – a common practice in mid-nineteenth century England. Semmelweis and experiments with antisepsis Perhaps the most famous application of Labarraque's chlorine and chemical base solutions was in 1847, when Ignaz Semmelweis used chlorine-water (chlorine dissolved in pure water, which was cheaper than chlorinated lime solutions) to disinfect the hands of Austrian doctors, which Semmelweis noticed still carried the stench of decomposition from the dissection rooms to the patient examination rooms. Long before the germ theory of disease, Semmelweis theorized that "cadaveric particles" were transmitting decay from fresh medical cadavers to living patients, and he used the well-known "Labarraque's solutions" as the only known method to remove the smell of decay and tissue decomposition (which he found that soap did not). The solutions proved to be far more effective antiseptics than soap (Semmelweis was also aware of their greater efficacy, but not the reason), and this resulted in Semmelweis's celebrated success in stopping the transmission of childbed fever ("puerperal fever") in the maternity wards of Vienna General Hospital in Austria in 1847. 
Much later, during World War I in 1916, a standardized and diluted modification of Labarraque's solution containing hypochlorite (0.5%) and boric acid as an acidic stabilizer was developed by Henry Drysdale Dakin (who gave full credit to Labarraque's prior work in this area). Called Dakin's solution, the method of wound irrigation with chlorinated solutions allowed antiseptic treatment of a wide variety of open wounds, long before the modern antibiotic era. A modified version of this solution continues to be employed in wound irrigation in modern times, where it remains effective against bacteria that are resistant to multiple antibiotics (see Century Pharmaceuticals). Public sanitation The first continuous application of chlorination to drinking U.S. water was installed in Jersey City, New Jersey, in 1908. By 1918, the US Department of Treasury called for all drinking water to be disinfected with chlorine. Chlorine is presently an important chemical for water purification (such as in water treatment plants), in disinfectants, and in bleach. Even small water supplies are now routinely chlorinated. Chlorine is usually used (in the form of hypochlorous acid) to kill bacteria and other microbes in drinking water supplies and public swimming pools. In most private swimming pools, chlorine itself is not used, but rather sodium hypochlorite, formed from chlorine and sodium hydroxide, or solid tablets of chlorinated isocyanurates. The drawback of using chlorine in swimming pools is that the chlorine reacts with the amino acids in proteins in human hair and skin. Contrary to popular belief, the distinctive "chlorine aroma" associated with swimming pools is not the result of elemental chlorine itself, but of chloramine, a chemical compound produced by the reaction of free dissolved chlorine with amines in organic substances including those in urine and sweat. As a disinfectant in water, chlorine is more than three times as effective against Escherichia coli as bromine, and more than six times as effective as iodine. Increasingly, monochloramine itself is being directly added to drinking water for purposes of disinfection, a process known as chloramination. It is often impractical to store and use poisonous chlorine gas for water treatment, so alternative methods of adding chlorine are used. These include hypochlorite solutions, which gradually release chlorine into the water, and compounds like sodium dichloro-s-triazinetrione (dihydrate or anhydrous), sometimes referred to as "dichlor", and trichloro-s-triazinetrione, sometimes referred to as "trichlor". These compounds are stable while solid and may be used in powdered, granular, or tablet form. When added in small amounts to pool water or industrial water systems, the chlorine atoms hydrolyze from the rest of the molecule, forming hypochlorous acid (HOCl), which acts as a general biocide, killing germs, microorganisms, algae, and so on. Use as a weapon World War I Chlorine gas, also known as bertholite, was first used as a weapon in World War I by Germany on April 22, 1915, in the Second Battle of Ypres. As described by the soldiers, it had the distinctive smell of a mixture of pepper and pineapple. It also tasted metallic and stung the back of the throat and chest. Chlorine reacts with water in the mucosa of the lungs to form hydrochloric acid, destructive to living tissue and potentially lethal. 
Human respiratory systems can be protected from chlorine gas by gas masks with activated charcoal or other filters, which makes chlorine gas much less lethal than other chemical weapons. It was pioneered by a German scientist later to be a Nobel laureate, Fritz Haber of the Kaiser Wilhelm Institute in Berlin, in collaboration with German chemical companies (later merged into the IG Farben conglomerate), which developed methods for discharging chlorine gas against an entrenched enemy. After its first use, both sides in the conflict used chlorine as a chemical weapon, but it was soon replaced by the more deadly phosgene and mustard gas. Middle East Chlorine gas was also used during the Iraq War in Anbar Province in 2007, with insurgents packing truck bombs with mortar shells and chlorine tanks. The attacks killed two people from the explosives and sickened more than 350. Most of the deaths were caused by the force of the explosions rather than the effects of chlorine since the toxic gas is readily dispersed and diluted in the atmosphere by the blast. In some bombings, over a hundred civilians were hospitalized due to breathing difficulties. The Iraqi authorities tightened security for elemental chlorine, which is essential for providing safe drinking water to the population. On 23 October 2014, it was reported that the Islamic State of Iraq and the Levant had used chlorine gas in the town of Duluiyah, Iraq. Laboratory analysis of clothing and soil samples confirmed the use of chlorine gas against Kurdish Peshmerga Forces in a vehicle-borne improvised explosive device attack on 23 January 2015 at the Highway 47 Kiske Junction near Mosul. Another country in the Middle East, Syria, has used chlorine as a chemical weapon delivered from barrel bombs and rockets. In 2016, the OPCW-UN Joint Investigative Mechanism concluded that the Syrian government used chlorine as a chemical weapon in three separate attacks. Later investigations from the OPCW's Investigation and Identification Team concluded that the Syrian Air Force was responsible for chlorine attacks in 2017 and 2018. Biological role The chloride anion is an essential nutrient for metabolism. Chlorine is needed for the production of hydrochloric acid in the stomach and in cellular pump functions. The main dietary source is table salt, or sodium chloride. Overly low or high concentrations of chloride in the blood are examples of electrolyte disturbances. Hypochloremia (having too little chloride) rarely occurs in the absence of other abnormalities. It is sometimes associated with hypoventilation and can accompany chronic respiratory acidosis. Hyperchloremia (having too much chloride) usually does not produce symptoms. When symptoms do occur, they tend to resemble those of hypernatremia (having too much sodium). Reduction in blood chloride leads to cerebral dehydration; symptoms are most often caused by rapid rehydration which results in cerebral edema. Hyperchloremia can affect oxygen transport. Hazards Chlorine is a toxic gas that attacks the respiratory system, eyes, and skin. Because it is denser than air, it tends to accumulate at the bottom of poorly ventilated spaces. Chlorine gas is a strong oxidizer, which may react with flammable materials. Chlorine is detectable with measuring devices in concentrations as low as 0.2 parts per million (ppm), and by smell at 3 ppm. Coughing and vomiting may occur at 30 ppm and lung damage at 60 ppm. About 1000 ppm can be fatal after a few deep breaths of the gas. 
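Gas-phase exposure figures for chlorine are quoted either in ppm (by volume) or in mg/m3; the two are related through the molar volume of the gas. The minimal sketch below assumes ideal-gas behaviour at 25 °C and 1 atm (molar volume about 24.45 L/mol), which reproduces the rule of thumb that 1 ppm of Cl2 is roughly 3 mg/m3, consistent with the occupational limits discussed below.

# Convert a chlorine concentration in ppm (by volume) to mg/m3, assuming an
# ideal gas at 25 °C and 1 atm (molar volume 24.45 L/mol).
M_CL2 = 70.90          # g/mol
MOLAR_VOLUME = 24.45   # L/mol at 25 °C, 1 atm (assumed conditions)

def ppm_to_mg_per_m3(ppm: float) -> float:
    return ppm * M_CL2 / MOLAR_VOLUME

for ppm in (0.5, 1, 3, 30, 1000):
    print(f"{ppm:7.1f} ppm  ->  {ppm_to_mg_per_m3(ppm):8.1f} mg/m3")
# 1 ppm works out to about 2.9 mg/m3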
The IDLH (immediately dangerous to life and health) concentration is 10 ppm. Breathing lower concentrations can aggravate the respiratory system and exposure to the gas can irritate the eyes. When chlorine is inhaled at concentrations greater than 30 ppm, it reacts with water within the lungs, producing hydrochloric acid (HCl) and hypochlorous acid (HOCl). When used at specified levels for water disinfection, the reaction of chlorine with water is not a major concern for human health. Other materials present in the water may generate disinfection by-products that are associated with negative effects on human health. In the United States, the Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for elemental chlorine at 1 ppm, or 3 mg/m3. The National Institute for Occupational Safety and Health has designated a recommended exposure limit of 0.5 ppm over 15 minutes. In the home, accidents occur when hypochlorite bleach solutions come into contact with certain acidic drain-cleaners to produce chlorine gas. Hypochlorite bleach (a popular laundry additive) combined with ammonia (another popular laundry additive) produces chloramines, another toxic group of chemicals. Chlorine-induced cracking in structural materials Chlorine is widely used for purifying water, especially potable water supplies and water used in swimming pools. Several catastrophic collapses of swimming pool ceilings have occurred from chlorine-induced stress corrosion cracking of stainless steel suspension rods. Some polymers are also sensitive to attack, including acetal resin and polybutene. Both materials were used in hot and cold water domestic plumbing, and stress corrosion cracking caused widespread failures in the US in the 1980s and 1990s. Chlorine-iron fire The element iron can combine with chlorine at high temperatures in a strong exothermic reaction, creating a chlorine-iron fire. Chlorine-iron fires are a risk in chemical process plants, where much of the pipework that carries chlorine gas is made of steel. See also Chlorine cycle Chlorine gas poisoning Industrial gas Polymer degradation Reductive dechlorination External links Chlorine at The Periodic Table of Videos (University of Nottingham) Agency for Toxic Substances and Disease Registry: Chlorine Electrolytic production Production and liquefaction of chlorine Chlorine Production Using Mercury, Environmental Considerations and Alternatives National Pollutant Inventory – Chlorine National Institute for Occupational Safety and Health – Chlorine Page Chlorine Institute – Trade association representing the chlorine industry Chlorine Online – the web portal of Eurochlor – the business association of the European chlor-alkali industry
5668
https://en.wikipedia.org/wiki/Calcium
Calcium
Calcium is a chemical element with the symbol Ca and atomic number 20. As an alkaline earth metal, calcium is a reactive metal that forms a dark oxide-nitride layer when exposed to air. Its physical and chemical properties are most similar to its heavier homologues strontium and barium. It is the fifth most abundant element in Earth's crust, and the third most abundant metal, after iron and aluminium. The most common calcium compound on Earth is calcium carbonate, found in limestone and the fossilised remnants of early sea life; gypsum, anhydrite, fluorite, and apatite are also sources of calcium. The name derives from Latin calx "lime", which was obtained from heating limestone. Some calcium compounds were known to the ancients, though their chemistry was unknown until the seventeenth century. Pure calcium was isolated in 1808 via electrolysis of its oxide by Humphry Davy, who named the element. Calcium compounds are widely used in many industries: in foods and pharmaceuticals for calcium supplementation, in the paper industry as bleaches, as components in cement and electrical insulators, and in the manufacture of soaps. On the other hand, the metal in pure form has few applications due to its high reactivity; still, in small quantities it is often used as an alloying component in steelmaking, and sometimes, as a calcium–lead alloy, in making automotive batteries. Calcium is the most abundant metal and the fifth-most abundant element in the human body. As electrolytes, calcium ions (Ca2+) play a vital role in the physiological and biochemical processes of organisms and cells: in signal transduction pathways where they act as a second messenger; in neurotransmitter release from neurons; in contraction of all muscle cell types; as cofactors in many enzymes; and in fertilization. Calcium ions outside cells are important for maintaining the potential difference across excitable cell membranes, protein synthesis, and bone formation. Characteristics Classification Calcium is a very ductile silvery metal (sometimes described as pale yellow) whose properties are very similar to the heavier elements in its group, strontium, barium, and radium. A calcium atom has twenty electrons, arranged in the electron configuration [Ar]4s2. Like the other elements placed in group 2 of the periodic table, calcium has two valence electrons in the outermost s-orbital, which are very easily lost in chemical reactions to form a dipositive ion with the stable electron configuration of a noble gas, in this case argon. Hence, calcium is almost always divalent in its compounds, which are usually ionic. Hypothetical univalent salts of calcium would be stable with respect to their elements, but not to disproportionation to the divalent salts and calcium metal, because the enthalpy of formation of MX2 is much higher than those of the hypothetical MX. This occurs because of the much greater lattice energy afforded by the more highly charged Ca2+ cation compared to the hypothetical Ca+ cation. Calcium, strontium, barium, and radium are always considered to be alkaline earth metals; the lighter beryllium and magnesium, also in group 2 of the periodic table, are often included as well. 
Nevertheless, beryllium and magnesium differ significantly from the other members of the group in their physical and chemical behaviour: they behave more like aluminium and zinc respectively and have some of the weaker metallic character of the post-transition metals, which is why the traditional definition of the term "alkaline earth metal" excludes them. Physical properties Calcium metal melts at 842 °C and boils at 1494 °C; these values are higher than those for magnesium and strontium, the neighbouring group 2 metals. It crystallises in the face-centered cubic arrangement like strontium; above 450 °C, it changes to an anisotropic hexagonal close-packed arrangement like magnesium. Its density of 1.55 g/cm3 is the lowest in its group. Calcium is harder than lead but can be cut with a knife with effort. While calcium is a poorer conductor of electricity than copper or aluminium by volume, it is a better conductor by mass than both due to its very low density. While calcium is infeasible as a conductor for most terrestrial applications as it reacts quickly with atmospheric oxygen, its use as such in space has been considered. Chemical properties The chemistry of calcium is that of a typical heavy alkaline earth metal. For example, calcium spontaneously reacts with water more quickly than magnesium and less quickly than strontium to produce calcium hydroxide and hydrogen gas. It also reacts with the oxygen and nitrogen in the air to form a mixture of calcium oxide and calcium nitride. When finely divided, it spontaneously burns in air to produce the nitride. In bulk, calcium is less reactive: it quickly forms a hydration coating in moist air, but below 30% relative humidity it may be stored indefinitely at room temperature. Besides the simple oxide CaO, the peroxide CaO2 can be made by direct oxidation of calcium metal under a high pressure of oxygen, and there is some evidence for a yellow superoxide Ca(O2)2. Calcium hydroxide, Ca(OH)2, is a strong base, though it is not as strong as the hydroxides of strontium, barium or the alkali metals. All four dihalides of calcium are known. Calcium carbonate (CaCO3) and calcium sulfate (CaSO4) are particularly abundant minerals. Like strontium and barium, as well as the alkali metals and the divalent lanthanides europium and ytterbium, calcium metal dissolves directly in liquid ammonia to give a dark blue solution. Due to the large size of the calcium ion (Ca2+), high coordination numbers are common, up to 24 in some intermetallic compounds such as CaZn13. Calcium is readily complexed by oxygen chelates such as EDTA and polyphosphates, which are useful in analytic chemistry and removing calcium ions from hard water. In the absence of steric hindrance, smaller group 2 cations tend to form stronger complexes, but when large polydentate macrocycles are involved the trend is reversed. Although calcium is in the same group as magnesium and organomagnesium compounds are very commonly used throughout chemistry, organocalcium compounds are not similarly widespread because they are more difficult to make and more reactive, although they have recently been investigated as possible catalysts. Organocalcium compounds tend to be more similar to organoytterbium compounds due to the similar ionic radii of Yb2+ (102 pm) and Ca2+ (100 pm). Most of these compounds can only be prepared at low temperatures; bulky ligands tend to favor stability. 
For example, calcium dicyclopentadienyl, Ca(C5H5)2, must be made by directly reacting calcium metal with mercurocene or cyclopentadiene itself; replacing the C5H5 ligand with the bulkier C5(CH3)5 ligand on the other hand increases the compound's solubility, volatility, and kinetic stability. Isotopes Natural calcium is a mixture of five stable isotopes (40Ca, 42Ca, 43Ca, 44Ca, and 46Ca) and one isotope with a half-life so long that it can be considered stable for all practical purposes (48Ca, with a half-life of about 4.3 × 1019 years). Calcium is the first (lightest) element to have six naturally occurring isotopes. By far the most common isotope of calcium in nature is 40Ca, which makes up 96.941% of all natural calcium. It is produced in the silicon-burning process from fusion of alpha particles and is the heaviest stable nuclide with equal proton and neutron numbers; its occurrence is also supplemented slowly by the decay of primordial 40K. Adding another alpha particle leads to unstable 44Ti, which quickly decays via two successive electron captures to stable 44Ca; this makes up 2.806% of all natural calcium and is the second-most common isotope. The other four natural isotopes, 42Ca, 43Ca, 46Ca, and 48Ca, are significantly rarer, each comprising less than 1% of all natural calcium. The four lighter isotopes are mainly products of the oxygen-burning and silicon-burning processes, leaving the two heavier ones to be produced via neutron capture processes. 46Ca is mostly produced in a "hot" s-process, as its formation requires a rather high neutron flux to allow short-lived 45Ca to capture a neutron. 48Ca is produced by electron capture in the r-process in type Ia supernovae, where high neutron excess and low enough entropy ensures its survival. 46Ca and 48Ca are the first "classically stable" nuclides with a six-neutron or eight-neutron excess respectively. Although extremely neutron-rich for such a light element, 48Ca is very stable because it is a doubly magic nucleus, having 20 protons and 28 neutrons arranged in closed shells. Its beta decay to 48Sc is very hindered because of the gross mismatch of nuclear spin: 48Ca has zero nuclear spin, being even–even, while 48Sc has spin 6+, so the decay is forbidden by the conservation of angular momentum. While two excited states of 48Sc are available for decay as well, they are also forbidden due to their high spins. As a result, when 48Ca does decay, it does so by double beta decay to 48Ti instead, being the lightest nuclide known to undergo double beta decay. The heavy isotope 46Ca can also theoretically undergo double beta decay to 46Ti as well, but this has never been observed. The lightest and most common isotope 40Ca is also doubly magic and could undergo double electron capture to 40Ar, but this has likewise never been observed. Calcium is the only element to have two primordial doubly magic isotopes. The experimental lower limits for the half-lives of 40Ca and 46Ca are 5.9 × 1021 years and 2.8 × 1015 years respectively. Apart from the practically stable 48Ca, the longest lived radioisotope of calcium is 41Ca. It decays by electron capture to stable 41K with a half-life of about a hundred thousand years. Its existence in the early Solar System as an extinct radionuclide has been inferred from excesses of 41K: traces of 41Ca also still exist today, as it is a cosmogenic nuclide, continuously reformed through neutron activation of natural 40Ca. Many other calcium radioisotopes are known, ranging from 35Ca to 60Ca. 
These radioisotopes are all much shorter-lived than 41Ca, the most stable among them being 45Ca (half-life 163 days) and 47Ca (half-life 4.54 days). The isotopes lighter than 42Ca usually undergo beta plus decay to isotopes of potassium, and those heavier than 44Ca usually undergo beta minus decay to isotopes of scandium, although near the nuclear drip lines, proton emission and neutron emission begin to be significant decay modes as well. Like other elements, a variety of processes alter the relative abundance of calcium isotopes. The best studied of these processes is the mass-dependent fractionation of calcium isotopes that accompanies the precipitation of calcium minerals such as calcite, aragonite and apatite from solution. Lighter isotopes are preferentially incorporated into these minerals, leaving the surrounding solution enriched in heavier isotopes at a magnitude of roughly 0.025% per atomic mass unit (amu) at room temperature. Mass-dependent differences in calcium isotope composition are conventionally expressed by the ratio of two isotopes (usually 44Ca/40Ca) in a sample compared to the same ratio in a standard reference material (see the delta notation written out below). 44Ca/40Ca varies by about 1% among common earth materials. History Calcium compounds were known for millennia, although their chemical makeup was not understood until the 17th century. Lime as a building material and as plaster for statues was used as far back as around 7000 BC. The first dated lime kiln, from about 2500 BC, was found in Khafajah, Mesopotamia. At about the same time, plaster made from gypsum (CaSO4·2H2O) was being used in the Great Pyramid of Giza. This material would later be used for the plaster in the tomb of Tutankhamun. The ancient Romans instead used lime mortars made by heating limestone (CaCO3). The name "calcium" itself derives from the Latin word calx, "lime". Vitruvius noted that the lime that resulted was lighter than the original limestone, attributing this to the boiling away of the water. In 1755, Joseph Black proved that this was due to the loss of carbon dioxide, which as a gas had not been recognised by the ancient Romans. In 1789, Antoine Lavoisier suspected that lime might be an oxide of a fundamental chemical element. In his table of the elements, Lavoisier listed five "salifiable earths", i.e., ores that could be made to react with acids to produce salts (salis = salt, in Latin): chaux (calcium oxide), magnésie (magnesia, magnesium oxide), baryte (barium sulfate), alumine (alumina, aluminium oxide), and silice (silica, silicon dioxide). About these "elements", Lavoisier reasoned that they were probably not true elements at all but oxides of metals that had yet to be isolated. Calcium, along with its congeners magnesium, strontium, and barium, was first isolated by Humphry Davy in 1808. Following the work of Jöns Jakob Berzelius and Magnus Martin af Pontin on electrolysis, Davy isolated calcium and magnesium by putting a mixture of the respective metal oxides with mercury(II) oxide on a platinum plate which was used as the anode, the cathode being a platinum wire partially submerged into mercury. Electrolysis then gave calcium–mercury and magnesium–mercury amalgams, and distilling off the mercury gave the metal. However, pure calcium cannot be prepared in bulk by this method and a workable commercial process for its production was not found until over a century later. Occurrence and production At 3%, calcium is the fifth most abundant element in the Earth's crust, and the third most abundant metal behind aluminium and iron. It is also the fourth most abundant element in the lunar highlands.
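The convention of reporting a sample's 44Ca/40Ca ratio against a standard, mentioned in the isotope-fractionation discussion above, is normally written in per-mil delta notation; the exact reference material varies between laboratories, so the standard is left generic here:

```latex
\delta^{44/40}\mathrm{Ca} \;=\;
\left(
  \frac{\left(^{44}\mathrm{Ca}/^{40}\mathrm{Ca}\right)_{\mathrm{sample}}}
       {\left(^{44}\mathrm{Ca}/^{40}\mathrm{Ca}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000
```

The result is quoted in parts per thousand (per mil), so the roughly 1% spread in 44Ca/40Ca noted above corresponds to a range of about 10 per mil.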
Sedimentary calcium carbonate deposits pervade the Earth's surface as fossilized remains of past marine life; they occur in two forms, the rhombohedral calcite (more common) and the orthorhombic aragonite (forming in more temperate seas). Minerals of the first type include limestone, dolomite, marble, chalk, and Iceland spar; aragonite beds make up the Bahamas, the Florida Keys, and the Red Sea basins. Corals, sea shells, and pearls are mostly made up of calcium carbonate. Among the other important minerals of calcium are gypsum (CaSO4·2H2O), anhydrite (CaSO4), fluorite (CaF2), and apatite ([Ca5(PO4)3X], X = OH, Cl, or F). The major producers of calcium are China (about 10000 to 12000 tonnes per year), Russia (about 6000 to 8000 tonnes per year), and the United States (about 2000 to 4000 tonnes per year). Canada and France are also among the minor producers. In 2005, about 24000 tonnes of calcium were produced; about half of the world's extracted calcium is used by the United States, with about 80% of the output used each year. In Russia and China, Davy's method of electrolysis is still used, but is instead applied to molten calcium chloride. Since calcium is less reactive than strontium or barium, the oxide–nitride coating that results in air is stable and lathe machining and other standard metallurgical techniques are suitable for calcium. In the United States and Canada, calcium is instead produced by reducing lime with aluminium at high temperatures. Geochemical cycling Calcium cycling provides a link between tectonics, climate, and the carbon cycle. In the simplest terms, uplift of mountains exposes calcium-bearing rocks such as some granites to chemical weathering and releases Ca2+ into surface water. These ions are transported to the ocean where they react with dissolved CO2 to form limestone (CaCO3), which in turn settles to the sea floor where it is incorporated into new rocks. Dissolved CO2, along with carbonate and bicarbonate ions, are collectively termed "dissolved inorganic carbon" (DIC). The actual reaction is more complicated and involves the bicarbonate ion (HCO3−) that forms when CO2 reacts with water at seawater pH: Ca2+ + 2 HCO3− → CaCO3↓ + CO2 + H2O. At seawater pH, most of the CO2 is immediately converted back into bicarbonate (HCO3−). The reaction results in a net transport of one molecule of CO2 from the ocean/atmosphere into the lithosphere. The result is that each Ca2+ ion released by chemical weathering ultimately removes one CO2 molecule from the surficial system (atmosphere, ocean, soils and living organisms), storing it in carbonate rocks where it is likely to stay for hundreds of millions of years. The weathering of calcium from rocks thus scrubs CO2 from the ocean and atmosphere, exerting a strong long-term effect on climate. Uses The largest use of metallic calcium is in steelmaking, due to its strong chemical affinity for oxygen and sulfur. Its oxides and sulfides, once formed, give liquid lime aluminate and sulfide inclusions in steel which float out; on treatment, these inclusions disperse throughout the steel and become small and spherical, improving castability, cleanliness and general mechanical properties. Calcium is also used in maintenance-free automotive batteries, in which the use of 0.1% calcium–lead alloys instead of the usual antimony–lead alloys leads to lower water loss and lower self-discharging. Due to the risk of expansion and cracking, aluminium is sometimes also incorporated into these alloys.
These lead–calcium alloys are also used in casting, replacing lead–antimony alloys. Calcium is also used to strengthen aluminium alloys used for bearings, for the control of graphitic carbon in cast iron, and to remove bismuth impurities from lead. Calcium metal is found in some drain cleaners, where it functions to generate heat and calcium hydroxide that saponifies the fats and liquefies the proteins (for example, those in hair) that block drains. Besides metallurgy, the reactivity of calcium is exploited to remove nitrogen from high-purity argon gas and as a getter for oxygen and nitrogen. It is also used as a reducing agent in the production of chromium, zirconium, thorium, and uranium. It can also be used to store hydrogen gas, as it reacts with hydrogen to form solid calcium hydride, from which the hydrogen can easily be re-extracted. Calcium isotope fractionation during mineral formation has led to several applications of calcium isotopes. In particular, the 1997 observation by Skulan and DePaolo that calcium minerals are isotopically lighter than the solutions from which the minerals precipitate is the basis of analogous applications in medicine and in paleoceanography. In animals with skeletons mineralized with calcium, the calcium isotopic composition of soft tissues reflects the relative rate of formation and dissolution of skeletal mineral. In humans, changes in the calcium isotopic composition of urine have been shown to be related to changes in bone mineral balance. When the rate of bone formation exceeds the rate of bone resorption, the 44Ca/40Ca ratio in soft tissue rises and vice versa. Because of this relationship, calcium isotopic measurements of urine or blood may be useful in the early detection of metabolic bone diseases like osteoporosis. A similar system exists in seawater, where 44Ca/40Ca tends to rise when the rate of removal of Ca2+ by mineral precipitation exceeds the input of new calcium into the ocean. In 1997, Skulan and DePaolo presented the first evidence of change in seawater 44Ca/40Ca over geologic time, along with a theoretical explanation of these changes. More recent papers have confirmed this observation, demonstrating that seawater Ca2+ concentration is not constant, and that the ocean is never in a "steady state" with respect to calcium input and output. This has important climatological implications, as the marine calcium cycle is closely tied to the carbon cycle. Many calcium compounds are used in food, as pharmaceuticals, and in medicine, among others. For example, calcium and phosphorus are supplemented in foods through the addition of calcium lactate, calcium diphosphate, and tricalcium phosphate. The last is also used as a polishing agent in toothpaste and in antacids. Calcium lactobionate is a white powder that is used as a suspending agent for pharmaceuticals. In baking, calcium phosphate is used as a leavening agent. Calcium sulfite is used as a bleach in papermaking and as a disinfectant, calcium silicate is used as a reinforcing agent in rubber, and calcium acetate is a component of liming rosin and is used to make metallic soaps and synthetic resins. Calcium is on the World Health Organization's List of Essential Medicines. Food sources Foods rich in calcium include dairy products, such as yogurt and cheese, sardines, salmon, soy products, kale, and fortified breakfast cereals. Because of concerns for long-term adverse side effects, including calcification of arteries and kidney stones, both the U.S. 
Institute of Medicine (IOM) and the European Food Safety Authority (EFSA) set Tolerable Upper Intake Levels (ULs) for combined dietary and supplemental calcium. From the IOM, people of ages 9–18 years are not to exceed 3 g/day combined intake; for ages 19–50, not to exceed 2.5 g/day; for ages 51 and older, not to exceed 2 g/day. EFSA set the UL for all adults at 2.5 g/day, but decided the information for children and adolescents was not sufficient to determine ULs. Biological and pathological role Function Calcium is an essential element needed in large quantities. The Ca2+ ion acts as an electrolyte and is vital to the health of the muscular, circulatory, and digestive systems; is indispensable to the building of bone; and supports synthesis and function of blood cells. For example, it regulates the contraction of muscles, nerve conduction, and the clotting of blood. As a result, intra- and extracellular calcium levels are tightly regulated by the body. Calcium can play this role because the Ca2+ ion forms stable coordination complexes with many organic compounds, especially proteins; it also forms compounds with a wide range of solubilities, enabling the formation of the skeleton. Binding Calcium ions may be complexed by proteins through binding the carboxyl groups of glutamic acid or aspartic acid residues; through interacting with phosphorylated serine, tyrosine, or threonine residues; or by being chelated by γ-carboxylated amino acid residues. Trypsin, a digestive enzyme, uses the first method; osteocalcin, a bone matrix protein, uses the third. Some other bone matrix proteins such as osteopontin and bone sialoprotein use both the first and the second. Direct activation of enzymes by binding calcium is common; some other enzymes are activated by noncovalent association with direct calcium-binding enzymes. Calcium also binds to the phospholipid layer of the cell membrane, anchoring proteins associated with the cell surface. Solubility As an example of the wide range of solubility of calcium compounds, monocalcium phosphate is very soluble in water, 85% of extracellular calcium is as dicalcium phosphate with a solubility of 2.00 mM, and the hydroxyapatite of bones in an organic matrix is tricalcium phosphate with a solubility of 1000 μM. Nutrition Calcium is a common constituent of multivitamin dietary supplements, but the composition of calcium complexes in supplements may affect its bioavailability which varies by solubility of the salt involved: calcium citrate, malate, and lactate are highly bioavailable, while the oxalate is less. Other calcium preparations include calcium carbonate, calcium citrate malate, and calcium gluconate. The intestine absorbs about one-third of calcium eaten as the free ion, and plasma calcium level is then regulated by the kidneys. Hormonal regulation of bone formation and serum levels Parathyroid hormone and vitamin D promote the formation of bone by allowing and enhancing the deposition of calcium ions there, allowing rapid bone turnover without affecting bone mass or mineral content. When plasma calcium levels fall, cell surface receptors are activated and the secretion of parathyroid hormone occurs; it then proceeds to stimulate the entry of calcium into the plasma pool by taking it from targeted kidney, gut, and bone cells, with the bone-forming action of parathyroid hormone being antagonised by calcitonin, whose secretion increases with increasing plasma calcium levels. Abnormal serum levels Excess intake of calcium may cause hypercalcemia. 
However, because calcium is absorbed rather inefficiently by the intestines, high serum calcium is more likely caused by excessive secretion of parathyroid hormone (PTH) or possibly by excessive intake of vitamin D, both of which facilitate calcium absorption. All these conditions result in excess calcium salts being deposited in the heart, blood vessels, or kidneys. Symptoms include anorexia, nausea, vomiting, memory loss, confusion, muscle weakness, increased urination, dehydration, and metabolic bone disease. Chronic hypercalcaemia typically leads to calcification of soft tissue and its serious consequences: for example, calcification can cause loss of elasticity of vascular walls and disruption of laminar blood flow—and thence to plaque rupture and thrombosis. Conversely, inadequate calcium or vitamin D intakes may result in hypocalcemia, often caused also by inadequate secretion of parathyroid hormone or defective PTH receptors in cells. Symptoms include neuromuscular excitability, which potentially causes tetany and disruption of conductivity in cardiac tissue. Bone disease As calcium is required for bone development, many bone diseases can be traced to the organic matrix or the hydroxyapatite in molecular structure or organization of bone. Osteoporosis is a reduction in mineral content of bone per unit volume, and can be treated by supplementation of calcium, vitamin D, and bisphosphonates. Inadequate amounts of calcium, vitamin D, or phosphates can lead to softening of bones, called osteomalacia. Safety Metallic calcium Because calcium reacts exothermically with water and acids, calcium metal coming into contact with bodily moisture results in severe corrosive irritation. When swallowed, calcium metal has the same effect on the mouth, oesophagus, and stomach, and can be fatal. However, long-term exposure is not known to have distinct adverse effects. References Bibliography Chemical elements Alkaline earth metals Dietary minerals Dietary supplements Reducing agents Sodium channel blockers World Health Organization essential medicines Chemical elements with face-centered cubic structure
5669
https://en.wikipedia.org/wiki/Chromium
Chromium
Chromium is a chemical element with the symbol Cr and atomic number 24. It is the first element in group 6. It is a steely-grey, lustrous, hard, and brittle transition metal. Chromium metal is valued for its high corrosion resistance and hardness. A major development in steel production was the discovery that steel could be made highly resistant to corrosion and discoloration by adding metallic chromium to form stainless steel. Stainless steel and chrome plating (electroplating with chromium) together comprise 85% of the commercial use. Chromium is also greatly valued as a metal that is able to be highly polished while resisting tarnishing. Polished chromium reflects almost 70% of the visible spectrum, and almost 90% of infrared light. The name of the element is derived from the Greek word χρῶμα, chrōma, meaning color, because many chromium compounds are intensely colored. Industrial production of chromium proceeds from chromite ore (mostly FeCr2O4) to produce ferrochromium, an iron-chromium alloy, by means of aluminothermic or silicothermic reactions. Ferrochromium is then used to produce alloys such as stainless steel. Pure chromium metal is produced by a different process: roasting and leaching of chromite to separate it from iron, followed by reduction with carbon and then aluminium. In the United States, trivalent chromium (Cr(III)) ion is considered an essential nutrient in humans for insulin, sugar, and lipid metabolism. However, in 2014, the European Food Safety Authority, acting for the European Union, concluded that there was insufficient evidence for chromium to be recognized as essential. While chromium metal and Cr(III) ions are considered non-toxic, hexavalent chromium, Cr(VI), is toxic and carcinogenic. According to the European Chemicals Agency (ECHA), chromium trioxide that is used in industrial electroplating processes is a "substance of very high concern" (SVHC). Abandoned chromium production sites often require environmental cleanup. Physical properties Atomic Chromium is the fourth transition metal found on the periodic table, and has an electron configuration of [Ar] 3d5 4s1. It is also the first element in the periodic table whose ground-state electron configuration violates the Aufbau principle. This occurs again later in the periodic table with other elements and their electron configurations, such as copper, niobium, and molybdenum. This occurs because electrons in the same orbital repel each other due to their like charges. In the previous elements, the energetic cost of promoting an electron to the next higher energy level is too great to compensate for that released by lessening inter-electronic repulsion. However, in the 3d transition metals, the energy gap between the 3d and the next-higher 4s subshell is very small, and because the 3d subshell is more compact than the 4s subshell, inter-electron repulsion is smaller between 4s electrons than between 3d electrons. This lowers the energetic cost of promotion and increases the energy released by it, so that the promotion becomes energetically feasible and one or even two electrons are always promoted to the 4s subshell. (Similar promotions happen for every transition metal atom but one, palladium.) Chromium is the first element in the 3d series where the 3d electrons start to sink into the nucleus; they thus contribute less to metallic bonding, and hence the melting and boiling points and the enthalpy of atomisation of chromium are lower than those of the preceding element vanadium. 
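The Aufbau exception described above can be illustrated with a small sketch that builds the configuration the Madelung (n + l) rule would predict and compares it with chromium's observed ground state; this is illustrative only and ignores the finer points of term energies:

```python
# Build the electron configuration predicted by the Madelung (Aufbau) rule
# and compare it with chromium's observed ground state, [Ar] 3d5 4s1.
def aufbau_configuration(electrons):
    letters = "spdf"
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(min(n, 4))),
        key=lambda nl: (nl[0] + nl[1], nl[0]),   # order by n + l, then by n
    )
    parts = []
    for n, l in subshells:
        if electrons <= 0:
            break
        occupancy = min(4 * l + 2, electrons)    # subshell capacity is 2(2l+1)
        parts.append(f"{n}{letters[l]}{occupancy}")
        electrons -= occupancy
    return " ".join(parts)

print(aufbau_configuration(24))
# Prints ... 3p6 4s2 3d4, i.e. the rule predicts [Ar] 3d4 4s2 for Z = 24;
# the observed ground state is [Ar] 3d5 4s1 instead.
```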
Chromium(VI) is a strong oxidising agent in contrast to the molybdenum(VI) and tungsten(VI) oxides. Bulk Chromium is extremely hard, and is the third hardest element behind carbon (diamond) and boron. Its Mohs hardness is 8.5, which means that it can scratch samples of quartz and topaz, but can be scratched by corundum. Chromium is highly resistant to tarnishing, which makes it useful as a metal that preserves its outermost layer from corroding, unlike other metals such as copper, magnesium, and aluminium. Chromium has a melting point of 1907 °C (3465 °F), which is relatively low compared to the majority of transition metals. However, it still has the second highest melting point out of all the Period 4 elements, being topped by vanadium by 3 °C (5 °F) at 1910 °C (3470 °F). The boiling point of 2671 °C (4840 °F), however, is comparatively lower: chromium has the fourth lowest boiling point of the Period 4 transition metals, with only copper, manganese and zinc boiling at lower temperatures. The electrical resistivity of chromium at 20 °C is 125 nanoohm-meters. Chromium has a high specular reflection in comparison to other transition metals. Its reflectance reaches a maximum of about 72% at 425 nm, falls to a minimum of 62% at 750 nm, and rises again to about 90% at 4000 nm in the infrared. When chromium is used in stainless steel alloys and polished, the specular reflection decreases with the inclusion of additional metals, yet is still high in comparison with other alloys. Between 40% and 60% of the visible spectrum is reflected from polished stainless steel. This high reflectance, especially the roughly 90% figure in the infrared, has been attributed to chromium's magnetic properties. Chromium has unique magnetic properties: it is the only elemental solid that shows antiferromagnetic ordering at room temperature and below. Above 38 °C, the ordering is lost and chromium becomes paramagnetic. The antiferromagnetism arises because the magnetic moments on the corner and body-centre atoms of chromium's body-centered cubic lattice are antiparallel but unequal in magnitude, so the magnetic structure is incommensurate with the lattice periodicity. The frequency-dependent relative permittivity that follows from Maxwell's equations and this antiferromagnetism leaves chromium with a high infrared and visible light reflectance. Passivation Chromium metal left standing in air is passivated: it forms a thin, protective, surface layer of oxide. This layer has a spinel structure a few atomic layers thick; it is very dense and inhibits the diffusion of oxygen into the underlying metal. In contrast, iron forms a more porous oxide through which oxygen can migrate, causing continued rusting. Passivation can be enhanced by short contact with oxidizing acids like nitric acid. Passivated chromium is stable against acids. Passivation can be removed with a strong reducing agent that destroys the protective oxide layer on the metal. Chromium metal treated in this way readily dissolves in weak acids. Chromium, unlike iron and nickel, does not suffer from hydrogen embrittlement. However, it does suffer from nitrogen embrittlement, reacting with nitrogen from air and forming brittle nitrides at the high temperatures necessary to work the metal parts.
Isotopes Naturally occurring chromium is composed of four stable isotopes: 50Cr, 52Cr, 53Cr and 54Cr, with 52Cr being the most abundant (83.789% natural abundance). 50Cr is observationally stable, as it is theoretically capable of decaying to 50Ti via double electron capture with a half-life of no less than 1.3 × 10^18 years. Twenty-five radioisotopes have been characterized, ranging from 42Cr to 70Cr; the most stable radioisotope is 51Cr with a half-life of 27.7 days. All of the remaining radioactive isotopes have half-lives that are less than 24 hours and the majority less than 1 minute. Chromium also has two metastable nuclear isomers. 53Cr is the radiogenic decay product of 53Mn (half-life 3.74 million years). Chromium isotopes are typically collocated (and compounded) with manganese isotopes. This circumstance is useful in isotope geology. Manganese-chromium isotope ratios reinforce the evidence from 26Al and 107Pd concerning the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites indicate an initial 53Mn/55Mn ratio that suggests Mn-Cr isotopic composition must result from in-situ decay of 53Mn in differentiated planetary bodies. Hence 53Cr provides additional evidence for nucleosynthetic processes immediately before coalescence of the Solar System. The isotopes of chromium range in atomic mass from 43 u (43Cr) to 67 u (67Cr). The primary decay mode before the most abundant stable isotope, 52Cr, is electron capture and the primary mode after is beta decay. 53Cr has been posited as a proxy for atmospheric oxygen concentration. Chemistry and compounds Chromium is a member of group 6 of the transition metals. The +3 and +6 states occur most commonly within chromium compounds, followed by +2; the +1, +4 and +5 states are rare but do nevertheless occasionally exist. Common oxidation states Chromium(0) Many Cr(0) complexes are known. Bis(benzene)chromium and chromium hexacarbonyl are highlights in organochromium chemistry. Chromium(II) Chromium(II) compounds are uncommon, in part because they readily oxidize to chromium(III) derivatives in air. Water-stable chromium(II) chloride can be made by reducing chromium(III) chloride with zinc. The resulting bright blue solution of chromium(II) chloride is stable at neutral pH. Some other notable chromium(II) compounds include chromium(II) oxide (CrO) and chromium(II) sulfate (CrSO4). Many chromium(II) carboxylates are known. The red chromium(II) acetate (Cr2(O2CCH3)4) is somewhat famous. It features a Cr-Cr quadruple bond. Chromium(III) A large number of chromium(III) compounds are known, such as chromium(III) nitrate, chromium(III) acetate, and chromium(III) oxide. Chromium(III) can be obtained by dissolving elemental chromium in acids like hydrochloric acid or sulfuric acid, but it can also be formed through the reduction of chromium(VI) by cytochrome c7. The Cr3+ ion has a similar radius (63 pm) to the Al3+ ion (radius 50 pm), and they can replace each other in some compounds, such as in chrome alum and alum. Chromium(III) tends to form octahedral complexes. Commercially available chromium(III) chloride hydrate is the dark green complex [CrCl2(H2O)4]Cl. Closely related compounds are the pale green [CrCl(H2O)5]Cl2 and violet [Cr(H2O)6]Cl3. If anhydrous violet chromium(III) chloride is dissolved in water, the violet solution turns green after some time as the chloride in the inner coordination sphere is replaced by water.
This kind of reaction is also observed with solutions of chrome alum and other water-soluble chromium(III) salts. A tetrahedral coordination of chromium(III) has been reported for the Cr-centered Keggin anion [α-CrW12O40]5–. Chromium(III) hydroxide (Cr(OH)3) is amphoteric, dissolving in acidic solutions to form [Cr(H2O)6]3+, and in basic solutions to form [Cr(OH)6]3−. It is dehydrated by heating to form the green chromium(III) oxide (Cr2O3), a stable oxide with a crystal structure identical to that of corundum. Chromium(VI) Chromium(VI) compounds are oxidants at low or neutral pH. Chromate (CrO42−) and dichromate (Cr2O72−) anions are the principal ions at this oxidation state. They exist in an equilibrium determined by pH: 2 [CrO4]2− + 2 H+ ⇌ [Cr2O7]2− + H2O. Chromium(VI) oxyhalides are also known and include chromyl fluoride (CrO2F2) and chromyl chloride (CrO2Cl2). However, despite several erroneous claims, chromium hexafluoride (as well as all higher hexahalides) remains unknown, as of 2020. Sodium chromate is produced industrially by the oxidative roasting of chromite ore with sodium carbonate. The shift in the equilibrium is visible as a change from yellow (chromate) to orange (dichromate), such as when an acid is added to a neutral solution of potassium chromate. At yet lower pH values, further condensation to more complex oxyanions of chromium is possible. Both the chromate and dichromate anions are strong oxidizing reagents at low pH: Cr2O72− + 14 H3O+ + 6 e− → 2 Cr3+ + 21 H2O (ε0 = 1.33 V). They are, however, only moderately oxidizing at high pH: CrO42− + 4 H2O + 3 e− → Cr(OH)3 + 5 OH− (ε0 = −0.13 V). Chromium(VI) compounds in solution can be detected by adding an acidic hydrogen peroxide solution. The unstable dark blue chromium(VI) peroxide (CrO5) is formed, which can be stabilized as an ether adduct, CrO5·OEt2. Chromic acid has the hypothetical formula H2CrO4. It is a vaguely described chemical, despite many well-defined chromates and dichromates being known. The dark red chromium(VI) oxide CrO3, the acid anhydride of chromic acid, is sold industrially as "chromic acid". It can be produced by mixing sulfuric acid with dichromate and is a strong oxidizing agent. Other oxidation states Compounds of chromium(V) are rather rare; the +5 oxidation state is realized in only a few compounds but appears as an intermediate in many reactions involving oxidations by chromate. The only binary compound is the volatile chromium(V) fluoride (CrF5). This red solid has a melting point of 30 °C and a boiling point of 117 °C. It can be prepared by treating chromium metal with fluorine at 400 °C and 200 bar pressure. The peroxochromate(V) ion is another example of the +5 oxidation state. Potassium peroxochromate (K3[Cr(O2)4]) is made by reacting potassium chromate with hydrogen peroxide at low temperatures. This red brown compound is stable at room temperature but decomposes spontaneously at 150–170 °C. Compounds of chromium(IV) are slightly more common than those of chromium(V). The tetrahalides, CrF4, CrCl4, and CrBr4, can be produced by treating the trihalides (CrX3) with the corresponding halogen at elevated temperatures. Such compounds are susceptible to disproportionation reactions and are not stable in water. Organic compounds containing chromium in the +4 state, such as chromium tetra-tert-butoxide, are also known. Most chromium(I) compounds are obtained solely by oxidation of electron-rich, octahedral chromium(0) complexes. Other chromium(I) complexes contain cyclopentadienyl ligands. As verified by X-ray diffraction, a Cr-Cr quintuple bond (length 183.51(4) pm) has also been described.
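The pH dependence of the chromate–dichromate equilibrium given earlier in this section can be sketched numerically. The equilibrium constant below is an assumed order-of-magnitude value (on the order of 10^14 M⁻³ at 25 °C) and the total chromium concentration is arbitrary; neither number is taken from this article:

```python
import math

# Speciation sketch for 2 CrO4^2- + 2 H+ <=> Cr2O7^2- + H2O.
# K is an assumed order-of-magnitude constant; C_T counts total Cr atoms.
K = 1e14      # M^-3, assumed for illustration
C_T = 0.01    # mol/L of chromium(VI), counted per Cr atom

for pH in (2, 4, 6, 8, 10):
    h = 10.0 ** (-pH)
    a = 2 * K * h * h                       # C_T = x + a*x^2 with x = [CrO4^2-]
    x = (-1 + math.sqrt(1 + 4 * a * C_T)) / (2 * a)
    frac_dichromate = 1 - x / C_T           # fraction of Cr held as Cr2O7^2-
    print(f"pH {pH}: about {frac_dichromate:.1%} of the chromium as dichromate")
# Acidic solutions come out almost entirely orange dichromate, alkaline ones
# almost entirely yellow chromate, matching the colour change described above.
```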
Extremely bulky monodentate ligands stabilize this compound by shielding the quintuple bond from further reactions. Occurrence Chromium is the 21st most abundant element in Earth's crust with an average concentration of 100 ppm. Chromium compounds are found in the environment from the erosion of chromium-containing rocks, and can be redistributed by volcanic eruptions. Typical background concentrations of chromium in environmental media are: atmosphere <10 ng/m3; soil <500 mg/kg; vegetation <0.5 mg/kg; freshwater <10 μg/L; seawater <1 μg/L; sediment <80 mg/kg. Chromium is mined as chromite (FeCr2O4) ore. About two-fifths of the chromite ores and concentrates in the world are produced in South Africa, about a third in Kazakhstan, while India, Russia, and Turkey are also substantial producers. Untapped chromite deposits are plentiful, but geographically concentrated in Kazakhstan and southern Africa. Although rare, deposits of native chromium exist. The Udachnaya Pipe in Russia produces samples of the native metal. This mine is a kimberlite pipe, rich in diamonds, and the reducing environment helped produce both elemental chromium and diamonds. The relation between Cr(III) and Cr(VI) strongly depends on pH and oxidative properties of the location. In most cases, Cr(III) is the dominating species, but in some areas, the ground water can contain up to 39 µg/L of total chromium, of which 30 µg/L is Cr(VI). History Early applications Chromium minerals as pigments came to the attention of the west in the eighteenth century. On 26 July 1761, Johann Gottlob Lehmann found an orange-red mineral in the Beryozovskoye mines in the Ural Mountains which he named Siberian red lead. Though misidentified as a lead compound with selenium and iron components, the mineral was in fact crocoite with a formula of PbCrO4. In 1770, Peter Simon Pallas visited the same site as Lehmann and found a red lead mineral that was discovered to possess useful properties as a pigment in paints. After Pallas, the use of Siberian red lead as a paint pigment began to develop rapidly throughout the region. Crocoite would be the principal source of chromium in pigments until the discovery of chromite many years later. In 1794, Louis Nicolas Vauquelin received samples of crocoite ore. He produced chromium trioxide (CrO3) by mixing crocoite with hydrochloric acid. In 1797, Vauquelin discovered that he could isolate metallic chromium by heating the oxide in a charcoal oven, for which he is credited as the one who truly discovered the element. Vauquelin was also able to detect traces of chromium in precious gemstones, such as ruby and emerald. During the nineteenth century, chromium was primarily used not only as a component of paints, but in tanning salts as well. For quite some time, the crocoite found in Russia was the main source for such tanning materials. In 1827, a larger chromite deposit was discovered near Baltimore, United States, which quickly met the demand for tanning salts much more adequately than the crocoite that had been used previously. This made the United States the largest producer of chromium products until the year 1848, when larger deposits of chromite were uncovered near the city of Bursa, Turkey. With the development of metallurgy and chemical industries in the Western world, the need for chromium increased. Chromium is also famous for its reflective, metallic luster when polished. 
It is used as a protective and decorative coating on car parts, plumbing fixtures, furniture parts and many other items, usually applied by electroplating. Chromium was used for electroplating as early as 1848, but this use only became widespread with the development of an improved process in 1924. Production Approximately 28.8 million metric tons (Mt) of marketable chromite ore was produced in 2013, and converted into 7.5 Mt of ferrochromium. According to John F. Papp, writing for the USGS, "Ferrochromium is the leading end use of chromite ore, [and] stainless steel is the leading end use of ferrochromium." The largest producers of chromium ore in 2013 have been South Africa (48%), Kazakhstan (13%), Turkey (11%), and India (10%), with several other countries producing the rest of about 18% of the world production. The two main products of chromium ore refining are ferrochromium and metallic chromium. For those products the ore smelter process differs considerably. For the production of ferrochromium, the chromite ore (FeCr2O4) is reduced in large scale in electric arc furnace or in smaller smelters with either aluminium or silicon in an aluminothermic reaction. For the production of pure chromium, the iron must be separated from the chromium in a two step roasting and leaching process. The chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air. The chromium is oxidized to the hexavalent form, while the iron forms the stable Fe2O3. The subsequent leaching at higher elevated temperatures dissolves the chromates and leaves the insoluble iron oxide. The chromate is converted by sulfuric acid into the dichromate. 4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2 2 Na2CrO4 + H2SO4 → Na2Cr2O7 + Na2SO4 + H2O The dichromate is converted to the chromium(III) oxide by reduction with carbon and then reduced in an aluminothermic reaction to chromium. Na2Cr2O7 + 2 C → Cr2O3 + Na2CO3 + CO Cr2O3 + 2 Al → Al2O3 + 2 Cr Applications The creation of metal alloys account for 85% of the available chromium's usage. The remainder of chromium is used in the chemical, refractory, and foundry industries. Metallurgy The strengthening effect of forming stable metal carbides at grain boundaries, and the strong increase in corrosion resistance made chromium an important alloying material for steel. High-speed tool steels contain between 3 and 5% chromium. Stainless steel, the primary corrosion-resistant metal alloy, is formed when chromium is introduced to iron in concentrations above 11%. For stainless steel's formation, ferrochromium is added to the molten iron. Also, nickel-based alloys have increased strength due to the formation of discrete, stable, metal, carbide particles at the grain boundaries. For example, Inconel 718 contains 18.6% chromium. Because of the excellent high-temperature properties of these nickel superalloys, they are used in jet engines and gas turbines in lieu of common structural materials. ASTM B163 relies on Chromium for condenser and heat-exchanger tubes, while castings with high strength at elevated temperatures that contain Chromium are standardised with ASTM A567. AISI type 332 is used where high temperature would normally cause carburization, oxidation or corrosion. Incoloy 800 "is capable of remaining stable and maintaining its austenitic structure even after long time exposures to high temperatures". Nichrome is used as resistance wire for heating elements in things like toasters and space heaters. 
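As a rough illustration of the final aluminothermic step quoted above (Cr2O3 + 2 Al → Al2O3 + 2 Cr), the mass balance per kilogram of chromium(III) oxide works out as follows; atomic masses are rounded and the yield is idealised at 100%, which real smelting does not achieve:

```python
# Ideal mass balance for Cr2O3 + 2 Al -> Al2O3 + 2 Cr, per kilogram of Cr2O3.
# Atomic masses are rounded standard values; losses in real smelting are ignored.
M_CR, M_AL, M_O = 52.0, 27.0, 16.0
M_CR2O3 = 2 * M_CR + 3 * M_O              # 152 g/mol

mol_cr2o3 = 1000.0 / M_CR2O3              # moles of Cr2O3 in 1 kg
al_needed_g = mol_cr2o3 * 2 * M_AL        # ~355 g of aluminium
cr_obtained_g = mol_cr2o3 * 2 * M_CR      # ~684 g of chromium

print(f"1 kg Cr2O3 needs ~{al_needed_g:.0f} g Al and yields ~{cr_obtained_g:.0f} g Cr (ideal)")
```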
These uses make chromium a strategic material. Consequently, during World War II, U.S. road engineers were instructed to avoid chromium in yellow road paint, as it "may become a critical material during the emergency." The United States likewise considered chromium "essential for the German war industry" and made intense diplomatic efforts to keep it out of the hands of Nazi Germany. The high hardness and corrosion resistance of unalloyed chromium makes it a reliable metal for surface coating; it is still the most popular metal for sheet coating, with its above-average durability, compared to other coating metals. A layer of chromium is deposited on pretreated metallic surfaces by electroplating techniques. There are two deposition methods: thin, and thick. Thin deposition involves a layer of chromium below 1 µm thickness deposited by chrome plating, and is used for decorative surfaces. Thicker chromium layers are deposited if wear-resistant surfaces are needed. Both methods use acidic chromate or dichromate solutions. To prevent the energy-consuming change in oxidation state, the use of chromium(III) sulfate is under development; for most applications of chromium, the previously established process is used. In the chromate conversion coating process, the strong oxidative properties of chromates are used to deposit a protective oxide layer on metals like aluminium, zinc, and cadmium. This passivation and the self-healing properties of the chromate stored in the chromate conversion coating, which is able to migrate to local defects, are the benefits of this coating method. Because of environmental and health regulations on chromates, alternative coating methods are under development. Chromic acid anodizing (or Type I anodizing) of aluminium is another electrochemical process that does not lead to the deposition of chromium, but uses chromic acid as an electrolyte in the solution. During anodization, an oxide layer is formed on the aluminium. The use of chromic acid, instead of the normally used sulfuric acid, leads to a slight difference of these oxide layers. The high toxicity of Cr(VI) compounds, used in the established chromium electroplating process, and the strengthening of safety and environmental regulations demand a search for substitutes for chromium, or at least a change to less toxic chromium(III) compounds. Pigment The mineral crocoite (which is also lead chromate PbCrO4) was used as a yellow pigment shortly after its discovery. After a synthesis method became available starting from the more abundant chromite, chrome yellow was, together with cadmium yellow, one of the most used yellow pigments. The pigment does not photodegrade, but it tends to darken due to the formation of chromium(III) oxide. It has a strong color, and was used for school buses in the United States and for the postal services (for example, the Deutsche Post) in Europe. The use of chrome yellow has since declined due to environmental and safety concerns and was replaced by organic pigments or other alternatives that are free from lead and chromium. Other pigments that are based around chromium are, for example, the deep shade of red pigment chrome red, which is simply lead chromate with lead(II) hydroxide (PbCrO4·Pb(OH)2). A very important chromate pigment, which was used widely in metal primer formulations, was zinc chromate, now replaced by zinc phosphate. A wash primer was formulated to replace the dangerous practice of pre-treating aluminium aircraft bodies with a phosphoric acid solution. 
This used zinc tetroxychromate dispersed in a solution of polyvinyl butyral. An 8% solution of phosphoric acid in solvent was added just before application. It was found that an easily oxidized alcohol was an essential ingredient. A thin layer of about 10–15 µm was applied, which turned from yellow to dark green when it was cured. There is still a question as to the correct mechanism. Chrome green is a mixture of Prussian blue and chrome yellow, while the chrome oxide green is chromium(III) oxide. Chromium oxides are also used as a green pigment in the field of glassmaking and also as a glaze for ceramics. Green chromium oxide is extremely lightfast and as such is used in cladding coatings. It is also the main ingredient in infrared reflecting paints, used by the armed forces to paint vehicles and to give them the same infrared reflectance as green leaves. Other uses Chromium(III) ions present in corundum crystals (aluminium oxide) cause them to be colored red; when corundum appears as such, it is known as a ruby. If the corundum is lacking in chromium(III) ions, it is known as a sapphire. A red-colored artificial ruby may also be achieved by doping chromium(III) into artificial corundum crystals, thus making chromium a requirement for making synthetic rubies. Such a synthetic ruby crystal was the basis for the first laser, produced in 1960, which relied on stimulated emission of light from the chromium atoms in such a crystal. Ruby has a laser transition at 694.3 nanometers, in a deep red color. Because of their toxicity, chromium(VI) salts are used for the preservation of wood. For example, chromated copper arsenate (CCA) is used in timber treatment to protect wood from decay fungi, wood-attacking insects, including termites, and marine borers. The formulations contain chromium based on the oxide CrO3 between 35.3% and 65.5%. In the United States, 65,300 metric tons of CCA solution were used in 1996. Chromium(III) salts, especially chrome alum and chromium(III) sulfate, are used in the tanning of leather. The chromium(III) stabilizes the leather by cross linking the collagen fibers. Chromium tanned leather can contain between 4 and 5% of chromium, which is tightly bound to the proteins. Although the form of chromium used for tanning is not the toxic hexavalent variety, there remains interest in management of chromium in the tanning industry. Recovery and reuse, direct/indirect recycling, and "chrome-less" or "chrome-free" tanning are practiced to better manage chromium usage. The high heat resistivity and high melting point makes chromite and chromium(III) oxide a material for high temperature refractory applications, like blast furnaces, cement kilns, molds for the firing of bricks and as foundry sands for the casting of metals. In these applications, the refractory materials are made from mixtures of chromite and magnesite. The use is declining because of the environmental regulations due to the possibility of the formation of chromium(VI). Several chromium compounds are used as catalysts for processing hydrocarbons. For example, the Phillips catalyst, prepared from chromium oxides, is used for the production of about half the world's polyethylene. Fe-Cr mixed oxides are employed as high-temperature catalysts for the water gas shift reaction. Copper chromite is a useful hydrogenation catalyst. Chromates of metals are used in humistor. Uses of compounds Chromium(IV) oxide (CrO2) is a magnetic compound. 
Its ideal shape anisotropy, which imparts high coercivity and remnant magnetization, made it a compound superior to γ-Fe2O3. Chromium(IV) oxide is used to manufacture magnetic tape used in high-performance audio tape and standard audio cassettes. Chromium(III) oxide (Cr2O3) is a metal polish known as green rouge. Chromic acid is a powerful oxidizing agent and is a useful compound for cleaning laboratory glassware of any trace of organic compounds. It is prepared by dissolving potassium dichromate in concentrated sulfuric acid, which is then used to wash the apparatus. Sodium dichromate is sometimes used because of its higher solubility (50 g/L versus 200 g/L respectively). The use of dichromate cleaning solutions is now phased out due to the high toxicity and environmental concerns. Modern cleaning solutions are highly effective and chromium free. Potassium dichromate is a chemical reagent, used as a titrating agent. Chromates are added to drilling muds to prevent corrosion of steel under wet conditions. Chrome alum is Chromium(III) potassium sulfate and is used as a mordant (i.e., a fixing agent) for dyes in fabric and in tanning. Biological role The biologically beneficial effects of chromium(III) are debated. Chromium is accepted by the U.S. National Institutes of Health as a trace element for its roles in the action of insulin, a hormone that mediates the metabolism and storage of carbohydrate, fat, and protein. The mechanism of its actions in the body, however, have not been defined, leaving in question the essentiality of chromium. In contrast, hexavalent chromium (Cr(VI) or Cr6+) is highly toxic and mutagenic. Ingestion of chromium(VI) in water has been linked to stomach tumors, and it may also cause allergic contact dermatitis (ACD). "Chromium deficiency", involving a lack of Cr(III) in the body, or perhaps some complex of it, such as glucose tolerance factor, is controversial. Some studies suggest that the biologically active form of chromium (III) is transported in the body via an oligopeptide called low-molecular-weight chromium-binding substance (LMWCr), which might play a role in the insulin signaling pathway. The chromium content of common foods is generally low (1–13 micrograms per serving). The chromium content of food varies widely, due to differences in soil mineral content, growing season, plant cultivar, and contamination during processing. Chromium (and nickel) leach into food cooked in stainless steel, with the effect being largest when the cookware is new. Acidic foods that are cooked for many hours also exacerbate this effect. Dietary recommendations There is disagreement on chromium's status as an essential nutrient. Governmental departments from Australia, New Zealand, India, Japan, and the United States consider chromium essential while the European Food Safety Authority (EFSA) of the European Union does not. The U.S. National Academy of Medicine (NAM) updated the Estimated Average Requirements (EARs) and the Recommended Dietary Allowances (RDAs) for chromium in 2001. For chromium, there was insufficient information to set EARs and RDAs, so its needs are described as estimates for Adequate Intakes (AIs). The current AIs of chromium for women ages 14 through 50 is 25 μg/day, and the AIs for women ages 50 and above is 20 μg/day. The AIs for women who are pregnant are 30 μg/day, and for women who are lactating, the set AIs are 45 μg/day. The AIs for men ages 14 through 50 are 35 μg/day, and the AIs for men ages 50 and above are 30 μg/day. 
For children ages 1 through 13, the AIs increase with age from 0.2 μg/day up to 25 μg/day. As for safety, the NAM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when the evidence is sufficient. In the case of chromium, there is not yet enough information, hence no UL has been established. Collectively, the EARs, RDAs, AIs, and ULs are the parameters for the nutrition recommendation system known as Dietary Reference Intake (DRI). Australia and New Zealand consider chromium to be an essential nutrient, with an AI of 35 μg/day for men, 25 μg/day for women, 30 μg/day for women who are pregnant, and 45 μg/day for women who are lactating. A UL has not been set due to the lack of sufficient data. India considers chromium to be an essential nutrient, with an adult recommended intake of 33 μg/day. Japan also considers chromium to be an essential nutrient, with an AI of 10 μg/day for adults, including women who are pregnant or lactating. A UL has not been set. The EFSA of the European Union however, does not consider chromium to be an essential nutrient; chromium is the only mineral for which the United States and the European Union disagree. Labeling For U.S. food and dietary supplement labeling purposes, the amount of the substance in a serving is expressed as a percent of the Daily Value (%DV). For chromium labeling purposes, 100% of the Daily Value was 120 μg. As of May 27, 2016, the percentage of daily value was revised to 35 μg to bring the chromium intake into a consensus with the official Recommended Dietary Allowance. A table of the old and new adult daily values is provided at Reference Daily Intake. Food sources Food composition databases such as those maintained by the U.S. Department of Agriculture do not contain information on the chromium content of foods. A wide variety of animal and vegetable foods contain chromium. Content per serving is influenced by the chromium content of the soil in which the plants are grown, by foodstuffs fed to animals, and by processing methods, as chromium is leached into foods if processed or cooked in stainless steel equipment. One diet analysis study conducted in Mexico reported an average daily chromium intake of 30 micrograms. An estimated 31% of adults in the United States consume multi-vitamin/mineral dietary supplements, which often contain 25 to 60 micrograms of chromium. Supplementation Chromium is an ingredient in total parenteral nutrition (TPN), because deficiency can occur after months of intravenous feeding with chromium-free TPN. It is also added to nutritional products for preterm infants. Although the mechanism of action in biological roles for chromium is unclear, in the United States chromium-containing products are sold as non-prescription dietary supplements in amounts ranging from 50 to 1,000 μg. Lower amounts of chromium are also often incorporated into multi-vitamin/mineral supplements consumed by an estimated 31% of adults in the United States. Chemical compounds used in dietary supplements include chromium chloride, chromium citrate, chromium(III) picolinate, chromium(III) polynicotinate, and other chemical compositions. The benefit of supplements has not been proven. Approved and disapproved health claims In 2005, the U.S. 
Food and Drug Administration had approved a qualified health claim for chromium picolinate with a requirement for very specific label wording: "One small study suggests that chromium picolinate may reduce the risk of insulin resistance, and therefore possibly may reduce the risk of type 2 diabetes. FDA concludes, however, that the existence of such a relationship between chromium picolinate and either insulin resistance or type 2 diabetes is highly uncertain." At the same time, in answer to other parts of the petition, the FDA rejected claims for chromium picolinate and cardiovascular disease, retinopathy or kidney disease caused by abnormally high blood sugar levels. In 2010, chromium(III) picolinate was approved by Health Canada to be used in dietary supplements. Approved labeling statements include: a factor in the maintenance of good health, provides support for healthy glucose metabolism, helps the body to metabolize carbohydrates and helps the body to metabolize fats. The European Food Safety Authority (EFSA) approved claims in 2010 that chromium contributed to normal macronutrient metabolism and maintenance of normal blood glucose concentration, but rejected claims for maintenance or achievement of a normal body weight, or reduction of tiredness or fatigue. Given the evidence for chromium deficiency causing problems with glucose management in the context of intravenous nutrition products formulated without chromium, research interest turned to whether chromium supplementation would benefit people who have type 2 diabetes but are not chromium deficient. Looking at the results from four meta-analyses, one reported a statistically significant decrease in fasting plasma glucose levels (FPG) and a non-significant trend in lower hemoglobin A1C. A second reported the same, a third reported significant decreases for both measures, while a fourth reported no benefit for either. A review published in 2016 listed 53 randomized clinical trials that were included in one or more of six meta-analyses. It concluded that whereas there may be modest decreases in FPG and/or HbA1C that achieve statistical significance in some of these meta-analyses, few of the trials achieved decreases large enough to be expected to be relevant to clinical outcome. Two systematic reviews looked at chromium supplements as a mean of managing body weight in overweight and obese people. One, limited to chromium picolinate, a popular supplement ingredient, reported a statistically significant −1.1 kg (2.4 lb) weight loss in trials longer than 12 weeks. The other included all chromium compounds and reported a statistically significant −0.50 kg (1.1 lb) weight change. Change in percent body fat did not reach statistical significance. Authors of both reviews considered the clinical relevance of this modest weight loss as uncertain/unreliable. The European Food Safety Authority reviewed the literature and concluded that there was insufficient evidence to support a claim. Chromium is promoted as a sports performance dietary supplement, based on the theory that it potentiates insulin activity, with anticipated results of increased muscle mass, and faster recovery of glycogen storage during post-exercise recovery. A review of clinical trials reported that chromium supplementation did not improve exercise performance or increase muscle strength. 
The International Olympic Committee reviewed dietary supplements for high-performance athletes in 2018 and concluded there was no need to increase chromium intake for athletes, nor support for claims of losing body fat. Fresh-water fish Chromium is naturally present in the environment in trace amounts, but industrial use in rubber and stainless steel manufacturing, chrome plating, dyes for textiles, tanneries and other uses contaminates aquatic systems. In Bangladesh, rivers in or downstream from industrialized areas exhibit heavy metal contamination. Irrigation water standards for chromium are 0.1 mg/L, but some rivers are more than five times that amount. The standard for fish for human consumption is less than 1 mg/kg, but many tested samples were more than five times that amount. Chromium, especially hexavalent chromium, is highly toxic to fish because it is easily absorbed across the gills, readily enters blood circulation, crosses cell membranes and bioconcentrates up the food chain. In contrast, the toxicity of trivalent chromium is very low, attributed to poor membrane permeability and little biomagnification. Acute and chronic exposure to chromium(VI) affects fish behavior, physiology, reproduction and survival. Hyperactivity and erratic swimming have been reported in contaminated environments. Egg hatching and fingerling survival are affected. In adult fish there are reports of histopathological damage to liver, kidney, muscle, intestines, and gills. Mechanisms include mutagenic gene damage and disruptions of enzyme functions. There is evidence that fish may not require chromium, but benefit from a measured amount in diet. In one study, juvenile fish gained weight on a zero chromium diet, but the addition of 500 μg of chromium in the form of chromium chloride or other supplement types, per kilogram of food (dry weight), increased weight gain. At 2,000 μg/kg the weight gain was no better than with the zero chromium diet, and there were increased DNA strand breaks. Precautions Water-insoluble chromium(III) compounds and chromium metal are not considered a health hazard, while the toxicity and carcinogenic properties of chromium(VI) have been known for a long time. Because of the specific transport mechanisms, only limited amounts of chromium(III) enter the cells. Acute oral toxicity ranges between 50 and 150 mg/kg. A 2008 review suggested that moderate uptake of chromium(III) through dietary supplements poses no genetic-toxic risk. In the US, the Occupational Safety and Health Administration (OSHA) has designated an air permissible exposure limit (PEL) in the workplace as a time-weighted average (TWA) of 1 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.5 mg/m3, time-weighted average. The IDLH (immediately dangerous to life and health) value is 250 mg/m3. Chromium(VI) toxicity The acute oral toxicity for chromium(VI) ranges between 1.5 and 3.3 mg/kg. In the body, chromium(VI) is reduced by several mechanisms to chromium(III) already in the blood before it enters the cells. The chromium(III) is excreted from the body, whereas the chromate ion is transferred into the cell by a transport mechanism, by which also sulfate and phosphate ions enter the cell. The acute toxicity of chromium(VI) is due to its strong oxidant properties. After it reaches the blood stream, it damages the kidneys, the liver and blood cells through oxidation reactions. Hemolysis, renal, and liver failure result. 
Aggressive dialysis can be therapeutic. The carcinogenicity of chromate dust has been known for a long time, and in 1890 the first publication described the elevated cancer risk of workers in a chromate dye company. Three mechanisms have been proposed to describe the genotoxicity of chromium(VI). The first mechanism includes highly reactive hydroxyl radicals and other reactive radicals which are by-products of the reduction of chromium(VI) to chromium(III). The second process includes the direct binding of chromium(V), produced by reduction in the cell, and chromium(IV) compounds to the DNA. The last mechanism attributes the genotoxicity to the binding to the DNA of the end product of the chromium(III) reduction. Chromium salts (chromates) are also the cause of allergic reactions in some people. Chromates are often used to manufacture, amongst other things, leather products, paints, cement, mortar and anti-corrosives. Contact with products containing chromates can lead to allergic contact dermatitis and irritant dermatitis, resulting in ulceration of the skin, sometimes referred to as "chrome ulcers". This condition is often found in workers who have been exposed to strong chromate solutions in electroplating, tanning and chrome-producing manufacturers. Environmental issues Because chromium compounds were used in dyes, paints, and leather tanning compounds, these compounds are often found in soil and groundwater at active and abandoned industrial sites, needing environmental cleanup and remediation. Primer paint containing hexavalent chromium is still widely used for aerospace and automobile refinishing applications. In 2010, the Environmental Working Group studied the drinking water in 35 American cities in the first nationwide study. The study found measurable hexavalent chromium in the tap water of 31 of the cities sampled, with Norman, Oklahoma, at the top of the list; 25 cities had levels that exceeded California's proposed limit. The more toxic hexavalent chromium form can be reduced to the less soluble trivalent oxidation state in soils by organic matter, ferrous iron, sulfides, and other reducing agents, with the rates of such reduction being faster under more acidic conditions than under more alkaline ones. In contrast, trivalent chromium can be oxidized to hexavalent chromium in soils by manganese oxides, such as Mn(III) and Mn(IV) compounds. Since the solubility and toxicity of chromium (VI) are greater than those of chromium (III), the oxidation-reduction conversions between the two oxidation states have implications for movement and bioavailability of chromium in soils, groundwater, and plants. Notes References General bibliography External links ATSDR Case Studies in Environmental Medicine: Chromium Toxicity U.S. Department of Health and Human Services IARC Monograph "Chromium and Chromium compounds" It's Elemental – The Element Chromium The Merck Manual – Mineral Deficiency and Toxicity National Institute for Occupational Safety and Health – Chromium Page Chromium at The Periodic Table of Videos (University of Nottingham) Chemical elements Dietary minerals Native element minerals Chemical hazards Chemical elements with body-centered cubic structure
5671
https://en.wikipedia.org/wiki/Cymbal
Cymbal
A cymbal is a common percussion instrument. Often used in pairs, cymbals consist of thin, normally round plates of various alloys. The majority of cymbals are of indefinite pitch, although small disc-shaped cymbals based on ancient designs sound a definite note (such as crotales). Cymbals are used in many ensembles, ranging from orchestras and percussion ensembles to jazz bands, heavy metal bands, and marching groups. Drum kits usually incorporate at least a crash, ride, or crash/ride, and a pair of hi-hat cymbals. A player of cymbals is known as a cymbalist. Etymology and names The word cymbal is derived from the Latin cymbalum, which is the latinisation of the Greek word kymbalon, "cymbal", which in turn derives from kymbē, "cup, bowl". In orchestral scores, cymbals may be indicated by the French cymbales; German Becken, Schellbecken, Teller, or Tschinellen; Italian piatti or cinelli; and Spanish platillos. Many of these derive from the word for plates. History Cymbals have existed since ancient times. Representations of cymbals may be found in reliefs and paintings from the Armenian Highlands (7th century BC), Larsa, Babylon, Assyria, ancient Egypt, ancient Greece, and ancient Rome. References to cymbals also appear throughout the Bible, through many Psalms and songs of praise to God. Cymbals may have been introduced to China from Central Asia in the 3rd or 4th century AD. India In India, cymbals have been in use since ancient times and are still used across almost all major temples and Buddhist sites. Gigantic aartis along the Ganges, which are revered by Hindus all over the world, are incomplete without large cymbals. Central Asia and Iran The Shahnameh (composed between c. 977 and 1010 CE) mentions the use of cymbals at least 14 times in its text, most in the context of creating a loud din in war, to frighten the enemy or to celebrate. The Persian word is sanj or senj (Persian سنج), but the Shahnameh does not claim these to be Persian in origin. Several times it calls them "Indian cymbals." Other adjectives to describe them include "golden" and "brass," and to play them is to "clash" them. A different form is called sanj angshati (سنج انگشتی) or finger cymbals. These are zill. Ashura ceremony Besides the original use in war, another use in Persian culture was the Ashura ceremony. Originally in the ceremony, two pieces of stone were beaten on the sides of the mourner with special movements accompanied by a lamentation song. This has been replaced by beating Karbzani or Karebzani and playing sanj and ratchets. Cities where this has been performed include Lahijan and Aran of Kashan, as well as Semnan and Sabzevar. Etymology See Zang All theories about the etymology of the word Sanj identify it as a Pahlavi word. By some accounts it means weight, and it is possible that the original term was sanjkūb, meaning "striking weights" [against each other]. By some accounts the word is a reformed version of "Zang" (bell), referring to its bell-shaped plate. Turkey Cymbals were employed by Turkish janissaries in the 14th century or earlier. By the 17th century, such cymbals were used in European music, and more commonly played in military bands and orchestras by the mid 18th century. Since the 19th century, some composers have called for larger roles for cymbals in musical works, and a variety of cymbal shapes, techniques, and hardware have been developed in response. Anatomy The anatomy of the cymbal plays a large part in the sound it creates. 
A hole is drilled in the center of the cymbal, which is used either to mount the cymbal on a stand or to tie straps through (for hand playing). The bell, dome, or cup is the raised section immediately surrounding the hole. The bell produces a higher "pinging" pitch than the rest of the cymbal. The bow is the rest of the surface surrounding the bell. The bow is sometimes described in two areas: the ride and crash area. The ride area is the thicker section closer to the bell while the crash area is the thinner tapering section near the edge. The edge or rim is the immediate circumference of the cymbal. Cymbals are measured by their diameter either in inches or centimeters. The size of the cymbal affects its sound, larger cymbals usually being louder and having longer sustain. The weight describes how thick the cymbal is. Cymbal weights are important to the sound they produce and how they play. Heavier cymbals have a louder volume, more cut, and better stick articulation (when using drum sticks). Thin cymbals have a fuller sound, lower pitch, and faster response. The profile of the cymbal is the vertical distance of the bow from the bottom of the bell to the cymbal edge (higher profile cymbals are more bowl-shaped). The profile affects the pitch of the cymbal: higher profile cymbals have higher pitch. Types Orchestral cymbals Cymbals offer a composer nearly endless amounts of color and effect. Their unique timbre allows them to project even against a full orchestra and through the heaviest of orchestrations, and to enhance articulation at nearly any dynamic. Cymbals have been utilized historically to suggest frenzy, fury or bacchanalian revels, as seen in the Venus music in Wagner's Tannhäuser, Grieg's Peer Gynt suite, and Osmin's aria "O wie will ich triumphieren" from Mozart's Die Entführung aus dem Serail. Clash cymbals Orchestral clash cymbals are traditionally used in pairs, each one having a strap set in the bell of the cymbal by which they are held. Such a pair is known as clash cymbals, crash cymbals, hand cymbals, or plates. Certain sounds can be obtained by rubbing their edges together in a sliding movement for a "sizzle", striking them against each other in what is called a "crash", tapping the edge of one against the body of the other in what is called a "tap-crash", scraping the edge of one from the inside of the bell to the edge for a "scrape" or "zischen", or shutting the cymbals together and choking the sound in what is called a "hi-hat" or "crush". A skilled percussionist can obtain an enormous dynamic range from such cymbals. For example, in Beethoven's Symphony No. 9, the percussionist is employed to first play cymbals pianissimo, adding a touch of colour rather than a loud crash. Crash cymbals are usually damped by pressing them against the percussionist's body. A composer may write laissez vibrer, or "let vibrate" (usually abbreviated l.v.), secco (dry), or equivalent indications on the score; more usually, the percussionist must judge when to damp based on the written duration of a crash and the context in which it occurs. Crash cymbals have traditionally been accompanied by the bass drum playing an identical part. This combination, played loudly, is an effective way to accentuate a note since it contributes to both very low and very high-frequency ranges and provides a satisfying "crash-bang-wallop". In older music the composer sometimes provided one part for this pair of instruments, writing senza piatti or piatti soli if only one is needed. 
This came from the common practice of having one percussionist play using one cymbal mounted to the shell of the bass drum. The percussionist would crash the cymbals with the left hand and use a mallet to strike the bass drum with the right. This method is nowadays often employed in pit orchestras and called for specifically by composers who desire a certain effect. Stravinsky calls for this in his ballet Petrushka, and Mahler calls for this in his Titan Symphony. The modern convention is for the instruments to have independent parts. However, in kit drumming, a cymbal crash is still most often accompanied by a simultaneous kick to the bass drum, which provides a musical effect and support to the crash. Hi hats Crash cymbals evolved into the low-sock and from this to the modern hi-hat. Even in a modern drum kit, they remain paired with the bass drum as the two instruments which are played with the player's feet. However, hi-hat cymbals tend to be heavy with little taper, more similar to a ride cymbal than to a clash cymbal as found in a drum kit, and perform a ride rather than a crash function. Suspended cymbal Another use of cymbals is the suspended cymbal. This instrument takes its name from the traditional method of suspending the cymbal by means of a leather strap or rope, thus allowing the cymbal to vibrate as freely as possible for maximum musical effect. Early jazz drumming pioneers borrowed this style of cymbal mounting in the early 1900s, and later drummers developed the instrument further, replacing the leather-strap suspension with the horizontally or nearly horizontally mounted "crash" cymbals of the modern drum kit. Many modern drum kits use a mount with felt or other dampening fabric to act as a barrier where the cymbal is held between metal clamps, thus forming the modern-day ride cymbal. Suspended cymbals can be played with yarn-, sponge-, or cord-wrapped mallets. The first known instance of using a sponge-headed mallet on a cymbal is the final chord of Hector Berlioz' Symphonie Fantastique. Composers sometimes specifically request other types of mallets like felt mallets or timpani mallets for different attack and sustain qualities. Suspended cymbals can produce bright and slicing tones when forcefully struck, and give an eerie transparent "windy" sound when played quietly. A tremolo, or roll (played with two mallets alternately striking on opposing sides of the cymbal) can build in volume from almost inaudible to an overwhelming climax in a satisfyingly smooth manner (as in Ravel's Mother Goose Suite). The edge of a suspended cymbal may be hit with the shoulder of a drum stick to obtain a sound somewhat akin to that of clash cymbals. Other methods of playing include scraping a coin or triangle beater rapidly across the ridges on the top of the cymbal, giving a "zing" sound (as some percussionists do in the fourth movement of Dvořák's Symphony No. 9). Other effects that can be used include drawing a bass bow across the edge of the cymbal for a sound like squealing car brakes. 
Berlioz's Romeo and Juliet calls for two pairs of cymbals, modeled on some old Pompeian instruments no larger than the hand (some are no larger than a large coin), and tuned to F and B flat. The modern instruments descended from this line are the crotales. List of cymbal types Cymbal types include: Bell cymbal China cymbal Clash cymbal Crash cymbal Crash/ride cymbal Finger cymbal Flat ride cymbal Hi-hat Ride cymbal Sizzle cymbal Splash cymbal Swish cymbal Suspended cymbal Taal – Indian cymbal (clash cymbal) See also Cymbal making and Cymbal alloys Cymbal manufacturers Percussion instruments Drum and Drum kit Taal Zill References Citations Bibliography External links Orchestral cymbal playing, with an excellent short history of cymbals Cymbal Colour Exploration, A 3D binaural audio recording of different cymbal sound colours Drum kit components Early musical instruments Idiophones Metal percussion instruments Military music Orchestral percussion instruments Unpitched percussion instruments
5672
https://en.wikipedia.org/wiki/Cadmium
Cadmium
Cadmium is a chemical element with the symbol Cd and atomic number 48. This soft, silvery-white metal is chemically similar to the two other stable metals in group 12, zinc and mercury. Like zinc, it demonstrates oxidation state +2 in most of its compounds, and like mercury, it has a lower melting point than the transition metals in groups 3 through 11. Cadmium and its congeners in group 12 are often not considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. The average concentration of cadmium in Earth's crust is between 0.1 and 0.5 parts per million (ppm). It was discovered in 1817 simultaneously by Stromeyer and Hermann, both in Germany, as an impurity in zinc carbonate. Cadmium occurs as a minor component in most zinc ores and is a byproduct of zinc production. Cadmium was used for a long time as a corrosion-resistant plating on steel, and cadmium compounds are used as red, orange, and yellow pigments, to color glass, and to stabilize plastic. Cadmium use is generally decreasing because it is toxic (it is specifically listed in the European Restriction of Hazardous Substances Directive) and nickel–cadmium batteries have been replaced with nickel–metal hydride and lithium-ion batteries. One of its few new uses is in cadmium telluride solar panels. Although cadmium has no known biological function in higher organisms, a cadmium-dependent carbonic anhydrase has been found in marine diatoms. Characteristics Physical properties Cadmium is a soft, malleable, ductile, silvery-white divalent metal. It is similar in many respects to zinc but forms complex compounds. Unlike most other metals, cadmium is resistant to corrosion and is used as a protective plate on other metals. As a bulk metal, cadmium is insoluble in water and is not flammable; however, in its powdered form it may burn and release toxic fumes. Chemical properties Although cadmium usually has an oxidation state of +2, it also exists in the +1 state. Cadmium and its congeners are not always considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. Cadmium burns in air to form brown amorphous cadmium oxide (CdO); the crystalline form of this compound is a dark red which changes color when heated, similar to zinc oxide. Hydrochloric acid, sulfuric acid, and nitric acid dissolve cadmium by forming cadmium chloride (CdCl2), cadmium sulfate (CdSO4), or cadmium nitrate (Cd(NO3)2). The oxidation state +1 can be produced by dissolving cadmium in a mixture of cadmium chloride and aluminium chloride, forming the Cd22+ cation, which is similar to the Hg22+ cation in mercury(I) chloride. Cd + CdCl2 + 2 AlCl3 → Cd2(AlCl4)2 The structures of many cadmium complexes with nucleobases, amino acids, and vitamins have been determined. Isotopes Naturally occurring cadmium is composed of eight isotopes. Two of them are radioactive, and three are expected to decay but have not measurably done so under laboratory conditions. The two natural radioactive isotopes are 113Cd (beta decay, half-life is ) and 116Cd (two-neutrino double beta decay, half-life is ). The other three are 106Cd, 108Cd (both double electron capture), and 114Cd (double beta decay); only lower limits on these half-lives have been determined. At least three isotopes – 110Cd, 111Cd, and 112Cd – are stable. 
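The two natural radioactive decays mentioned above can be written out explicitly (standard nuclear bookkeeping rather than material from this article's sources): ^{113}_{48}Cd -> ^{113}_{49}In + e^- + \bar{\nu}_e (beta decay), and ^{116}_{48}Cd -> ^{116}_{50}Sn + 2e^- + 2\bar{\nu}_e (two-neutrino double beta decay).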
Among the isotopes that do not occur naturally, the most long-lived are 109Cd with a half-life of 462.6 days, and 115Cd with a half-life of 53.46 hours. All of the remaining radioactive isotopes have half-lives of less than 2.5 hours, and the majority have half-lives of less than 5 minutes. Cadmium has 8 known meta states, with the most stable being 113mCd (t1⁄2 = 14.1 years), 115mCd (t1⁄2 = 44.6 days), and 117mCd (t1⁄2 = 3.36 hours). The known isotopes of cadmium range in atomic mass from 94.950 u (95Cd) to 131.946 u (132Cd). For isotopes lighter than 112 u, the primary decay mode is electron capture and the dominant decay product is element 47 (silver). Heavier isotopes decay mostly through beta emission producing element 49 (indium). One isotope of cadmium, 113Cd, absorbs neutrons with high selectivity: With very high probability, neutrons with energy below the cadmium cut-off will be absorbed; those higher than the cut-off will be transmitted. The cadmium cut-off is about 0.5 eV, and neutrons below that level are deemed slow neutrons, distinct from intermediate and fast neutrons. Cadmium is created via the s-process in low- to medium-mass stars with masses of 0.6 to 10 solar masses, over thousands of years. In that process, a silver atom captures a neutron and then undergoes beta decay. History Cadmium (Latin cadmia, Greek καδμεία meaning "calamine", a cadmium-bearing mixture of minerals that was named after the Greek mythological character Κάδμος, Cadmus, the founder of Thebes) was discovered in contaminated zinc compounds sold in pharmacies in Germany in 1817 by Friedrich Stromeyer. Karl Samuel Leberecht Hermann simultaneously investigated the discoloration in zinc oxide and found an impurity, first suspected to be arsenic, because of the yellow precipitate with hydrogen sulfide. Additionally Stromeyer discovered that one supplier sold zinc carbonate instead of zinc oxide. Stromeyer found the new element as an impurity in zinc carbonate (calamine), and, for 100 years, Germany remained the only important producer of the metal. The metal was named after the Latin word for calamine, because it was found in this zinc ore. Stromeyer noted that some impure samples of calamine changed color when heated but pure calamine did not. He was persistent in studying these results and eventually isolated cadmium metal by roasting and reducing the sulfide. The potential for cadmium yellow as pigment was recognized in the 1840s, but the lack of cadmium limited this application. Even though cadmium and its compounds are toxic in certain forms and concentrations, the British Pharmaceutical Codex from 1907 states that cadmium iodide was used as a medication to treat "enlarged joints, scrofulous glands, and chilblains". In 1907, the International Astronomical Union defined the international ångström in terms of a red cadmium spectral line (1 wavelength = 6438.46963 Å). This was adopted by the 7th General Conference on Weights and Measures in 1927. In 1960, the definitions of both the metre and ångström were changed to use krypton. After the industrial scale production of cadmium started in the 1930s and 1940s, the major application of cadmium was the coating of iron and steel to prevent corrosion; in 1944, 62% and in 1956, 59% of the cadmium in the United States was used for plating. In 1956, 24% of the cadmium in the United States was used for a second application in red, orange and yellow pigments from sulfides and selenides of cadmium. 
The stabilizing effect of cadmium chemicals like the carboxylates cadmium laurate and cadmium stearate on PVC led to an increased use of those compounds in the 1970s and 1980s. The demand for cadmium in pigments, coatings, stabilizers, and alloys declined as a result of environmental and health regulations in the 1980s and 1990s; in 2006, only 7% of total cadmium consumption was used for plating, and only 10% was used for pigments. At the same time, these decreases in consumption were compensated by a growing demand for cadmium for nickel–cadmium batteries, which accounted for 81% of the cadmium consumption in the United States in 2006. Occurrence Cadmium makes up about 0.1 ppm of Earth's crust. It is much rarer than zinc, which makes up about 65 ppm. No significant deposits of cadmium-containing ores are known. The only cadmium mineral of importance, greenockite (CdS), is nearly always associated with sphalerite (ZnS). This association is caused by geochemical similarity between zinc and cadmium, with no geological process likely to separate them. Thus, cadmium is produced mainly as a byproduct of mining, smelting, and refining sulfidic ores of zinc, and, to a lesser degree, lead and copper. Small amounts of cadmium, about 10% of consumption, are produced from secondary sources, mainly from dust generated by recycling iron and steel scrap. Production in the United States began in 1907, but wide use began after World War I. Metallic cadmium can be found in the Vilyuy River basin in Siberia. Rocks mined for phosphate fertilizers contain varying amounts of cadmium, resulting in a cadmium concentration of as much as 300 mg/kg in the fertilizers and a high cadmium content in agricultural soils. Coal can contain significant amounts of cadmium, which ends up mostly in coal fly ash. Cadmium in soil can be absorbed by crops such as rice and cocoa. The Chinese Ministry of Agriculture measured in 2002 that 28% of rice it sampled had excess lead and 10% had excess cadmium above limits defined by law. Consumer Reports tested 28 brands of dark chocolate sold in the United States in 2022, and found cadmium in all of them, with 13 exceeding the California Maximum Allowable Dose level. Some plants such as willow trees and poplars have been found to clean both lead and cadmium from soil. Typical background concentrations of cadmium do not exceed 5 ng/m3 in the atmosphere; 2 mg/kg in soil; 1 μg/L in freshwater and 50 ng/L in seawater. Concentrations of cadmium above 10 μg/L may be stable in water having low total solute concentrations and pH and can be difficult to remove by conventional water treatment processes. Production Cadmium is a common impurity in zinc ores, and it is most often isolated during the production of zinc. Some zinc ore concentrates from sulfidic zinc ores contain up to 1.4% cadmium. In the 1970s, the output of cadmium was per ton of zinc. Zinc sulfide ores are roasted in the presence of oxygen, converting the zinc sulfide to the oxide. Zinc metal is produced either by smelting the oxide with carbon or by electrolysis in sulfuric acid. Cadmium is isolated from the zinc metal by vacuum distillation if the zinc is smelted, or cadmium sulfate is precipitated from the electrolysis solution. The British Geological Survey reports that in 2001, China was the top producer of cadmium with almost one-sixth of the world's production, closely followed by South Korea and Japan. Applications Cadmium is a common component of electric batteries, pigments, coatings, and electroplating. 
Batteries In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel–cadmium batteries. Nickel–cadmium cells have a nominal cell potential of 1.2 V. The cell consists of a positive nickel hydroxide electrode and a negative cadmium electrode plate separated by an alkaline electrolyte (potassium hydroxide). The European Union put a limit on cadmium in electronics in 2004 of 0.01%, with some exceptions, and in 2006 reduced the limit on cadmium content to 0.002%. Another type of battery based on cadmium is the silver–cadmium battery. Electroplating Cadmium electroplating, consuming 6% of the global production, is used in the aircraft industry to reduce corrosion of steel components. This coating is passivated by chromate salts. A limitation of cadmium plating is hydrogen embrittlement of high-strength steels from the electroplating process. Therefore, steel parts heat-treated to tensile strength above 1300 MPa (200 ksi) should be coated by an alternative method (such as special low-embrittlement cadmium electroplating processes or physical vapor deposition). Titanium embrittlement from cadmium-plated tool residues resulted in banishment of those tools (and the implementation of routine tool testing to detect cadmium contamination) in the A-12/SR-71, U-2, and subsequent aircraft programs that use titanium. Nuclear fission Cadmium is used in the control rods of nuclear reactors, acting as a very effective neutron poison to control neutron flux in nuclear fission. When cadmium rods are inserted in the core of a nuclear reactor, cadmium absorbs neutrons, preventing them from creating additional fission events, thus controlling the amount of reactivity. The pressurized water reactor designed by Westinghouse Electric Company uses an alloy consisting of 80% silver, 15% indium, and 5% cadmium. Televisions QLED TVs have been starting to include cadmium in construction. Some companies have been looking to reduce the environmental impact of human exposure and pollution of the material in televisions during production. Anticancer drugs Complexes based on heavy metals have great potential for the treatment of a wide variety of cancers but their use is often limited due to toxic side effects. However, scientists are advancing in the field and new promising cadmium complex compounds with reduced toxicity have been discovered. Compounds Cadmium oxide was used in black and white television phosphors and in the blue and green phosphors of color television cathode ray tubes. Cadmium sulfide (CdS) is used as a photoconductive surface coating for photocopier drums. Various cadmium salts are used in paint pigments, with CdS as a yellow pigment being the most common. Cadmium selenide is a red pigment, commonly called cadmium red. To painters who work with the pigment, cadmium provides the most brilliant and durable yellows, oranges, and reds – so much so that during production, these colors are significantly toned down before they are ground with oils and binders or blended into watercolors, gouaches, acrylics, and other paint and pigment formulations. Because these pigments are potentially toxic, users should use a barrier cream on the hands to prevent absorption through the skin even though the amount of cadmium absorbed into the body through the skin is reported to be less than 1%. In PVC, cadmium was used as heat, light, and weathering stabilizers. Currently, cadmium stabilizers have been completely replaced with barium-zinc, calcium-zinc and organo-tin stabilizers. 
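The nickel–cadmium cell chemistry described under Batteries above can be sketched with the standard textbook discharge half-reactions (a simplified summary, not taken from this article's sources): at the negative electrode Cd + 2OH^- -> Cd(OH)2 + 2e^-, at the positive electrode 2NiO(OH) + 2H2O + 2e^- -> 2Ni(OH)2 + 2OH^-, giving the overall reaction Cd + 2NiO(OH) + 2H2O -> Cd(OH)2 + 2Ni(OH)2 with a cell potential of about 1.2 V; the reactions run in reverse on charging.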
Cadmium is used in many kinds of solder and bearing alloys, because it has a low coefficient of friction and good fatigue resistance. It is also found in some of the lowest-melting alloys, such as Wood's metal. Semiconductors Cadmium is an element in some semiconductor materials. Cadmium sulfide, cadmium selenide, and cadmium telluride are used in some photodetectors and solar cells. HgCdTe detectors are sensitive to mid-infrared light and used in some motion detectors. Laboratory uses Helium–cadmium lasers are a common source of blue or ultraviolet laser light. Lasers at wavelengths of 325, 354 and 442 nm are made using this gain medium; some models can switch between these wavelengths. They are notably used in fluorescence microscopy as well as in various laboratory uses requiring laser light at these wavelengths. Cadmium selenide quantum dots emit bright luminescence under UV excitation (He–Cd laser, for example). The color of this luminescence can be green, yellow or red depending on the particle size. Colloidal solutions of those particles are used for imaging of biological tissues and solutions with a fluorescence microscope. In molecular biology, cadmium is used to block voltage-dependent calcium channels from fluxing calcium ions, as well as in hypoxia research to stimulate proteasome-dependent degradation of Hif-1α. Cadmium-selective sensors based on the fluorophore BODIPY have been developed for imaging and sensing of cadmium in cells. One powerful method for monitoring cadmium in aqueous environments involves electrochemistry. By employing a self-assembled monolayer one can obtain a cadmium-selective electrode with ppt-level sensitivity. Biological role and research Cadmium has no known function in higher organisms and is considered toxic. Cadmium is considered an environmental pollutant that causes health hazards to living organisms. Administration of cadmium to cells causes oxidative stress and increases the levels of antioxidants produced by cells to protect against macromolecular damage. However, a cadmium-dependent carbonic anhydrase has been found in some marine diatoms. The diatoms live in environments with very low zinc concentrations and cadmium performs the function normally carried out by zinc in other anhydrases. This was discovered with X-ray absorption near edge structure (XANES) spectroscopy. Cadmium is preferentially absorbed in the kidneys of humans. Up to about 30 mg of cadmium is commonly inhaled throughout human childhood and adolescence. Cadmium is under research regarding its toxicity in humans, potentially elevating risks of cancer, cardiovascular disease, and osteoporosis. Environment The biogeochemistry of cadmium and its release to the environment has been the subject of review, as has the speciation of cadmium in the environment. Safety Individuals and organizations have been reviewing cadmium's bioinorganic aspects for its toxicity. The most dangerous form of occupational exposure to cadmium is inhalation of fine dust and fumes, or ingestion of highly soluble cadmium compounds. Inhalation of cadmium fumes can result initially in metal fume fever, but may progress to chemical pneumonitis, pulmonary edema, and death. Cadmium is also an environmental hazard. Human exposure is primarily from fossil fuel combustion, phosphate fertilizers, natural sources, iron and steel production, cement production and related activities, nonferrous metals production, and municipal solid waste incineration. Other sources of cadmium include bread, root crops, and vegetables. 
There have been a few instances of general population poisoning as the result of long-term exposure to cadmium in contaminated food and water. Research into estrogen mimicry that may induce breast cancer is ongoing. In the decades leading up to World War II, mining operations contaminated the Jinzū River in Japan with cadmium and traces of other toxic metals. As a consequence, cadmium accumulated in the rice crops along the riverbanks downstream of the mines. Some members of the local agricultural communities consumed the contaminated rice and developed itai-itai disease and renal abnormalities, including proteinuria and glucosuria. The victims of this poisoning were almost exclusively post-menopausal women with low iron and low body stores of other minerals. Similar general population cadmium exposures in other parts of the world have not resulted in the same health problems because the populations maintained sufficient iron and other mineral levels. Thus, although cadmium is a major factor in the itai-itai disease in Japan, most researchers have concluded that it was one of several factors. Cadmium is one of six substances banned by the European Union's Restriction of Hazardous Substances (RoHS) directive, which regulates hazardous substances in electrical and electronic equipment, but allows for certain exemptions and exclusions from the scope of the law. The International Agency for Research on Cancer has classified cadmium and cadmium compounds as carcinogenic to humans. Although occupational exposure to cadmium is linked to lung and prostate cancer, there is still uncertainty about the carcinogenicity of cadmium at low environmental exposure levels. Recent data from epidemiological studies suggest that intake of cadmium through diet is associated with a higher risk of endometrial, breast, and prostate cancer as well as with osteoporosis in humans. A recent study has demonstrated that endometrial tissue is characterized by higher levels of cadmium in current and former female smokers. Cadmium exposure is associated with a large number of illnesses including kidney disease, early atherosclerosis, hypertension, and cardiovascular diseases. Although studies show a significant correlation between cadmium exposure and occurrence of disease in human populations, a molecular mechanism has not yet been identified. One hypothesis holds that cadmium is an endocrine disruptor and some experimental studies have shown that it can interact with different hormonal signaling pathways. For example, cadmium can bind to the estrogen receptor alpha, and affect signal transduction along the estrogen and MAPK signaling pathways at low doses. The tobacco plant absorbs and accumulates heavy metals such as cadmium from the surrounding soil into its leaves. Following tobacco smoke inhalation, these are readily absorbed into the body of users. Tobacco smoking is the most important single source of cadmium exposure in the general population. An estimated 10% of the cadmium content of a cigarette is inhaled through smoking. Absorption of cadmium through the lungs is more effective than through the gut. As much as 50% of the cadmium inhaled in cigarette smoke may be absorbed. On average, cadmium concentrations in the blood of smokers are 4 to 5 times greater than in non-smokers, and concentrations in the kidney are 2–3 times greater than in non-smokers. Despite the high cadmium content in cigarette smoke, there seems to be little exposure to cadmium from passive smoking. 
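A rough back-of-the-envelope sketch of the smoking figures above; the cadmium content per cigarette (about 1–2 μg) and the pack-a-day consumption are assumed illustrative values, not figures from this article:

# Rough daily cadmium uptake from smoking, combining the article's figures
# ("about 10% of a cigarette's cadmium is inhaled"; "as much as 50% of inhaled
# cadmium may be absorbed") with an assumed cadmium content per cigarette.
cd_per_cigarette_ug = (1.0, 2.0)  # assumed illustrative range, micrograms of Cd per cigarette
cigarettes_per_day = 20           # assumed pack-a-day smoker
inhaled_fraction = 0.10           # from the article
absorbed_fraction = 0.50          # upper bound from the article
for cd in cd_per_cigarette_ug:
    inhaled = cd * cigarettes_per_day * inhaled_fraction
    absorbed = inhaled * absorbed_fraction
    print(f"{cd:.0f} ug/cigarette: ~{inhaled:.1f} ug inhaled, up to ~{absorbed:.1f} ug absorbed per day")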
In a non-smoking population, food is the greatest source of exposure. High quantities of cadmium can be found in crustaceans, mollusks, offal, frog legs, cocoa solids, bitter and semi-bitter chocolate, seaweed, fungi and algae products. However, grains, vegetables, and starchy roots and tubers are consumed in much greater quantity in the U.S., and are the source of the greatest dietary exposure there. Most plants bio-accumulate metal toxins such as cadmium and, when composted to form organic fertilizers, yield a product that often can contain high amounts (e.g., over 0.5 mg) of metal toxins for every kilogram of fertilizer. Fertilizers made from animal dung (e.g., cow dung) or urban waste can contain similar amounts of cadmium. The cadmium added to the soil from fertilizers (rock phosphates or organic fertilizers) becomes bioavailable and toxic only if the soil pH is low (i.e., acidic soils). Zinc, copper, calcium, and iron ions, and selenium with vitamin C are used to treat cadmium intoxication, though it is not easily reversed. Regulations Because of the adverse effects of cadmium on the environment and human health, the supply and use of cadmium is restricted in Europe under the REACH Regulation. The EFSA Panel on Contaminants in the Food Chain specifies that 2.5 μg/kg body weight is a tolerable weekly intake for humans. The Joint FAO/WHO Expert Committee on Food Additives has declared 7 μg/kg body weight to be the provisional tolerable weekly intake level. The state of California requires a food label to carry a warning about potential exposure to cadmium on products such as cocoa powder. The U.S. Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit (PEL) for cadmium at a time-weighted average (TWA) of 0.005 mg/m3. The National Institute for Occupational Safety and Health (NIOSH) has not set a recommended exposure limit (REL) and has designated cadmium as a known human carcinogen. The IDLH (immediately dangerous to life and health) level for cadmium is 9 mg/m3. In addition to mercury, the presence of cadmium in some batteries has led to the requirement of proper disposal (or recycling) of batteries. Product recalls In May 2006, a sale of the seats from Arsenal F.C.'s old stadium, Highbury in London, England, was cancelled when the seats were discovered to contain trace amounts of cadmium. Reports of high levels of cadmium use in children's jewelry in 2010 led to a US Consumer Product Safety Commission investigation. The U.S. CPSC issued specific recall notices for cadmium content in jewelry sold by Claire's and Wal-Mart stores. In June 2010, McDonald's voluntarily recalled more than 12 million promotional Shrek Forever After 3D Collectible Drinking Glasses because of the cadmium levels in paint pigments on the glassware. The glasses were manufactured by Arc International, of Millville, New Jersey, USA. See also Red List building materials Toxic heavy metal References Further reading External links Cadmium at The Periodic Table of Videos (University of Nottingham) ATSDR Case Studies in Environmental Medicine: Cadmium Toxicity U.S. Department of Health and Human Services National Institute for Occupational Safety and Health – Cadmium Page NLM Hazardous Substances Databank – Cadmium, Elemental Chemical elements Transition metals Endocrine disruptors IARC Group 1 carcinogens Chemical hazards Soil contamination Testicular toxicants Native element minerals Chemical elements with hexagonal close-packed structure
5675
https://en.wikipedia.org/wiki/Curium
Curium
Curium is a transuranic, radioactive chemical element with the symbol Cm and atomic number 96. This actinide element was named after eminent scientists Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally made by the team of Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso in 1944, using the cyclotron at Berkeley. They bombarded the newly discovered element plutonium (the isotope 239Pu) with alpha particles. This was then sent to the Metallurgical Laboratory at the University of Chicago where a tiny sample of curium was eventually separated and identified. The discovery was kept secret until after the end of World War II. The news was released to the public in November 1947. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains ~20 grams of curium. Curium is a hard, dense, silvery metal with a high melting and boiling point for an actinide. It is paramagnetic at ambient conditions, but becomes antiferromagnetic upon cooling, and other magnetic transitions are also seen in many curium compounds. In compounds, curium usually has valence +3 and sometimes +4; the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element. It forms strongly fluorescent complexes with various organic compounds, but there is no evidence of its incorporation into bacteria and archaea. If it gets into the human body, curium accumulates in bones, lungs, and liver, where it promotes cancer. All known isotopes of curium are radioactive and have a small critical mass for a nuclear chain reaction. They mostly emit α-particles; radioisotope thermoelectric generators can use the heat from this process, but this is hindered by the rarity and high cost of curium. Curium is used in making heavier actinides and the 238Pu radionuclide for power sources in artificial cardiac pacemakers and RTGs for spacecraft. It served as the α-source in the alpha particle X-ray spectrometers of several space probes, including the Sojourner, Spirit, Opportunity, and Curiosity Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface. History Though curium had likely been produced in previous nuclear experiments as well as the natural nuclear fission reactor at Oklo, Gabon, it was first intentionally synthesized, isolated and identified in 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso. In their experiments, they used a cyclotron. Curium was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory), University of Chicago. It was the third transuranium element to be discovered even though it is the fourth in the series – the lighter element americium was still unknown. The sample was prepared as follows: first plutonium nitrate solution was coated on a platinum foil of ~0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium(IV) oxide (PuO2) by annealing. Following cyclotron irradiation of the oxide, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was done by ion exchange to yield a certain isotope of curium. 
The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness). Curium-242 was made in July–August 1944 by bombarding 239Pu with α-particles to produce curium with the release of a neutron: ^{239}_{94}Pu + ^{4}_{2}He -> ^{242}_{96}Cm + ^{1}_{0}n Curium-242 was unambiguously identified by the characteristic energy of the α-particles emitted during the decay: ^{242}_{96}Cm -> ^{238}_{94}Pu + ^{4}_{2}He The half-life of this alpha decay was first measured as 150 days and then corrected to 162.8 days. Another isotope 240Cm was produced in a similar reaction in March 1945: ^{239}_{94}Pu + ^{4}_{2}He -> ^{240}_{96}Cm + 3^{1}_{0}n The α-decay half-life of 240Cm was correctly determined as 26.7 days. The discovery of curium and americium in 1944 was closely related to the Manhattan Project, so the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children, the Quiz Kids, five days before the official presentation at an American Chemical Society meeting on November 11, 1945, when one listener asked if any new transuranic element beside plutonium and neptunium had been discovered during the war. The discovery of curium (242Cm and 240Cm), its production, and its compounds was later patented listing only Seaborg as the inventor. The element was named after Marie Curie and her husband Pierre Curie, who are known for discovering radium and for their work in radioactivity. It followed the example of gadolinium, a lanthanide element above curium in the periodic table, which was named after the explorer of rare-earth elements Johan Gadolin: "As the name for the element of atomic number 96 we should like to propose "curium", with symbol Cm. The evidence indicates that element 96 contains seven 5f electrons and is thus analogous to the element gadolinium, with its seven 4f electrons in the regular rare earth series. On this basis element 96 is named after the Curies in a manner analogous to the naming of gadolinium, in which the chemist Gadolin was honored." The first curium samples were barely visible, and were identified by their radioactivity. Louis Werner and Isadore Perlman made the first substantial sample of 30 µg curium-242 hydroxide at University of California, Berkeley in 1947 by bombarding americium-241 with neutrons. Macroscopic amounts of curium(III) fluoride were obtained in 1950 by W. W. T. Crane, J. C. Wallmann and B. B. Cunningham. Its magnetic susceptibility was very close to that of GdF3 providing the first experimental evidence for the +3 valence of curium in its compounds. Curium metal was produced only in 1951 by reduction of CmF3 with barium. Characteristics Physical A synthetic, radioactive element, curium is a hard, dense metal with a silvery-white appearance and physical and chemical properties resembling gadolinium. Its melting point of 1344 °C is significantly higher than that of the previous elements neptunium (637 °C), plutonium (639 °C) and americium (1176 °C). In comparison, gadolinium melts at 1312 °C. Curium boils at 3556 °C. With a density of 13.52 g/cm3, curium is lighter than neptunium (20.45 g/cm3) and plutonium (19.8 g/cm3), but heavier than most other metals. Of two crystalline forms of curium, α-Cm is more stable at ambient conditions. 
It has a hexagonal symmetry, space group P63/mmc, lattice parameters a = 365 pm and c = 1182 pm, and four formula units per unit cell. The crystal consists of double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum. At pressure >23 GPa, at room temperature, α-Cm becomes β-Cm, which has face-centered cubic symmetry, space group Fmm and lattice constant a = 493 pm. On further compression to 43 GPa, curium becomes an orthorhombic γ-Cm structure similar to α-uranium, with no further transitions observed up to 52 GPa. These three curium phases are also called Cm I, II and III. Curium has peculiar magnetic properties. Its neighbor element americium shows no deviation from Curie-Weiss paramagnetism in the entire temperature range, but α-Cm transforms to an antiferromagnetic state upon cooling to 65–52 K, and β-Cm exhibits a ferrimagnetic transition at ~205 K. Curium pnictides show ferromagnetic transitions upon cooling: 244CmN and 244CmAs at 109 K, 248CmP at 73 K and 248CmSb at 162 K. The lanthanide analog of curium, gadolinium, and its pnictides, also show magnetic transitions upon cooling, but the transition character is somewhat different: Gd and GdN become ferromagnetic, and GdP, GdAs and GdSb show antiferromagnetic ordering. In accordance with magnetic data, electrical resistivity of curium increases with temperature – about twice between 4 and 60 K – and then is nearly constant up to room temperature. There is a significant increase in resistivity over time (~) due to self-damage of the crystal lattice by alpha decay. This makes uncertain the true resistivity of curium (~). Curium's resistivity is similar to that of gadolinium, and the actinides plutonium and neptunium, but significantly higher than that of americium, uranium, polonium and thorium. Under ultraviolet illumination, curium(III) ions show strong and stable yellow-orange fluorescence with a maximum in the range of 590–640 nm depending on their environment. The fluorescence originates from the transitions from the first excited state 6D7/2 and the ground state 8S7/2. Analysis of this fluorescence allows monitoring interactions between Cm(III) ions in organic and inorganic complexes. Chemical Curium ion in solution almost always has a +3 oxidation state, the most stable oxidation state for curium. A +4 oxidation state is seen mainly in a few solid phases, such as CmO2 and CmF4. Aqueous curium(IV) is only known in the presence of strong oxidizers such as potassium persulfate, and is easily reduced to curium(III) by radiolysis and even by water itself. Chemical behavior of curium is different from the actinides thorium and uranium, and is similar to americium and many lanthanides. In aqueous solution, the Cm3+ ion is colorless to pale green; Cm4+ ion is pale yellow. The optical absorption of Cm3+ ion contains three sharp peaks at 375.4, 381.2 and 396.5 nm and their strength can be directly converted into the concentration of the ions. The +6 oxidation state has only been reported once in solution in 1978, as the curyl ion (): this was prepared from beta decay of americium-242 in the americium(V) ion . Failure to get Cm(VI) from oxidation of Cm(III) and Cm(IV) may be due to the high Cm4+/Cm3+ ionization potential and the instability of Cm(V). Curium ions are hard Lewis acids and thus form most stable complexes with hard bases. The bonding is mostly ionic, with a small covalent component. 
Curium in its complexes commonly exhibits a 9-fold coordination environment, with a tricapped trigonal prismatic molecular geometry. Isotopes About 19 radioisotopes and 7 nuclear isomers, 233Cm to 251Cm, are known; none are stable. The longest half-lives are 15.6 million years (247Cm) and 348,000 years (248Cm). Other long-lived ones are 245Cm (8500 years), 250Cm (8300 years) and 246Cm (4760 years). Curium-250 is unusual: it mostly (~86%) decays by spontaneous fission. The most commonly used isotopes are 242Cm and 244Cm with the half-lives 162.8 days and 18.1 years, respectively. All isotopes 242Cm-248Cm, and 250Cm, undergo a self-sustaining nuclear chain reaction and thus in principle can be a nuclear fuel in a reactor. As in most transuranic elements, nuclear fission cross section is especially high for the odd-mass curium isotopes 243Cm, 245Cm and 247Cm. These can be used in thermal-neutron reactors, whereas a mixture of curium isotopes is only suitable for fast breeder reactors since the even-mass isotopes are not fissile in a thermal reactor and accumulate as burn-up increases. The mixed-oxide (MOX) fuel, which is to be used in power reactors, should contain little or no curium because neutron activation of 248Cm will create californium. Californium is a strong neutron emitter, and would pollute the back end of the fuel cycle and increase the dose to reactor personnel. Hence, if minor actinides are to be used as fuel in a thermal neutron reactor, the curium should be excluded from the fuel or placed in special fuel rods where it is the only actinide present. The adjacent table lists the critical masses for curium isotopes for a sphere, without moderator or reflector. With a metal reflector (30 cm of steel), the critical masses of the odd isotopes are about 3–4 kg. When using water (thickness ~20–30 cm) as the reflector, the critical mass can be as small as 59 gram for 245Cm, 155 gram for 243Cm and 1550 gram for 247Cm. There is significant uncertainty in these critical mass values. While it is usually on the order of 20%, the values for 242Cm and 246Cm were listed as large as 371 kg and 70.1 kg, respectively, by some research groups. Curium is not currently used as nuclear fuel due to its low availability and high price. 245Cm and 247Cm have very small critical mass and so could be used in tactical nuclear weapons, but none are known to have been made. Curium-243 is not suitable for such, due to its short half-life and strong α emission, which would cause excessive heat. Curium-247 would be highly suitable due to its long half-life, which is 647 times longer than plutonium-239 (used in many existing nuclear weapons). Occurrence The longest-lived isotope, 247Cm, has half-life 15.6 million years; so any primordial curium, that is, present on Earth when it formed, should have decayed by now. Its past presence as an extinct radionuclide is detectable as an excess of its primordial, long-lived daughter 235U. Traces of curium may occur naturally in uranium minerals due to neutron capture and beta decay, though this has not been confirmed. Traces of 247Cm are also probably brought to Earth in cosmic rays, but again this has not been confirmed. Curium is made artificially in small amounts for research purposes. It also occurs as one of the waste products in spent nuclear fuel. Curium is present in nature in some areas used for nuclear weapons testing. 
Analysis of the debris at the test site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), besides einsteinium, fermium, plutonium and americium also revealed isotopes of berkelium, californium and curium, in particular 245Cm, 246Cm and smaller quantities of 247Cm, 248Cm and 249Cm. Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 4,000 times higher concentration of curium at the sandy soil particles than in water present in the soil pores. An even higher ratio of about 18,000 was measured in loam soils. The transuranium elements from americium to fermium, including curium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Curium, and other non-primordial actinides, have also been suspected to exist in the spectrum of Przybylski's Star. Synthesis Isotope preparation Curium is made in small amounts in nuclear reactors, and by now only kilograms of 242Cm and 244Cm have been accumulated, and grams or even milligrams for heavier isotopes. Hence the high price of curium, which has been quoted at 160–185 USD per milligram, with a more recent estimate at US$2,000/g for 242Cm and US$170/g for 244Cm. In nuclear reactors, curium is formed from 238U in a series of nuclear reactions. In the first chain, 238U captures a neutron and converts into 239U, which via β− decay transforms into 239Np and 239Pu. Further neutron capture followed by β−-decay gives americium (241Am) which further becomes 242Cm: ^{238}_{92}U ->[(n,\gamma)] ^{239}_{92}U ->[\beta^-] ^{239}_{93}Np ->[\beta^-] ^{239}_{94}Pu ->[2(n,\gamma)] ^{241}_{94}Pu ->[\beta^-] ^{241}_{95}Am ->[(n,\gamma)] ^{242}_{95}Am ->[\beta^-] ^{242}_{96}Cm For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation, which results in a different reaction chain and the formation of 244Cm: ^{239}_{94}Pu ->[4(n,\gamma)] ^{243}_{94}Pu ->[\beta^-] ^{243}_{95}Am ->[(n,\gamma)] ^{244}_{95}Am ->[\beta^-] ^{244}_{96}Cm Curium-244 alpha decays to 240Pu, but it also absorbs neutrons, hence a small amount of heavier curium isotopes. Of those, 247Cm and 248Cm are popular in scientific research due to their long half-lives. But the production rate of 247Cm in thermal neutron reactors is low because it is prone to fission due to thermal neutrons. Synthesis of 250Cm by neutron capture is unlikely due to the short half-life of the intermediate 249Cm (64 min), which β− decays to the berkelium isotope 249Bk. The above cascade of (n,γ) reactions gives a mix of different curium isotopes. Their post-synthesis separation is cumbersome, so a selective synthesis is desired. Curium-248 is favored for research purposes due to its long half-life. The most efficient way to prepare this isotope is by α-decay of the californium isotope 252Cf, which is available in relatively large amounts due to its long half-life (2.65 years). About 35–50 mg of 248Cm is produced this way per year. The associated reaction produces 248Cm with isotopic purity of 97%. Another isotope, 245Cm, can be obtained for research from α-decay of 249Cf; the latter isotope is produced in small amounts from β−-decay of 249Bk. Metal preparation Most synthesis routines yield a mix of actinide isotopes as oxides, from which a given isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium – URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. 
The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A bis-triazinyl bipyridine complex has recently been proposed as such a reagent, as it is highly selective for curium. Separation of curium from the very chemically similar americium can also be done by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; americium oxidizes to soluble Am(IV) complexes, but curium stays unchanged and so can be isolated by repeated centrifugation. Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was done in an environment free of water and oxygen, in an apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents. Another possibility is reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride. Compounds and reactions Oxides Curium readily reacts with oxygen forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate, nitrate, or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3: 4CmO2 ->[\Delta T] 2Cm2O3 + O2. Or, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen: 2CmO2 + H2 -> Cm2O3 + H2O Also, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium. Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to give a volatile form of CmO2 and the volatile trioxide CmO3, one of two known examples of the very rare +6 state for curium. Another observed species was reported to behave similarly to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; but new experiments seem to indicate that CmO4 does not exist, and have cast doubt on the existence of PuO4 as well. Halides The colorless curium(III) fluoride (CmF3) can be made by adding fluoride ions into curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4) on the other hand is only obtained by reacting curium(III) fluoride with molecular fluorine: 2CmF3 + F2 -> 2CmF4 A series of ternary fluorides are known of the form A7Cm6F31 (A = alkali metal). The colorless curium(III) chloride (CmCl3) is made by reacting curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can be further turned into other halides such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonia salt of the corresponding halide at temperatures of ~400–450°C: CmCl3 + 3NH4I -> CmI3 + 3NH4Cl Or, one can heat curium oxide to ~600°C with the corresponding acid (such as hydrobromic for curium bromide). Vapor phase hydrolysis of curium(III) chloride gives curium oxychloride: CmCl3 + H2O -> CmOCl + 2HCl Chalcogenides and pnictides Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. Curium pnictides of the type CmX are known for nitrogen, phosphorus, arsenic and antimony. 
They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperature. Organocurium compounds and biological aspects Organometallic complexes analogous to uranocene are also known for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not yet been reported experimentally. Formation of BTP-type complexes (BTP = 2,6-di(1,2,4-triazin-3-yl)pyridine) in solutions containing n-C3H7-BTP and Cm3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and thus are useful for separating it from lanthanides and other actinides. Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to the biological activity of various microorganisms. The resulting complexes show strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying interactions between the Cm3+ ion and the ligands via changes in the half-life (of the order of ~0.1 ms) and spectrum of the fluorescence. Curium has no biological significance. There are a few reports on biosorption of Cm3+ by bacteria and archaea, but no evidence for incorporation of curium into them. Applications Radionuclides Curium is one of the most radioactive isolable elements. Its two most common isotopes, 242Cm and 244Cm, are strong alpha emitters (energy 6 MeV); they have fairly short half-lives, 162.8 days and 18.1 years, and give as much as 120 W/g and 3 W/g of heat, respectively. Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the 244Cm isotope, while 242Cm was abandoned due to its prohibitive price, around 2000 USD/g. 243Cm, with a ~30-year half-life and good energy yield of ~1.6 W/g, could be a suitable fuel, but it gives significant amounts of harmful gamma and beta rays from radioactive decay products. As an α-emitter, 244Cm needs much less radiation shielding, but it has a high spontaneous fission rate, and thus considerable neutron and gamma radiation. Compared to a competing thermoelectric generator isotope such as 238Pu, 244Cm emits 500 times more neutrons, and its higher gamma emission requires a shield that is 20 times thicker— of lead for a 1 kW source, compared to for 238Pu. Therefore, this use of curium is currently considered impractical. A more promising use of 242Cm is for making 238Pu, a better radioisotope for thermoelectric generators such as in heart pacemakers. The alternate routes to 238Pu use the (n,γ) reaction of 237Np, or deuteron bombardment of uranium, though both reactions always produce 236Pu as an undesired by-product, since the latter decays to 232U with strong gamma emission. Curium is a common starting material for making higher transuranic and superheavy elements. Thus, bombarding 248Cm with neon (22Ne), magnesium (26Mg), or calcium (48Ca) yields isotopes of seaborgium (265Sg), hassium (269Hs and 270Hs), and livermorium (292Lv, 293Lv, and possibly 294Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the cyclotron at Berkeley: 242Cm + 4He → 245Cf + 1n. Only about 5,000 atoms of californium were produced in this experiment. 
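As a rough check on the heat outputs quoted above, the specific power of an alpha emitter follows from its half-life and decay energy as P = (ln 2 / t½)(N_A/M)E_α; taking E_α ≈ 6 MeV (≈ 9.6 × 10⁻¹³ J) for both isotopes, as stated above, reproduces the quoted figures of roughly 120 W/g and 3 W/g.

```latex
% Specific power from half-life and alpha energy: P = (ln 2 / t_{1/2}) (N_A / M) E_\alpha
P(^{242}\mathrm{Cm}) \approx \frac{\ln 2}{162.8 \times 86400\ \mathrm{s}}
  \cdot \frac{6.022\times10^{23}}{242\ \mathrm{g}} \cdot 9.6\times10^{-13}\ \mathrm{J}
  \approx 1.2\times10^{2}\ \mathrm{W/g}

P(^{244}\mathrm{Cm}) \approx \frac{\ln 2}{18.1 \times 3.156\times10^{7}\ \mathrm{s}}
  \cdot \frac{6.022\times10^{23}}{244\ \mathrm{g}} \cdot 9.6\times10^{-13}\ \mathrm{J}
  \approx 2.9\ \mathrm{W/g}
```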
The odd-mass curium isotopes 243Cm, 245Cm, and 247Cm are all highly fissile and can release additional energy in a thermal spectrum nuclear reactor. All curium isotopes are fissionable in fast-neutron reactors. This is one of the motives for minor actinide separation and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used, or spent nuclear fuel. X-ray spectrometer The most practical application of 244Cm—though rather limited in total volume—is as α-particle source in alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner, Mars, Mars 96, Mars Exploration Rovers and Philae comet lander, as well as the Mars Science Laboratory to analyze the composition and structure of the rocks on the surface of planet Mars. APXS was also used in the Surveyor 5–7 moon probes but with a 242Cm source. An elaborate APXS setup has a sensor head containing six curium sources with a total decay rate of several tens of millicuries (roughly one gigabecquerel). The sources are collimated on a sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (proton analysis is done only in some spectrometers). These spectra contain quantitative information on all major elements in the sample except for hydrogen, helium and lithium. Safety Due to its radioactivity, curium and its compounds must be handled in appropriate labs under special arrangements. While curium itself mostly emits α-particles which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma rays, which require a more elaborate protection. If consumed, curium is excreted within a few days and only 0.05% is absorbed in the blood. From there, ~45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. In bone, curium accumulates on the inside of the interfaces to the bone marrow and does not significantly redistribute with time; its radiation destroys bone marrow and thus stops red blood cell creation. The biological half-life of curium is about 20 years in the liver and 50 years in the bones. Curium is absorbed in the body much more strongly via inhalation, and the allowed total dose of 244Cm in soluble form is 0.3 μCi. Intravenous injection of 242Cm- and 244Cm-containing solutions to rats increased the incidence of bone tumor, and inhalation promoted lung and liver cancer. Curium isotopes are inevitably present in spent nuclear fuel (about 20 g/tonne). The isotopes 245Cm–248Cm have decay times of thousands of years and must be removed to neutralize the fuel for disposal. Such a procedure involves several steps, where curium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure, nuclear transmutation, while well documented for other elements, is still being developed for curium. References Bibliography Holleman, Arnold F. and Wiberg, Nils Lehrbuch der Anorganischen Chemie, 102 Edition, de Gruyter, Berlin 2007, . Penneman, R. A. and Keenan T. K. The radiochemistry of americium and curium, University of California, Los Alamos, California, 1960 External links Curium at The Periodic Table of Videos (University of Nottingham) NLM Hazardous Substances Databank – Curium, Radioactive Chemical elements Chemical elements with double hexagonal close-packed structure Actinides American inventions Synthetic elements Marie Curie Pierre Curie
5676
https://en.wikipedia.org/wiki/Californium
Californium
Californium is a radioactive chemical element with the symbol Cf and atomic number 98. The element was first synthesized in 1950 at Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory), by bombarding curium with alpha particles (helium-4 ions). It is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all elements that have been produced in amounts large enough to see with the naked eye (after einsteinium). The element was named after the university and the U.S. state of California. Two crystalline forms exist for californium at normal pressure: one above and one below . A third form exists at high pressure. Californium slowly tarnishes in air at room temperature. Californium compounds are dominated by the +3 oxidation state. The most stable of californium's twenty known isotopes is californium-251, with a half-life of 898 years. This short half-life means the element is not found in significant quantities in the Earth's crust. 252Cf, with a half-life of about 2.645 years, is the most common isotope used and is produced at Oak Ridge National Laboratory in the United States and Research Institute of Atomic Reactors in Russia. Californium is one of the few transuranium elements with practical applications. Most of these applications exploit the property of certain isotopes of californium to emit neutrons. For example, californium can be used to help start up nuclear reactors, and it is employed as a source of neutrons when studying materials using neutron diffraction and neutron spectroscopy. Californium can also be used in nuclear synthesis of higher mass elements; oganesson (element 118) was synthesized by bombarding californium-249 atoms with calcium-48 ions. Users of californium must take into account radiological concerns and the element's ability to disrupt the formation of red blood cells by bioaccumulating in skeletal tissue. Characteristics Physical properties Californium is a silvery-white actinide metal with a melting point of and an estimated boiling point of . The pure metal is malleable and is easily cut with a razor blade. Californium metal starts to vaporize above when exposed to a vacuum. Below californium metal is either ferromagnetic or ferrimagnetic (it acts like a magnet), between 48 and 66 K it is antiferromagnetic (an intermediate state), and above it is paramagnetic (external magnetic fields can make it magnetic). It forms alloys with lanthanide metals but little is known about the resulting materials. The element has two crystalline forms at standard atmospheric pressure: a double-hexagonal close-packed form dubbed alpha (α) and a face-centered cubic form designated beta (β). The α form exists below 600–800 °C with a density of 15.10 g/cm3 and the β form exists above 600–800 °C with a density of 8.74 g/cm3. At 48 GPa of pressure the β form changes into an orthorhombic crystal system due to delocalization of the atom's 5f electrons, which frees them to bond. The bulk modulus of a material is a measure of its resistance to uniform pressure. Californium's bulk modulus is , which is similar to trivalent lanthanide metals but smaller than more familiar metals, such as aluminium (70 GPa). Chemical properties and compounds Californium exhibits oxidation states of 4, 3, or 2. It typically forms eight or nine bonds to surrounding atoms or ions. 
Its chemical properties are predicted to be similar to other primarily 3+ valence actinide elements and the element dysprosium, which is the lanthanide above californium in the periodic table. Compounds in the +4 oxidation state are strong oxidizing agents and those in the +2 state are strong reducing agents. The element slowly tarnishes in air at room temperature, with the rate increasing when moisture is added. Californium reacts when heated with hydrogen, nitrogen, or a chalcogen (oxygen family element); reactions with dry hydrogen and aqueous mineral acids are rapid. Californium is only water-soluble as the californium(III) cation. Attempts to reduce or oxidize the +3 ion in solution have failed. The element forms a water-soluble chloride, nitrate, perchlorate, and sulfate and is precipitated as a fluoride, oxalate, or hydroxide. Californium is the heaviest actinide to exhibit covalent properties, as is observed in the californium borate. Isotopes Twenty isotopes of californium are known (mass numbers ranging from 237 to 256); the most stable are 251Cf with a half-life of 898 years, 249Cf with a half-life of 351 years, 250Cf with a half-life of 13.08 years, and 252Cf with a half-life of 2.645 years. All other isotopes have half-lives shorter than a year, and most of these have half-lives of less than 20 minutes. 249Cf is formed from beta decay of berkelium-249, and most other californium isotopes are made by subjecting berkelium to intense neutron radiation in a nuclear reactor. Though californium-251 has the longest half-life, its production yield is only 10% due to its tendency to collect neutrons (high neutron capture) and its tendency to interact with other particles (high neutron cross section). Californium-252 is a very strong neutron emitter, which makes it extremely radioactive and harmful. 252Cf alpha decays to curium-248 96.9% of the time; the other 3.1% of decays are spontaneous fission. One microgram (μg) of 252Cf emits 2.3 million neutrons per second, an average of 3.7 neutrons per spontaneous fission. Most other isotopes of californium alpha decay to curium (atomic number 96). History Californium was first made at the University of California Radiation Laboratory, Berkeley, by physics researchers Stanley Gerald Thompson, Kenneth Street Jr., Albert Ghiorso, and Glenn T. Seaborg, on or about February 9, 1950. It was the sixth transuranium element to be discovered; the team announced its discovery on March 17, 1950. To produce californium, a microgram-size target of curium-242 was bombarded with 35 MeV alpha particles (helium-4 ions) in the cyclotron at Berkeley, which produced californium-245 plus one free neutron: 242Cm + 4He → 245Cf + 1n. To identify and separate out the element, ion exchange and adsorption methods were undertaken. Only about 5,000 atoms of californium were produced in this experiment, and these atoms had a half-life of 44 minutes. The discoverers named the new element after the university and the state. This was a break from the convention used for elements 95 to 97, which drew inspiration from how the elements directly above them in the periodic table were named. However, the element directly above #98 in the periodic table, dysprosium, has a name that means "hard to get at", so the researchers decided to set aside the informal naming convention. They added that "the best we can do is to point out [that] ... searchers a century ago found it difficult to get to California". 
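The neutron output quoted in the Isotopes section above can be cross-checked from the half-life, the 3.1% spontaneous-fission branch, and the 3.7 neutrons released per fission; the following is a back-of-the-envelope estimate assuming all decays follow these two branches.

```latex
% Neutrons per second from 1 microgram of Cf-252, using the figures quoted above
N = \frac{10^{-6}\ \mathrm{g} \times 6.022\times10^{23}\ \mathrm{mol^{-1}}}{252\ \mathrm{g\,mol^{-1}}}
  \approx 2.4\times10^{15}\ \text{atoms}

\frac{\ln 2}{2.645 \times 3.156\times10^{7}\ \mathrm{s}} \times N \times 0.031 \times 3.7
  \approx 2.3\times10^{6}\ \text{neutrons per second}
```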
Weighable amounts of californium were first produced by the irradiation of plutonium targets at the Materials Testing Reactor at the National Reactor Testing Station in eastern Idaho; these findings were reported in 1954. The high spontaneous fission rate of californium-252 was observed in these samples. The first experiment with californium in concentrated form occurred in 1958. The isotopes 249Cf to 252Cf were isolated that same year from a sample of plutonium-239 that had been irradiated with neutrons in a nuclear reactor for five years. Two years later, in 1960, Burris Cunningham and James Wallman of the Lawrence Radiation Laboratory of the University of California created the first californium compounds—californium trichloride, californium(III) oxychloride, and californium oxide—by treating californium with steam and hydrochloric acid. The High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, started producing small batches of californium in the 1960s. By 1995, HFIR nominally produced of californium annually. Plutonium supplied by the United Kingdom to the United States under the 1958 US–UK Mutual Defence Agreement was used for making californium. The Atomic Energy Commission sold 252Cf to industrial and academic customers in the early 1970s for $10 per microgram, and an average of of 252Cf were shipped each year from 1970 to 1990. Californium metal was first prepared in 1974 by Haire and Baybarz, who reduced californium(III) oxide with lanthanum metal to obtain microgram amounts of sub-micrometer-thick films. Occurrence Traces of californium can be found near facilities that use the element in mineral prospecting and in medical treatments. The element is fairly insoluble in water, but it adheres well to ordinary soil, and concentrations of it in the soil can be 500 times higher than in the water surrounding the soil particles. Nuclear fallout from atmospheric nuclear weapons testing prior to 1980 contributed a small amount of californium to the environment. Californium isotopes with mass numbers 249, 252, 253, and 254 have been observed in the radioactive dust collected from the air after a nuclear explosion. Californium is not a major radionuclide at United States Department of Energy legacy sites since it was not produced in large quantities. Californium was once believed to be produced in supernovas, as the decay of their light output matches the 60-day half-life of 254Cf. However, subsequent studies failed to demonstrate any californium spectra, and supernova light curves are now thought to follow the decay of nickel-56. The transuranium elements from americium to fermium, including californium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Spectral lines of californium, along with those of several other non-primordial elements, were detected in Przybylski's Star in 2008. Production Californium is produced in nuclear reactors and particle accelerators. Californium-250 is made by bombarding berkelium-249 with neutrons, forming berkelium-250 via neutron capture (n,γ), which in turn quickly beta decays (β−) to californium-250 in the following reaction: 249Bk (n,γ) 250Bk → 250Cf + β−. Bombardment of californium-250 with neutrons produces californium-251 and californium-252. Prolonged irradiation of americium, curium, and plutonium with neutrons produces milligram amounts of californium-252 and microgram amounts of californium-249. 
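Sketching the capture/decay ladder described above, continuing from berkelium-249 up to californium-252 (the fuller chain starting from uranium-238, mentioned in the next paragraph, requires roughly fifteen successive neutron captures); this is a simplified reconstruction from the prose, with intermediate branches omitted.

```latex
% Build-up of heavier californium isotopes by successive neutron capture and beta decay
^{249}\mathrm{Bk} \xrightarrow{(n,\gamma)} {}^{250}\mathrm{Bk} \xrightarrow{\beta^-} {}^{250}\mathrm{Cf}
  \xrightarrow{(n,\gamma)} {}^{251}\mathrm{Cf} \xrightarrow{(n,\gamma)} {}^{252}\mathrm{Cf}
```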
As of 2006, curium isotopes 244 to 248 are irradiated by neutrons in special reactors to produce primarily californium-252 with lesser amounts of isotopes 249 to 255. Microgram quantities of californium-252 are available for commercial use through the U.S. Nuclear Regulatory Commission. Only two sites produce californium-252: the Oak Ridge National Laboratory in the United States, and the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. As of 2003, the two sites produce 0.25 grams and 0.025 grams of californium-252 per year, respectively. Three californium isotopes with significant half-lives are produced, requiring a total of 15 neutron captures by uranium-238 without nuclear fission or alpha decay occurring during the process. Californium-253 is at the end of a production chain that starts with uranium-238, includes several isotopes of plutonium, americium, curium, berkelium, and the californium isotopes 249 to 253 (see diagram). Applications Californium-252 has a number of specialized uses as a strong neutron emitter; it produces 139 million neutrons per microgram per minute. This property makes it useful as a startup neutron source for some nuclear reactors and as a portable (non-reactor based) neutron source for neutron activation analysis to detect trace amounts of elements in samples. Neutrons from californium are used as a treatment of certain cervical and brain cancers where other radiation therapy is ineffective. It has been used in educational applications since 1969 when Georgia Institute of Technology got a loan of 119 μg of 252Cf from the Savannah River Site. It is also used with online elemental coal analyzers and bulk material analyzers in the coal and cement industries. Neutron penetration into materials makes californium useful in detection instruments such as fuel rod scanners; neutron radiography of aircraft and weapons components to detect corrosion, bad welds, cracks and trapped moisture; and in portable metal detectors. Neutron moisture gauges use 252Cf to find water and petroleum layers in oil wells, as a portable neutron source for gold and silver prospecting for on-the-spot analysis, and to detect ground water movement. The main uses of 252Cf in 1982 were, reactor start-up (48.3%), fuel rod scanning (25.3%), and activation analysis (19.4%). By 1994, most 252Cf was used in neutron radiography (77.4%), with fuel rod scanning (12.1%) and reactor start-up (6.9%) as important but secondary uses. In 2021, fast neutrons from 252Cf were used for wireless data transmission. 251Cf has a very small calculated critical mass of about , high lethality, and a relatively short period of toxic environmental irradiation. The low critical mass of californium led to some exaggerated claims about possible uses for the element. In October 2006, researchers announced that three atoms of oganesson (element 118) had been identified at Joint Institute for Nuclear Research in Dubna, Russia, from bombarding 249Cf with calcium-48, making it the heaviest element ever made. The target contained about 10 mg of 249Cf deposited on a titanium foil of 32 cm2 area. Californium has also been used to produce other transuranium elements; for example, lawrencium was first synthesized in 1961 by bombarding californium with boron nuclei. Precautions Californium that bioaccumulates in skeletal tissue releases radiation that disrupts the body's ability to form red blood cells. 
The element plays no natural biological role in any organism due to its intense radioactivity and low concentration in the environment. Californium can enter the body from ingesting contaminated food or drinks or by breathing air with suspended particles of the element. Once in the body, only 0.05% of the californium will reach the bloodstream. About 65% of that californium will be deposited in the skeleton, 25% in the liver, and the rest in other organs, or excreted, mainly in urine. Half of the californium deposited in the skeleton and liver are gone in 50 and 20 years, respectively. Californium in the skeleton adheres to bone surfaces before slowly migrating throughout the bone. The element is most dangerous if taken into the body. In addition, californium-249 and californium-251 can cause tissue damage externally, through gamma ray emission. Ionizing radiation emitted by californium on bone and in the liver can cause cancer. Notes References Bibliography External links Californium at The Periodic Table of Videos (University of Nottingham) NuclearWeaponArchive.org – Californium Hazardous Substances Databank – Californium, Radioactive Chemical elements Chemical elements with double hexagonal close-packed structure Actinides Synthetic elements Neutron sources Ferromagnetic materials
5679
https://en.wikipedia.org/wiki/Christian%20Social%20Union%20in%20Bavaria
Christian Social Union in Bavaria
The Christian Social Union in Bavaria (German: , CSU) is a Christian democratic and conservative political party in Germany. Having a regionalist identity, the CSU operates only in Bavaria while its larger counterpart, the Christian Democratic Union (CDU), operates in the other fifteen states of Germany. It differs from the CDU by being somewhat more conservative in social matters, following Catholic social teaching. The CSU is considered the de facto successor of the Weimar-era Catholic Bavarian People's Party. At the federal level, the CSU forms a common faction in the Bundestag with the CDU which is frequently referred to as the Union Faction (die Unionsfraktion) or simply CDU/CSU. The CSU has 45 seats in the Bundestag since the 2021 federal election, making it currently the second smallest of the seven parties represented. The CSU is a member of the European People's Party and the International Democrat Union. Party leader Markus Söder serves as Minister-President of Bavaria, a position that CSU representatives have held from 1946 to 1954 and again since 1957. History Franz Josef Strauß (1915–1988) had left behind the strongest legacy as a leader of the party, having led the party from 1961 until his death in 1988. His political career in the federal cabinet was unique in that he had served in four ministerial posts in the years between 1953 and 1969. From 1978 until his death in 1988, Strauß served as the Minister-President of Bavaria. Strauß was the first leader of the CSU to be a candidate for the German chancellery in 1980. In the 1980 federal election, Strauß ran against the incumbent Helmut Schmidt of the Social Democratic Party of Germany (SPD) but lost thereafter as the SPD and the Free Democratic Party (FDP) managed to secure an absolute majority together, forming a social-liberal coalition. The CSU has led the Bavarian state government since it came into existence in 1946, save from 1954 to 1957 when the SPD formed a state government in coalition with the Bavaria Party and the state branches of the GB/BHE and FDP. Initially, the separatist Bavaria Party (BP) successfully competed for the same electorate as the CSU, as both parties saw and presented themselves as successors to the BVP. The CSU was ultimately able to win this power struggle for itself. Among other things, the BP was involved in the "casino affair" under dubious circumstances by the CSU at the end of the 1950s and lost considerable prestige and votes. In the 1966 state election, the BP finally left the state parliament. Before the 2008 elections in Bavaria, the CSU perennially achieved absolute majorities at the state level by itself. This level of dominance is unique among Germany's 16 states. Edmund Stoiber took over the CSU leadership in 1999. He ran for Chancellor of Germany in 2002, but his preferred CDU/CSU–FDP coalition lost against the SPD candidate Gerhard Schröder's SPD–Green alliance. In the 2003 Bavarian state election, the CSU won 60.7% of the vote and 124 of 180 seats in the state parliament. This was the first time any party had won a two-thirds majority in a German state parliament. The Economist later suggested that this exceptional result was due to a backlash against Schröder's government in Berlin. The CSU's popularity declined in subsequent years. Stoiber stepped down from the posts of Minister-President and CSU chairman in September 2007. A year later, the CSU lost its majority in the 2008 Bavarian state election, with its vote share dropping from 60.7% to 43.4%. 
The CSU remained in power by forming a coalition with the FDP. In the 2009 general election, the CSU received only 42.5% of the vote in Bavaria in the 2009 election, which by then constituted its weakest showing in the party's history. The CSU made gains in the 2013 Bavarian state election and the 2013 federal election, which were held a week apart in September 2013. The CSU regained their majority in the Bavarian Landtag and remained in government in Berlin. They had three ministers in the Fourth Merkel cabinet, namely Horst Seehofer (Minister of the Interior, Building and Community), Andreas Scheuer (Minister of Transport and Digital Infrastructure) and Gerd Müller (Minister for Economic Cooperation and Development). The 2018 Bavarian state election yielded the worst result for the CSU in the state elections (top candidate Markus Söder) since 1950 with 37.2% of votes, a decline of over ten percentage points compared to the last result in 2013. After that, the CSU had to form a new coalition government with the minor partner Free Voters of Bavaria. The 2021 German federal election saw the worst election result ever for the Union. The CSU also had a weak showing with 5.2% of votes nationally and 31.7% of the total in Bavaria. Relationship with the CDU The CSU is the sister party of the Christian Democratic Union (CDU). Together, they are called the Union. The CSU operates only within Bavaria, and the CDU operates in all states other than Bavaria. While virtually independent, at the federal level the parties form a common CDU/CSU faction. No Chancellor has ever come from the CSU, although Strauß and Edmund Stoiber were CDU/CSU candidates for Chancellor in the 1980 federal election and the 2002 federal election, respectively, which were both won by the Social Democratic Party of Germany (SPD). Below the federal level, the parties are entirely independent. Since its formation, the CSU has been more conservative than the CDU. CSU and the state of Bavaria decided not to sign the Grundgesetz of the Federal Republic of Germany as they could not agree with the division of Germany into two states after World War II. Although Bavaria like all German states has a separate police and justice system (distinctive and non-federal), the CSU has actively participated in all political affairs of the German Parliament, the German government, the German Bundesrat, the parliamentary elections of the German President, the European Parliament and meetings with Mikhail Gorbachev in Russia. Like the CDU, the CSU is pro-European, although some Eurosceptic tendencies were shown in the past. Leaders Party chairmen Ministers-president The CSU has contributed eleven of the twelve Ministers-President of Bavaria since 1945, with only Wilhelm Hoegner (1945–1946, 1954–1957) of the SPD also holding the office. Election results Federal parliament (Bundestag) European Parliament Landtag of Bavaria See also List of Christian Social Union of Bavaria politicians Politics of Germany Notes and references Further reading Alf Mintzel (1975). Die CSU. Anatomie einer konservativen Partei 1945–1972 . Opladen. . 
External links Christlich-Soziale Union – official website (English page) Christian-Social Union (Bavaria, Germany) Christian-Social Union of Bavaria (CSU) 1945 establishments in Germany Bavarian nationalism Catholic political parties Centre-right parties in Europe Christian democratic parties in Germany Conservative parties in Germany International Democrat Union member parties Member parties of the European People's Party Parties represented in the European Parliament Political parties established in 1945 Politics of Bavaria Pro-European political parties in Germany Regional parties in Germany Social conservative parties
5681
https://en.wikipedia.org/wiki/Corporate%20title
Corporate title
Corporate titles or business titles are given to corporate officers to show what duties and responsibilities they have in the organization. Such titles are used by publicly and privately held for-profit corporations, cooperatives, non-profit organizations, educational institutions, partnerships, and sole proprietorships that also confer corporate titles. Variations There are considerable variations in the composition and responsibilities of corporate title. Within the corporate office or corporate center of a corporation, some corporations have a chairman and chief executive officer (CEO) as the top-ranking executive, while the number two is the president and chief operating officer (COO); other corporations have a president and CEO but no official deputy. Typically, senior managers are "higher" than vice presidents, although many times a senior officer may also hold a vice president title, such as executive vice president and chief financial officer (CFO). The board of directors is technically not part of management itself, although its chairman may be considered part of the corporate office if he or she is an executive chairman. A corporation often consists of different businesses, whose senior executives report directly to the CEO or COO, but that depends on the form of the business. If organized as a division then the top manager is often known as an executive vice president (EVP). If that business is a subsidiary which has considerably more independence, then the title might be chairman and CEO. In many countries, particularly in Europe and Asia, there is a separate executive board for day-to-day business and supervisory board (elected by shareholders) for control purposes. In these countries, the CEO presides over the executive board and the chairman presides over the supervisory board, and these two roles will always be held by different people. This ensures a distinction between management by the executive board and governance by the supervisory board. This seemingly allows for clear lines of authority. There is a strong parallel here with the structure of government, which tends to separate the political cabinet from the management civil service. In the United States and other countries that follow a single-board corporate structure, the board of directors (elected by the shareholders) is often equivalent to the European or Asian supervisory board, while the functions of the executive board may be vested either in the board of directors or in a separate committee, which may be called an operating committee (J.P. Morgan Chase), management committee (Goldman Sachs), executive committee (Lehman Brothers), executive council (Hewlett-Packard), or executive board (HeiG) composed of the division/subsidiary heads and senior officers that report directly to the CEO. United States State laws in the United States traditionally required certain positions to be created within every corporation, such as president, secretary and treasurer. Today, the approach under the Model Business Corporation Act, which is employed in many states, is to grant corporations discretion in determining which titles to have, with the only mandated organ being the board of directors. Some states that do not employ the MBCA continue to require that certain offices be established. Under the law of Delaware, where most large US corporations are established, stock certificates must be signed by two officers with titles specified by law (e.g. a president and secretary or a president and treasurer). 
Every corporation incorporated in California must have a chairman of the board or a president (or both), as well as a secretary and a chief financial officer. Limited liability company (LLC)-structured companies are generally run directly by their members, but the members can agree to appoint officers such as a CEO or to appoint "managers" to operate the company. American companies are generally led by a CEO. In some companies, the CEO also has the title of "president". In other companies, a president is a different person, and the primary duties of the two positions are defined in the company's bylaws (or the laws of the governing legal jurisdiction). Many companies also have a CFO, a chief operating officer (COO) and other senior positions such as chief legal officer (CLO), chief strategy officer (CSO), chief marketing officer (CMO), etc. that report to the president and CEO. The next level, which are not executive positions, is middle management and may be called "vice presidents", "directors" or "managers", depending on the size and required managerial depth of the company. United Kingdom In British English, the title of managing director is generally synonymous with that of chief executive officer. Managing directors do not have any particular authority under the Companies Act in the UK, but do have implied authority based on the general understanding of what their position entails, as well as any authority expressly delegated by the board of directors. Japan and South Korea In Japan, corporate titles are roughly standardized across companies and organizations; although there is variation from company to company, corporate titles within a company are always consistent, and the large companies in Japan generally follow the same outline. These titles are the formal titles that are used on business cards. Korean corporate titles are similar to those of Japan. Legally, Japanese and Korean companies are only required to have a board of directors with at least one representative director. In Japanese, a company director is called a torishimariyaku (取締役) and a representative director is called a daihyō torishimariyaku (代表取締役). The equivalent Korean titles are isa (이사, 理事) and daepyo-isa (대표이사, 代表理事). These titles are often combined with lower titles, e.g. senmu torishimariyaku or jōmu torishimariyaku for Japanese executives who are also board members. Most Japanese companies also have statutory auditors, who operate alongside the board of directors in supervisory roles. Under the commercial code in Japan, Jugyōin (従業員) meaning the "employee", is different from Kaishain (会社員), meaning the "stockholders". The typical structure of executive titles in large companies includes the following: {| class="wikitable" !English gloss !Hanja !Korean !Comments |- |Chairman |会長 (會長) |Hoejang(회장) |Often a semi-retired president or company founder. Denotes a position with considerable power within the company exercised through behind-the-scenes influence via the active president. |- |Vice chairman |副会長 (副會長) |Bu-hoejang(부회장) |At Korean family-owned chaebol companies such as Samsung, the vice-chairman commonly holds the CEO title (i.e., vice chairman and CEO) |- |President |社長 |Sajang(사장) |Often CEO of the corporation. Some companies do not have the "chairman" position, in which case the "president" is the top position that is equally respected and authoritative. 
|- |Deputy president or senior executive vice president |副社長 |Bu-sajang(부사장) |Reports to the president |- |Executive vice president |専務 |Jŏnmu(전무) | |- |Senior vice president |常務 |Sangmu(상무) | |- |Vice president or general manager or department head |部長 |Bujang(부장) |Highest non-executive title; denotes a head of a division or department. There is significant variation in the official English translation used by different companies. |- |Deputy general manager |次長 |Chajang(차장) |Direct subordinate to bujang |- |Manager or section head |課長 |Gwajang(과장) |Denotes a head of a team or section underneath a larger division or department |- |Assistant manager or team leader |係長 (代理) |Daeri'''(대리) | |- |Staff |社員 |Sawon(사원) |Staff without managerial titles are often referred to without using a title at all |} The top management group, comprising jomu/sangmu and above, is often referred to collectively as "cadre" or "senior management" (幹部 or 重役; kambu or juyaku in Japanese; ganbu or jungyŏk in Korean). Some Japanese and Korean companies have also adopted American-style titles, but these are not yet widespread and their usage varies. For example, although there is a Korean translation for "chief operating officer" (최고운영책임자, choego unyŏng chaegimja), not many companies have yet adopted it with the exception of a few multi-national companies such as Samsung and CJ (a spin-off from Samsung), while the CFO title is often used alongside other titles such as bu-sajang (SEVP) or Jŏnmu (EVP). Since the late 1990s, many Japanese companies have introduced the title of shikkō yakuin (執行役員) or 'officer', seeking to emulate the separation of directors and officers found in American companies. In 2002, the statutory title of shikkō yaku (執行役) was introduced for use in companies that introduced a three-committee structure in their board of directors. The titles are frequently given to buchō and higher-level personnel. Although the two titles are very similar in intent and usage, there are several legal distinctions: shikkō yaku make their own decisions in the course of performing work delegated to them by the board of directors, and are considered managers of the company rather than employees, with a legal status similar to that of directors. Shikkō yakuin are considered employees of the company that follow the decisions of the board of directors, although in some cases directors may have the shikkō yakuin title as well. Senior management The highest-level executives in senior management usually have titles beginning with "chief" and ending with "officer", forming what is often called the "C-suite", or "CxO", where "x" is a variable that could be any functional area (not to be confused with CXO). The traditional three such officers are CEO, COO, and CFO. Depending on the management structure, titles may exist instead of, or be blended/overlapped with, other traditional executive titles, such as president, various designations of vice presidents (e.g. VP of marketing), and general managers or directors of various divisions (such as director of marketing); the latter may or may not imply membership of the board of directors. Certain other prominent positions have emerged, some of which are sector-specific. For example, chief audit executive (CAE), chief procurement officer (CPO) and chief risk officer (CRO) positions are often found in many types of financial services companies. Technology companies of all sorts now tend to have a chief technology officer (CTO) to manage technology development. 
A chief information officer (CIO) oversees information technology (IT) matters, either in companies that specialize in IT or in any kind of company that relies on it for supporting infrastructure. Many companies now also have a chief marketing officer (CMO), particularly mature companies in competitive sectors, where brand management is a high priority. A chief value officer (CVO) is introduced in companies where business processes and organizational entities are focused on the creation and maximization of value. Approximately 50% of the S&P 500 companies have created a chief strategy officer (CSO) in their top management team to lead strategic planning and manage inorganic growth, which provides a long range perspective versus the tactical view of the COO or CFO. This function often replaces a COO on the C-Suite team, in cases where the company wants to focus on growth rather than efficiency and cost containment. A chief administrative officer (CAO) may be found in many large complex organizations that have various departments or divisions. Additionally, many companies now call their top diversity leadership position the chief diversity officer (CDO). However, this and many other nontraditional and lower-ranking titles are not universally recognized as corporate officers, and they tend to be specific to particular organizational cultures or the preferences of employees. Specific corporate officer positions Chairman of the board – presiding officer of the corporate board of directors. The chairman influences the board of directors, which in turn elects and removes the officers of a corporation and oversees the human, financial, environmental and technical operations of a corporation. The CEO may also hold the title of "chairman", resulting in an executive chairman. In this case, the board frequently names an independent member of the board as a lead director. The C-suite is normally led by the CEO. Executive chairman – the chairman's post may also exist as an office separate from that of CEO, and it is considered an executive chairman if that titleholder wields influence over company operations, such as Vince McMahon of WWE, Steve Case of AOL Time Warner, and Douglas Flint of HSBC. In particular, the group chairmanship of HSBC is considered the top position of that institution, outranking the chief executive, and is responsible for leading the board and representing the company in meetings with government figures. Prior to the creation of the group management board in 2006, HSBC's chairman essentially held the duties of a chief executive at an equivalent institution, while HSBC's chief executive served as the deputy. After the 2006 reorganization, the management cadre ran the business, while the chairman oversaw the controls of the business through compliance and audit and the direction of the business. Non-executive chairman – also a separate post from the CEO, unlike an executive chairman, a non-executive chairman does not interfere in day-to-day company matters. Across the world, many companies have separated the roles of chairman and CEO, often resulting in a non-executive chairman, saying that this move improves corporate governance. 
Chief business officer is a corporate senior executive who assumes full management responsibility for the company's deal making, provides leadership and executes a deal strategy that will allow the company to fulfill its scientific/technology mission and build shareholder value, provides managerial guidance to the company's product development staff as needed. Chief of staff is a corporate director level manager who has overall responsibility for the staff activity within the company who often would have responsibility of hiring and firing of the highest level managers and sometimes directors. They can work with and report directly to managing directors and the chief executive officer. Commissioner Financial control officer, FCO or FC, also comptroller or controller – supervises accounting and financial reporting within an organization Director or member of a board of directors – high-level official with a fiduciary responsibility of overseeing the operation of a corporation and elects or removes officers of a corporation; nominally, directors, other than the chairman are usually not considered to be employees of the company per se, although they may receive compensation, often including benefits; in publicly held companies. A board of directors is normally made up of members (directors) who are a mixture of corporate officials who are also management employees of the company (inside directors) and persons who are not employed by the company in any capacity (outside directors or non-executive directors). In privately held companies, the board of directors often only consists of the statutory corporate officials, and in sole proprietorship and partnerships, the board is entirely optional, and if it does exist, only operates in an advisory capacity to the owner or partners. Non-profit corporations' governing board members may be called directors like most for-profit corporations, or an alternative like trustees, governors, etc. Director – a manager of managers within an organization who is often responsible for a major business function and who sometimes reports to a vice president (in some financial services companies the title vice president has a different meaning). Often used with name of a functional area; finance director, director of finance, marketing director, and so on. Not to be confused with a member of the board of directors, who is also referred to as a director. This is a middle management and not an executive level position, unless it is in the banking industry. Alternatively, a manager of managers is often referred to as a "senior manager' or as an "associate vice president", depending upon levels of management, and industry type. President – legally recognized highest "titled" corporate officer, and usually a member of the board of directors. There is much variation; often the CEO also holds the title of president, while in other organizations if there is a separate CEO, the president is then second highest-ranking position. In such a case the president is often the COO and is considered to be more focused upon daily operations compared to the CEO, who is supposed to be the visionary. If the corporate president is not the COO (such as Richard Parsons of Time Warner from 1995 to 2001), then many division heads report directly to the CEO themselves, with the president taking on special assignments from the CEO. 
Secretary or company secretary – legally recognized "titled" corporate officer who reports to the board of directors and is responsible for keeping the records of the board and the company. This title is often concurrently held by the treasurer in a dual position called secretary-treasurer; both positions may be concurrently held by the CFO. Note, however, that the secretary has a reporting line to the board of directors, regardless of any other reporting lines conferred by concurrent titles. Treasurer – legally recognized corporate officer entrusted with the fiduciary responsibility of caring for company funds. Often this title is held concurrently with that of secretary in a dual role called secretary-treasurer. It can also be held concurrently with the title of CFO or fall under the jurisdiction of one, though the CFO tends to oversee the finance department instead, which deals with accounting and audits, while the treasurer deals directly with company funds. Note, however, that the treasurer has a reporting line to the board of directors, regardless of any other reporting lines conferred by concurrent titles. Superintendent Owner (sometimes proprietor or sole proprietor, for sole proprietorships) Partner – Used in many different ways. This may indicate a co-owner as in a legal partnership or may be used in a general way to refer to a broad class of employees or temporary/contract workers who are often assigned field or customer service work. Associate is often used in a similar way. Vice chair or vice chairman – officer of the board of directors who may stand in for the chairman in his or her absence. However, this type of vice chairman title on its own usually has only an advisory role and not an operational one (such as Ted Turner at Time Warner). An unrelated definition of vice chair describes an executive who is higher ranking or has more seniority than executive vice president. Sometimes, EVPs report to the vice chair, who in turn reports directly to the CEO (so vice chairs in effect constitute an additional layer of management), other vice chairs have more responsibilities but are otherwise on an equal tier with EVPs. Executive vice chairman are usually not on the board of directors. Royal Bank of Canada previously used vice chairs in their inner management circle until 2004 but have since renamed them as group heads. List of chief officer (CO) titles Middle management Supervisor Foreman General manager or GM Manager Of counsel – A lawyer working on a part-time or temporary basis for a company or law firm. Vice president – Middle or upper manager in a corporation. They often appear in various hierarchical layers such as executive vice president, senior vice president, associate vice president, or assistant vice president, with EVP usually considered the highest and usually reporting to the CEO or president. Many times, corporate officers such as the CFO, COO, CSO, CIO, CTO, secretary, or treasurer will concurrently hold vice president'' titles, commonly EVP or SVP. Vice presidents in small companies are also referred to as chiefs of a certain division, such as vice president for finance, or vice president for administration. In some financial contexts, the title of vice president is actually subordinate to a director. See also Corporate liability Identification with corporation International Executive Resources Group List of corporate titles Outline of management References External links Taking Stock - Corporate Execs Get Scammed, Federal Bureau of Investigation Title . 
Corporation-related lists Lists of occupations Management occupations Positions of authority
5685
https://en.wikipedia.org/wiki/Cambridge%2C%20Massachusetts
Cambridge, Massachusetts
Cambridge is a city in Middlesex County, Massachusetts, in the United States. It is a major suburb in the Greater Boston metropolitan area, located directly across the Charles River from Boston. The city's population as of the 2020 U.S. census was 118,403, making it the most populous city in the county, the fourth most populous city in the state behind Boston, Worcester, and Springfield, and the ninth most populous city in New England. It was named in honor of the University of Cambridge in England, which was an important center of the Puritan theology that was embraced by the town's founders. Cambridge is known globally as home to two of the world's most prestigious universities. Harvard University, an Ivy League university founded in Cambridge in 1636, is the oldest institution of higher learning in the United States and has routinely been ranked as one of the best universities in the world. The Massachusetts Institute of Technology (MIT), founded in 1861, is also located in Cambridge and has been similarly ranked highly among the world's best universities. Lesley University and Hult International Business School are also based in Cambridge. Radcliffe College, an elite women's liberal arts college, was also based in Cambridge from its 1879 founding until its assimilation into Harvard in 1999. Kendall Square, near MIT in the eastern part of Cambridge, has been called "the most innovative square mile on the planet" due to the high concentration of startup companies that have emerged there since 2010. History Pre-colonization The Massachusett Tribe inhabited the area that would become Cambridge for thousands of years prior to European colonization of the Americas, most recently under the name Anmoughcawgen. At the time of European contact and exploration, the area was inhabited by the Naumkeag or Pawtucket to the north and the Massachusett to the south, and may have been inhabited by other groups such as the Totant, not well described in later European narratives. The contact period introduced a number of European infectious diseases which would decimate native populations in virgin soil epidemics, leaving the area uncontested upon the arrival of large groups of English settlers in 1630. 17th century and colonialism In December 1630, the site of present-day Cambridge was chosen for settlement because it was safely upriver from Boston Harbor, making it easily defensible from attacks by enemy ships. The city was founded by Thomas Dudley, his daughter Anne Bradstreet, and his son-in-law Simon Bradstreet. The first houses were built in the spring of 1631. The settlement was initially referred to as "the newe towne". Official Massachusetts records show the name rendered as Newe Towne by 1632, and as Newtowne by 1638. Located at the first convenient Charles River crossing west of Boston, Newtowne was one of several towns, including Boston, Dorchester, Watertown, and Weymouth, founded by the 700 original Puritan colonists of the Massachusetts Bay Colony under Governor John Winthrop. Its first preacher was Thomas Hooker, who led many of its original inhabitants west in 1636 to found Hartford and the Connecticut Colony; before leaving, they sold their plots to more recent immigrants from England. The original village site is now within Harvard Square. The marketplace where farmers sold crops from surrounding towns at the edge of a salt marsh (since filled) remains within a small park at the corner of John F. Kennedy and Winthrop Streets. 
In 1636, Newe College, later renamed Harvard College after benefactor John Harvard, was founded as North America's first institution of higher learning. Its initial purpose was training ministers. According to Cotton Mather, Newtowne was chosen for the site of the college by the Great and General Court, then the legislature of Massachusetts Bay Colony, primarily for its proximity to the popular and highly respected Puritan preacher Thomas Shepard. In May 1638, the settlement's name was changed to Cambridge in honor of the University of Cambridge in Cambridge, England. In 1639, the Massachusetts General Court purchased the land that became present-day Cambridge from the Naumkeag Squaw Sachem of Mistick. The town comprised a much larger area than the present city, with various outlying parts becoming independent towns over the years: Cambridge Village (later Newtown and now Newton) in 1688, Cambridge Farms (now Lexington) in 1712 or 1713, and Little or South Cambridge (now Brighton) and Menotomy or West Cambridge (now Arlington) in 1807. In the late 19th century, various schemes for annexing Cambridge to Boston were pursued and rejected. Newtowne's ministers, Hooker and Shepard, the college's first president, the college's major benefactor, and the first schoolmaster Nathaniel Eaton were all Cambridge alumni, as was the colony's governor John Winthrop. In 1629, Winthrop had led the signing of the founding document of the city of Boston, which was known as the Cambridge Agreement, after the university. In 1650, Governor Thomas Dudley signed the charter creating the corporation that still governs Harvard College. Cambridge grew slowly as an agricultural village by road from Boston, the colony's capital. By the American Revolution, most residents lived near the Common and Harvard College, with most of the town comprising farms and estates. Most inhabitants were descendants of the original Puritan colonists, but there was also a small elite of Anglican "worthies" who were not involved in village life, made their livings from estates, investments, and trade, and lived in mansions along "the Road to Watertown", present-day Brattle Street, which is still known as Tory Row. 18th century and Revolutionary War Coming south from Virginia, George Washington took command of the force of Patriot soldiers camped on Cambridge Common on July 3, 1775, which is now considered the birthplace of the Continental Army. On January 24, 1776, Henry Knox arrived with an artillery train captured from Fort Ticonderoga, which allowed Washington to force the British Army to evacuate Boston. Most of the Loyalist estates in Cambridge were confiscated after the Revolutionary War. 19th century and industrialization Between 1790 and 1840, Cambridge grew rapidly with the construction of West Boston Bridge in 1792 connecting Cambridge directly to Boston, making it no longer necessary to travel through the Boston Neck, Roxbury, and Brookline to cross the Charles River. A second bridge, the Canal Bridge, opened in 1809 alongside the new Middlesex Canal. The new bridges and roads made what were formerly estates and marshland into prime industrial and residential districts. In the mid-19th century, Cambridge was the center of a literary revolution. It was home to some of the famous Fireside poets, named because their poems would often be read aloud by families in front of their evening fires. 
The Fireside poets, including Henry Wadsworth Longfellow, James Russell Lowell, and Oliver Wendell Holmes, were highly popular and influential in this era. Soon after the bridges opened, turnpikes were built: the Cambridge and Concord Turnpike (today's Broadway and Concord Avenue) and the Middlesex Turnpike (Hampshire Street and Massachusetts Avenue northwest of Porter Square), while what are today Cambridge, Main, and Harvard Streets connected various areas of Cambridge to the bridges. In addition, the town was connected to the Boston & Maine Railroad, leading to the development of Porter Square as well as the creation of neighboring Somerville from the formerly rural parts of Charlestown. Cambridge was incorporated as a city in 1846. The city's commercial center began to shift from Harvard Square to Central Square, which became the city's downtown around that time. Between 1850 and 1900, Cambridge took on much of its present character: streetcar suburban development along the turnpikes, working-class and industrial neighborhoods focused on East Cambridge, comfortable middle-class housing on the old estates of Cambridgeport and Mid-Cambridge, and upper-class enclaves near Harvard University and on the minor hills. The arrival of the railroad in North Cambridge and Northwest Cambridge led to three major changes: the development of massive brickyards and brickworks between Massachusetts Avenue, Concord Avenue, and Alewife Brook; the ice-cutting industry launched by Frederic Tudor on Fresh Pond; and the carving up of the last estates into residential subdivisions to house the thousands of immigrants who arrived to work in the new industries. For much of the 19th century, the city's largest employer was the New England Glass Company, founded in 1818. By the middle of the 19th century, it was the world's largest and most modern glassworks. In 1888, Edward Drummond Libbey moved all production to Toledo, Ohio, where it continues today under the name Owens-Illinois. The company's flint glassware with heavy lead content is prized by antique glass collectors, and the Toledo Museum of Art has a large collection. The Museum of Fine Arts in Boston and the Sandwich Glass Museum on Cape Cod also house several pieces. In 1895, Edwin Ginn, founder of Ginn and Company, built the Athenaeum Press Building for his textbook publishing empire. 20th century By 1920, Cambridge was one of New England's main industrial cities, with nearly 120,000 residents. Among the largest businesses in Cambridge during the period of industrialization was Carter's Ink Company, whose neon sign long adorned the Charles River and which was for many years the world's largest ink manufacturer. Next door was the Athenaeum Press. Confectionery and snack manufacturers in the Cambridgeport-Area 4-Kendall corridor included Kennedy Biscuit Factory, later part of Nabisco and originator of the Fig Newton, Necco, Squirrel Brands, George Close Company (1861–1930s), Page & Shaw, Daggett Chocolate (1892–1960s, recipes bought by Necco), Fox Cross Company (1920–1980, originator of the Charleston Chew, and now part of Tootsie Roll Industries), Kendall Confectionery Company, and James O. Welch (1927–1963, originator of Junior Mints, Sugar Daddies, Sugar Mamas, and Sugar Babies, now part of Tootsie Roll Industries). Main Street was nicknamed "Confectioner's Row". Only the Cambridge Brands subsidiary of Tootsie Roll Industries remains in town, still manufacturing Junior Mints in the old Welch factory on Main Street. 
The Blake and Knowles Steam Pump Company (1886), the Kendall Boiler and Tank Company (1880, now in Chelmsford, Massachusetts), and the New England Glass Company (1818–1878) were among the industrial manufacturers in what are now Kendall Square and East Cambridge. In 1935, the Cambridge Housing Authority and the Public Works Administration demolished an integrated low-income tenement neighborhood inhabited by African Americans and European immigrants. In its place, they built the whites-only "Newtowne Court" public housing development and the adjoining, blacks-only "Washington Elms" project in 1940; the city required segregation in its other public housing projects as well. As industry in New England began to decline during the Great Depression and after World War II, Cambridge lost much of its industrial base. It also began its transformation from an industrial center into an intellectual one. Harvard University, which had always been important as both a landowner and an institution, began to play a more dominant role in the city's life and culture. When Radcliffe College was established in 1879, the town became a mecca for some of the nation's most academically talented female students. MIT's move from Boston to Cambridge in 1916 reinforced Cambridge's status as an intellectual center of the United States. After the 1950s, the city's population began to decline slowly as families tended to be replaced by single people and young couples. In Cambridge Highlands, the technology company Bolt, Beranek, & Newman produced the first network router in 1969 and hosted the invention of computer-to-computer email in 1971. The 1980s brought a wave of high-technology startups. Firms selling advanced minicomputers were overtaken by the rise of the microcomputer. Cambridge-based VisiCorp made the first spreadsheet software for personal computers, VisiCalc, and helped propel the Apple II to major consumer success. It was overtaken and purchased by Cambridge-based Lotus Development, maker of Lotus 1-2-3 (which was, in turn, displaced by Microsoft Excel). The city continues to be home to many startups. Kendall Square was a major software hub through the dot-com boom and today hosts offices of such technology companies as Google, Microsoft, and Amazon. The Square also now houses the headquarters of Akamai. In 1976, Harvard's plans to start experiments with recombinant DNA led to a three-month moratorium and a citizen review panel. In the end, Cambridge decided to allow such experiments but passed safety regulations in 1977. This led to regulatory certainty and acceptance when Biogen opened a lab in 1982, in contrast to the hostility that caused the Genetics Institute, a Harvard spinoff, to abandon Somerville and Boston for Cambridge. The biotech and pharmaceutical industries have since thrived in Cambridge, which now includes headquarters for Biogen and Genzyme; laboratories for Novartis, Teva, Takeda, Alnylam, Ironwood, Catabasis, Moderna Therapeutics, and Editas Medicine; support companies such as Cytel; and many smaller companies. By the end of the 20th century, Cambridge had one of the most costly housing markets in the Northeastern United States. While considerable class, race, and age diversity existed, it became more challenging for those who grew up in the city to afford to remain. The end of rent control in 1994 prompted many Cambridge renters to move to more affordable housing in Somerville and other Massachusetts cities and towns. 
21st century Cambridge's mix of amenities and proximity to Boston kept housing prices relatively stable despite the bursting of the United States housing bubble in 2008 and 2009. Cambridge has been a sanctuary city since 1985 and reaffirmed its status as such in 2006. Geography According to the U.S. Census Bureau, about 9.82% of Cambridge's total area is water and the remainder is land. Adjacent municipalities Cambridge is located in eastern Massachusetts, bordered by the city of Boston to the south and east (across the Charles River), the city of Somerville to the north, the town of Arlington to the northwest, and the town of Belmont and the city of Watertown to the west. The border between Cambridge and the neighboring city of Somerville passes through densely populated neighborhoods, which are connected by the MBTA Red Line. Some of the main squares, Inman, Porter, and to a lesser extent, Harvard and Lechmere, are very close to the city line, as are Somerville's Union and Davis Squares. Through its exclusive municipal water system, the city also controls two exclave areas. One is the Payson Park Reservoir and Gatehouse, listed as an American Water Landmark in 2009, located roughly one mile west of Fresh Pond and surrounded by the town of Belmont. The second is the larger Hobbs Brook and Stony Brook watersheds, which share borders with neighboring towns and cities including Lexington, Lincoln, Waltham, and Weston. Neighborhoods Squares Cambridge has been called the "City of Squares", as most of its commercial districts are major street intersections known as squares. Each square acts as a neighborhood center. Kendall Square, formed by the junction of Broadway, Main Street, and Third Street, has been called "the most innovative square mile on the planet", owing to the high concentration of entrepreneurial start-ups and the quality of innovation that have emerged in the vicinity of the square since 2010. Technology Square is an office and laboratory building cluster in this neighborhood. Just over the Longfellow Bridge from Boston, at the eastern end of the MIT campus, Kendall Square is served by the Kendall/MIT station on the MBTA Red Line subway. Most of Cambridge's large office towers are located in the Square. A biotech industry has developed in this area. The Cambridge Innovation Center, a large co-working space, is in Kendall Square at 1 Broadway. The Cambridge Center office complex is in Kendall Square, and not at the actual center of Cambridge. The "One Kendall Square" complex is nearby, but not actually in Kendall Square. Central Square is formed by the junction of Massachusetts Avenue, Prospect Street, and Western Avenue. Containing a variety of ethnic restaurants, it was economically depressed as recently as the late 1990s; it underwent gentrification in recent years (in conjunction with the development of the nearby University Park at MIT) and continues to grow more costly. It is served by the Central station stop on the MBTA Red Line subway. Lafayette Square, formed by the junction of Massachusetts Avenue, Columbia Street, Sidney Street, and Main Street, is considered part of the Central Square area. Cambridgeport is south of Central Square along Magazine Street and Brookline Street. Harvard Square is formed by the junction of Massachusetts Avenue, Brattle Street, Dunster Street, and JFK Street. This is the primary site of Harvard University and a major Cambridge shopping area. It is served by a Red Line station. 
Harvard Square was originally the Red Line's northwestern terminus and a major transfer point to streetcars, which also operated in a short tunnel that remains a major bus terminal; the area under the Square was reconfigured dramatically in the 1980s when the Red Line was extended. The Harvard Square area includes Brattle Square and Eliot Square. A short distance away from the square lies the Cambridge Common, while the neighborhood north of Harvard and east of Massachusetts Avenue is known as Agassiz, after the famed scientist Louis Agassiz. Porter Square is about a mile north on Massachusetts Avenue from Harvard Square, at the junction of Massachusetts and Somerville Avenues. It includes part of the city of Somerville and is served by Porter Square station, a complex housing a Red Line stop and a Fitchburg Line commuter rail stop. Lesley University's University Hall and Porter campus are in Porter Square. Inman Square is at the junction of Cambridge and Hampshire streets in mid-Cambridge. It is home to restaurants, bars, music venues, and boutiques. Victorian streetlights, benches, and bus stops were added to the streets in the 2000s, and a new city park was installed. Lechmere Square is at the junction of Cambridge and First streets, adjacent to the CambridgeSide Galleria shopping mall. It is served by Lechmere station on the MBTA Green Line. Other neighborhoods Cambridge's residential neighborhoods border but are not defined by the squares. East Cambridge (Area 1) is bordered on the north by Somerville, on the east by the Charles River, on the south by Broadway and Main Street, and on the west by the Grand Junction Railroad tracks. It includes the NorthPoint development. MIT Campus (Area 2) is bordered on the north by Broadway, on the south and east by the Charles River, and on the west by the Grand Junction Railroad tracks. Wellington-Harrington (Area 3) is bordered on the north by Somerville, on the south and west by Hampshire Street, and on the east by the Grand Junction Railroad tracks; it is also referred to as "Mid-Block". The Port, formerly known as Area 4, is bordered on the north by Hampshire Street, on the south by Massachusetts Avenue, on the west by Prospect Street, and on the east by the Grand Junction Railroad tracks. Residents of Area 4 often simply call their neighborhood "The Port" and the area of Cambridgeport and Riverside "The Coast". In October 2015, the Cambridge City Council officially renamed Area 4 "The Port", formalizing the longtime nickname, largely on the initiative of neighborhood native and then-Vice Mayor Dennis Benzan. The Port is among the busier parts of the city. Cambridgeport (Area 5) is bordered on the north by Massachusetts Avenue, on the south by the Charles River, on the west by River Street, and on the east by the Grand Junction Railroad tracks. Mid-Cambridge (Area 6) is bordered on the north by Kirkland and Hampshire Streets and Somerville, on the south by Massachusetts Avenue, on the west by Peabody Street, and on the east by Prospect Street. Riverside (Area 7), an area sometimes called "The Coast", is bordered on the north by Massachusetts Avenue, on the south by the Charles River, on the west by JFK Street, and on the east by River Street. Baldwin (Area 8) is bordered on the north by Somerville, on the south and east by Kirkland Street, and on the west by Massachusetts Avenue. 
Neighborhood Nine or Radcliffe (formerly called Peabody, until the recent relocation of a neighborhood school by that name) is bordered on the north by railroad tracks, on the south by Concord Avenue, on the west by railroad tracks, and on the east by Massachusetts Avenue. The Avon Hill sub-neighborhood consists of the higher elevations within the area bounded by Upland Road, Raymond Street, Linnaean Street and Massachusetts Avenue. Brattle area/West Cambridge (Area 10) is bordered on the north by Concord Avenue and Garden Street, on the south by the Charles River and Watertown, on the west by Fresh Pond and the Collins Branch Library, and on the east by JFK Street. It includes the sub-neighborhoods of Brattle Street (formerly known as Tory Row) and Huron Village. North Cambridge (Area 11) is bordered on the north by Arlington and Somerville, on the south by railroad tracks, on the west by Belmont, and on the east by Somerville. Cambridge Highlands (Area 12) is bordered on the north and east by railroad tracks, on the south by Fresh Pond, and on the west by Belmont. Strawberry Hill (Area 13) is bordered on the north by Fresh Pond, on the south by Watertown, on the west by Belmont, and on the east by the Watertown-Cambridge Greenway (formerly railroad tracks). Climate Under the Köppen-Geiger classification, Cambridge has a hot-summer humid continental climate (Dfa), with hot summers and cold winters, a climate type that appears in the southern interior of New England. Abundant rain falls on the city, often as snow in the winter; it has no dry season. The average January temperature is 26.6 °F (−3 °C), making Cambridge part of Group D, independent of the isotherm. There are four well-defined seasons. Demographics As of the 2010 census, there were 105,162 people, 44,032 households, and 17,420 families residing in the city, along with 47,291 housing units. The racial makeup of the city was 66.60% White, 11.70% Black or African American, 0.20% Native American, 15.10% Asian (3.7% Chinese, 1.4% Asian Indian, 1.2% Korean, 1.0% Japanese), 0.01% Pacific Islander, 2.10% from other races, and 4.30% from two or more races. 7.60% of the population were Hispanic or Latino of any race (1.6% Puerto Rican, 1.4% Mexican, 0.6% Dominican, 0.5% Colombian & Salvadoran, 0.4% Spaniard). Non-Hispanic Whites were 62.1% of the population in 2010, down from 89.7% in 1970. An individual resident of Cambridge is known as a Cantabrigian. In 2010, there were 44,032 households, out of which 16.9% had children under the age of 18 living with them, 28.9% were married couples living together, 8.4% had a female householder with no husband present, and 60.4% were non-families. 40.7% of all households were made up of individuals, and 9.6% had someone living alone who was 65 years of age or older. The average household size was 2.00 and the average family size was 2.76. In the city, the population was spread out, with 13.3% of the population under the age of 18, 21.2% from 18 to 24, 38.6% from 25 to 44, 17.8% from 45 to 64, and 9.2% who were 65 years of age or older. The median age was 30.5 years. For every 100 females, there were 96.1 males. For every 100 females age 18 and over, there were 94.7 males. The median income for a household in the city was $47,979, and the median income for a family was $59,423 (these figures had later risen to $58,457 and $79,533, respectively). Males had a median income of $43,825 versus $38,489 for females. 
The per capita income for the city was $31,156. About 8.7% of families and 12.9% of the population were below the poverty line, including 15.1% of those under age 18 and 12.9% of those age 65 or over. Cambridge has been ranked as one of the most liberal cities in America. Locals living in and near the city jokingly refer to it as "The People's Republic of Cambridge". For 2016, the residential property tax rate in Cambridge was $6.99 per $1,000. Cambridge enjoys the highest possible bond credit rating, AAA, with all three Wall Street rating agencies. In 2000, 11.0% of city residents were of Irish ancestry; 7.2% were of English, 6.9% Italian, 5.5% West Indian and 5.3% German ancestry. 69.4% spoke only English at home, while 6.9% spoke Spanish, 3.2% Chinese or Mandarin, 3.0% Portuguese, 2.9% French Creole, 2.3% French, 1.5% Korean, and 1.0% Italian. Income Data is from the 2009–2013 American Community Survey 5-Year Estimates. Economy Manufacturing was an important part of Cambridge's economy in the late 19th and early 20th century, but educational institutions are its biggest employers today. Harvard and MIT together employ about 20,000. As a cradle of technological innovation, Cambridge was home to technology firms Analog Devices, Akamai, Bolt, Beranek, and Newman (BBN Technologies) (now part of Raytheon), General Radio (later GenRad), Lotus Development Corporation (now part of IBM), Polaroid, Symbolics, and Thinking Machines. In 1996, Polaroid, Arthur D. Little, and Lotus were Cambridge's top employers, with over 1,000 employees, but they faded out a few years later. Health care and biotechnology firms such as Genzyme, Biogen Idec, bluebird bio, Millennium Pharmaceuticals, Sanofi, Pfizer and Novartis have significant presences in the city. Though headquartered in Switzerland, Novartis continues to expand its operations in Cambridge. Other major biotech and pharmaceutical firms expanding their presence in Cambridge include GlaxoSmithKline, AstraZeneca, Shire, and Pfizer. Most of Cambridge's biotech firms are in Kendall Square and East Cambridge, which decades ago were the city's center of manufacturing. Some others are in University Park at MIT, a new development in another former manufacturing area. None of the high technology firms that once dominated the economy was among the 25 largest employers in 2005, but by 2008 Akamai and ITA Software were. Google, IBM Research, Microsoft Research, and Philips Research maintain offices in Cambridge. In late January 2012—less than a year after acquiring Billerica-based analytic database management company, Vertica—Hewlett-Packard announced it would also be opening its first offices in Cambridge. Also around that time, e-commerce giants Staples and Amazon.com said they would be opening research and innovation centers in Kendall Square. And LabCentral provides a shared laboratory facility for approximately 25 emerging biotech companies. The proximity of Cambridge's universities has also made the city a center for nonprofit groups and think tanks, including the National Bureau of Economic Research, the Smithsonian Astrophysical Observatory, the Lincoln Institute of Land Policy, Cultural Survival, and One Laptop per Child. In September 2011, Cambridge launched its Entrepreneur Walk of Fame initiative, recognizing people who have made contributions to innovation in global business. 
In 2021, Cambridge was one of approximately 27 US cities to receive a AAA rating from each of the nation's three major credit rating agencies, Moody's Investors Service, Standard & Poor's, and Fitch Ratings. 2021 marked the 22nd consecutive year that Cambridge had retained this distinction. Top employers The city's largest employers are led by its universities, with Harvard and MIT the biggest among them. Arts and culture Museums Harvard Art Museum, including the Busch-Reisinger Museum, a collection of Germanic art, the Fogg Art Museum, a comprehensive collection of Western art, and the Arthur M. Sackler Museum, a collection of Middle East and Asian art Harvard Museum of Natural History, including the Glass Flowers collection List Visual Arts Center, MIT MIT Museum Peabody Museum of Archaeology and Ethnology, Harvard Semitic Museum, Harvard Public art Cambridge has a large and varied collection of permanent public art, both on city property (managed by the Cambridge Arts Council and the Community Art Center) and on the Harvard and MIT campuses. Temporary public artworks are displayed as part of the annual Cambridge River Festival on the banks of the Charles River, during winter celebrations in Harvard and Central Squares, and at Harvard University campus sites. Experimental forms of public artistic and cultural expression include the Central Square World's Fair, the annual Somerville-based Honk! Festival, and If This House Could Talk, a neighborhood art and history event. Street musicians and other performers entertain tourists and locals in Harvard Square during the warmer months. The performances are coordinated through a public process that has been developed collaboratively by the performers, city administrators, private organizations and business groups. The Cambridge Public Library contains four Works Progress Administration murals completed in 1935 by Elizabeth Tracy Montminy: Religion, Fine Arts, History of Books and Paper, and The Development of the Printing Press. Architecture Despite intensive urbanization during the late 19th century and the 20th century, Cambridge has several historic buildings, including some from the 17th century. The city also has abundant contemporary architecture, largely built by Harvard and MIT. Notable historic buildings in the city include: The Asa Gray House (1810) Austin Hall, Harvard University (1882–1884) Cambridge City Hall (1888–1889) Cambridge Public Library (1888) Christ Church, Cambridge (1761) Cooper-Frost-Austin House (1689–1817) Elmwood House (1767), residence of the president of Harvard University First Church of Christ, Scientist (1924–1930) The First Parish in Cambridge (1833) Harvard-Epworth United Methodist Church (1891–1893) Harvard Lampoon Building (1909) The Hooper-Lee-Nichols House (1685–1850) Longfellow House–Washington's Headquarters National Historic Site (1759), former home of poet Henry Wadsworth Longfellow and headquarters of George Washington The Memorial Church of Harvard University (1932) Memorial Hall, Harvard University (1870–1877) Middlesex County Courthouse (1814–1848) Urban Rowhouse (1875) O'Reilly Spite House (1908), built to spite a neighbor who would not sell his adjacent land Contemporary architecture: Arthur M. Sackler Museum at Harvard University, one of the few buildings in the U.S. by Pritzker Prize winner James Stirling Baker House dormitory at MIT by Finnish architect Alvar Aalto, one of only two Aalto buildings in the U.S. 
Harvard Graduate Center/Harkness Commons by The Architects Collaborative with Walter Gropius Carpenter Center for the Visual Arts at Harvard, the only Le Corbusier building in North America Design Research Building by Benjamin Thompson and Associates Harvard Science Center, Holyoke Center, and Peabody Terrace by Catalan architect and Harvard Graduate School of Design Dean Josep Lluís Sert Kresge Auditorium, MIT, by Eero Saarinen Harvard Art Museums, renovation and major expansion of Fogg Museum building, completed in 2014 by Renzo Piano MIT Chapel by Eero Saarinen MIT Media Lab, two buildings by I. M. Pei and Fumihiko Maki Simmons Hall at MIT by Steven Holl Stata Center, home to the MIT Computer Science and Artificial Intelligence Laboratory, the Department of Linguistics, and the Department of Philosophy by Frank Gehry Music The city has an active music scene, from classical performances to the latest popular bands. Beyond its colleges and universities, Cambridge has many music venues, including The Middle East, Club Passim, The Plough and Stars, The Lizard Lounge and the Nameless Coffeehouse. Parks and recreation Consisting largely of densely built residential space, Cambridge lacks significant tracts of public parkland. Easily accessible open space on the university campuses, including Harvard Yard, Radcliffe Yard, and MIT's Great Lawn, as well as the considerable open space of Mount Auburn Cemetery and Fresh Pond Reservation, partly compensates for this. At Cambridge's western edge, the cemetery is known as a garden cemetery because of its landscaping (the oldest planned landscape in the country) and arboretum. Although known as a Cambridge landmark, much of the cemetery lies within Watertown. It is also an Important Bird Area (IBA) in the Greater Boston area. Fresh Pond Reservation is the largest open green space in Cambridge with 162 acres (656,000 m2) of land around a 155-acre (627,000 m2) kettle hole lake. This land includes a 2.25-mile walking trail around the reservoir and a public 9-hole golf course. Public parkland includes the esplanade along the Charles River, which mirrors its Boston counterpart, Cambridge Common, Danehy Park, and Alewife Brook Reservation. Government Federal and state representation Cambridge is split between Massachusetts's 5th and 7th U.S. congressional districts. The 5th district seat is held by Democrat Katherine Clark, who replaced now-Senator Ed Markey in a 2013 special election; the 7th is represented by Democrat Ayanna Pressley, elected in 2018. The state's senior United States senator is Democrat Elizabeth Warren, elected in 2012, who lives in Cambridge. The governor of Massachusetts is Democrat Maura Healey, elected in 2022. Cambridge is represented in six districts in the Massachusetts House of Representatives: the 24th Middlesex (which includes parts of Belmont and Arlington), the 25th and 26th Middlesex (the latter of which includes a portion of Somerville), the 29th Middlesex (which includes a small part of Watertown), and the Eighth and Ninth Suffolk (both including parts of the City of Boston). The city is represented in the Massachusetts Senate as a part of the 2nd Middlesex, Middlesex and Suffolk, and 1st Suffolk and Middlesex districts. Politics From 1860 to 1880, Republicans Abraham Lincoln, Ulysses S. Grant, Rutherford B. Hayes, and James Garfield each won Cambridge, Grant doing so by margins of over 20 points in both of his campaigns. 
Following that, from 1884 to 1892, Grover Cleveland won Cambridge in all three of his presidential campaigns, by less than ten points each time. Then from 1896 to 1924, Cambridge became something of a "swing" city with a slight Republican lean. GOP nominees carried the city in five of the eight presidential elections during that time frame, with five of the elections resulting in either a plurality or a margin of victory of fewer than ten points. The city of Cambridge is extremely Democratic in modern times, however. In the last 23 presidential elections dating back to the nomination of Al Smith in 1928, the Democratic nominee has carried Cambridge in every election. Every Democratic nominee since Massachusetts native John F. Kennedy in 1960 has received at least 70% of the vote, except for Jimmy Carter in 1976 and 1980. Since 1928, the only Republican nominee to come within ten points of carrying Cambridge is Dwight Eisenhower in his 1956 re-election bid. City government Cambridge has a city government led by a mayor and a nine-member city council. There is also a six-member school committee that functions alongside the superintendent of public schools. The councilors and school committee members are elected every two years using proportional representation. The mayor is elected by the city councilors from among themselves and serves as the chair of city council meetings. The mayor also sits on the school committee. The mayor is not the city's chief executive. Rather, the city manager, who is appointed by the city council, serves in that capacity. Under the city's Plan E form of government, the city council does not have the power to appoint or remove city officials who are under the direction of the city manager. The city council and its members are also forbidden from giving orders to any subordinate of the city manager. Yi-An Huang is the City Manager as of September 6, 2022, succeeding Owen O'Riordan (now the Deputy City Manager) who briefly served as the Acting City Manager after Louis DePasquale resigned on July 5, 2022, after six years in office. * = current mayor ** = former mayor On March 8, 2021, Cambridge City Council voted to recognize polyamorous domestic partnerships, becoming the second city in the United States following neighboring Somerville, which had done so in 2020. County government Cambridge was a county seat of Middlesex County, along with Lowell, until the abolition of county government. Though the county government was abolished in 1997, the county still exists as a geographical and political region. The employees of Middlesex County courts, jails, registries, and other county agencies now work directly for the state. The county's registrars of Deeds and Probate remain in Cambridge, but the Superior Court and District Attorney have had their operations transferred to Woburn. Third District Court has shifted operations to Medford, and the county Sheriff's office awaits near-term relocation. Education Higher education Cambridge is perhaps best known as an academic and intellectual center. Its colleges and universities include: Cambridge School of Culinary Arts Harvard University Hult International Business School Lesley University Longy School of Music of Bard College Massachusetts Institute of Technology Radcliffe College (now merged with Harvard College) At least 258 of the world's total 962 Nobel Prize winners have at some point in their careers been affiliated with universities in Cambridge. 
Cambridge College is named for Cambridge and was based in Cambridge until 2017, when it consolidated to a new headquarters in neighboring Boston. The American Academy of Arts and Sciences, one of the nation's oldest learned societies, founded in 1780, is based in Cambridge. Primary and secondary public education The city's schools constitute the Cambridge Public School District. Schools include: Amigos School Baldwin School (formerly the Agassiz School) Cambridgeport School Fletcher-Maynard Academy Graham and Parks Alternative School Haggerty School Kennedy-Longfellow School King Open School Martin Luther King Jr. School Morse School (a Core Knowledge school) Peabody School Tobin School (a Montessori school) Five upper schools offer grades 6–8 in some of the same buildings as the elementary schools: Amigos School Cambridge Street Upper School Putnam Avenue Upper School Rindge Avenue Upper School Vassal Lane Upper School Cambridge has three district public high school programs, including Cambridge Rindge and Latin School (CRLS). Public charter schools include Benjamin Banneker Charter School, which serves grades K–6; Community Charter School of Cambridge in Kendall Square, which serves grades 7–12; and Prospect Hill Academy, a charter school whose upper school is in Central Square though it is not a part of the Cambridge Public School District. Primary and secondary private education Cambridge also has several private schools, including: Boston Archdiocesan Choir School Buckingham Browne & Nichols School Cambridge Montessori School Cambridge Friends School Fayerweather Street School International School of Boston (formerly École Bilingue) Matignon High School Shady Hill School St. Peter School Media Newspapers Cambridge is served by a single online newspaper, Cambridge Day. The city's last print newspaper, the Cambridge Chronicle, ceased print publication in 2022 and today only cross-posts regional stories from other Gannett properties. Radio Cambridge is home to several radio stations, both commercially licensed and student-run. Television and broadband Cambridge Community Television (CCTV) has served the city since its inception in 1988. CCTV operates Cambridge's public access television facility and three television channels, 8, 9, and 96, on the Cambridge cable system (Comcast). The city has invited tenders from other cable providers, but Comcast remains its only fixed television and broadband utility, though services from American satellite TV providers are available. In October 2014, Cambridge City Manager Richard Rossi appointed a citizen Broadband Task Force to "examine options to increase competition, reduce pricing, and improve speed, reliability and customer service for both residents and businesses." Infrastructure Utilities Cable television service is provided by XFINITY (Comcast Communications). Parts of Cambridge are served by a district heating loop for industrial organizations that also covers Boston. Electric service and natural gas are both provided by Eversource Energy. Landline telecommunications service is provided by Harvard University, the Massachusetts Institute of Technology (MIT), and Verizon Communications. All phones in Cambridge are interconnected to central office locations in the metropolitan area. The city maintains its own public, educational, and government access (PEG) channel, Cambridge Community Television (CCTV). 
Water department Cambridge obtains water from Hobbs Brook (in Lincoln and Waltham) and Stony Brook (Waltham and Weston), as well as an emergency connection to the Massachusetts Water Resources Authority. The city owns land in other towns that includes these reservoirs and portions of their watershed. Water from these reservoirs flows by gravity through an aqueduct to Fresh Pond in Cambridge. It is then treated in an adjacent plant and pumped uphill to the Payson Park Reservoir in Belmont. The water is then redistributed downhill via gravity to individual users in the city. A new water treatment plant opened in 2001. In October 2016, the city announced that, owing to drought conditions, it would begin buying water from the MWRA. On January 3, 2017, Cambridge announced that "As a result of continued rainfall each month since October 2016, we have been able to significantly reduce the need to use MWRA water. We have not purchased any MWRA water since December 12, 2016 and if 'average' rainfall continues this could continue for several months." Sewer service is available in Cambridge; the city's sewer system is connected to the Massachusetts Water Resources Authority (MWRA) network, whose sewage treatment plant is in Boston Harbor. Transportation Road Cambridge is served by several major roads, including Route 2, Route 16, and Route 28. The Massachusetts Turnpike does not pass through Cambridge but provides access by an exit in nearby Allston. Both U.S. Route 1 and Interstate 93 also provide additional access at the eastern end of Cambridge via Leverett Circle in Boston. Route 2A runs the length of the city, chiefly along Massachusetts Avenue. The Charles River forms the southern border of Cambridge and is crossed by 11 bridges connecting Cambridge to Boston, eight of which are open to motorized road traffic, including the Longfellow Bridge and the Harvard Bridge. Cambridge has an irregular street network because many of the roads date from the colonial era. Contrary to popular belief, the road system did not evolve from longstanding cow-paths. Roads connected various village settlements with each other and nearby towns and were shaped by geographic features, most notably streams, hills, and swampy areas. Today, the major "squares" are typically connected by long, mostly straight roads, such as Massachusetts Avenue between Harvard Square and Central Square or Hampshire Street between Kendall Square and Inman Square. On October 25, 2022, the Cambridge City Council voted 8–1 to eliminate parking minimums from the city code, citing declining car ownership, with the aim of promoting housing construction. Mass transit Cambridge is served by the Massachusetts Bay Transportation Authority, including Porter station on the regional Commuter Rail, Lechmere station on the Green Line, and Alewife, Porter, Harvard, Central, and Kendall Square/MIT stations on the Red Line. Alewife station, the terminus of the Red Line, has a large multi-story parking garage. The Harvard bus tunnel under Harvard Square connects to the Red Line underground. This tunnel was originally opened for streetcars in 1912 and later served trackless trolleys (trolleybuses) and buses as the routes were converted; four lines of the MBTA trolleybus system continued to use it until their conversion to diesel in 2022. The tunnel was partially reconfigured when the Red Line was extended to Alewife in the early 1980s. 
Both Union Square station in Somerville on the Green Line and Community College station in Charlestown on the Orange Line are located just outside of Cambridge. Besides the state-owned transit agency, the city is also served by the Charles River Transportation Management Agency (CRTMA) shuttles, which are supported by some of the largest companies operating in the city as well as by the municipal government itself. Cycling Cambridge has several bike paths, including one along the Charles River, and the Linear Park connecting the Minuteman Bikeway at Alewife with the Somerville Community Path. A connection to Watertown opened in 2022. Bike parking is common and there are bike lanes on many streets, although concerns have been expressed regarding the suitability of many of the lanes. On several central MIT streets, bike lanes transfer onto the sidewalk. Cambridge bans cycling on certain sections of sidewalk where pedestrian traffic is heavy. In 2006, Bicycling Magazine rated Boston as one of the worst cities in the nation for bicycling, but it gave Cambridge an honorable mention as one of the best, calling the city "Boston's great hope". Boston has since followed Cambridge's example and made considerable efforts to improve bicycling safety and convenience. Walking Walking is a popular activity in Cambridge. In 2000, among U.S. cities with more than 100,000 residents, Cambridge had the highest percentage of commuters who walked to work. Cambridge's major historic squares have changed into modern walking neighborhoods, with traffic calming features based on the needs of pedestrians rather than of motorists. Intercity Boston's intercity bus and train stations at South Station and Logan International Airport in East Boston are both accessible from Cambridge by subway. The Fitchburg Line rail service from Porter Square connects to some western suburbs. Since October 2010, there has also been intercity bus service between Alewife Station (Cambridge) and New York City. Police department In addition to the Cambridge Police Department, the city is patrolled by the Fifth (Brighton) Barracks of Troop H of the Massachusetts State Police. Owing to proximity, the city also practices functional cooperation with the Fourth (Boston) Barracks of Troop H. The campuses of Harvard and MIT are patrolled by the Harvard University Police Department and MIT Police Department, respectively. Fire department The city of Cambridge is protected by the Cambridge Fire Department. Established in 1832, the CFD operates eight engine companies, four ladder companies, one rescue company, and three paramedic squad companies from eight fire stations located throughout the city. The Acting Chief is Thomas F. Cahill Jr. Emergency medical services (EMS) The city of Cambridge receives emergency medical services from PRO EMS, a privately contracted ambulance service. Public library services Further educational services are provided at the Cambridge Public Library. The large modern main building was built in 2009 and connects to the restored 1888 Richardson Romanesque building, which was a donation of Frederick H. Rindge. The library was founded as the private Cambridge Athenaeum in 1849, was acquired by the city in 1858, and became the Dana Library. 
Sister cities and twin towns Cambridge's sister cities with active relationships are: Coimbra, Portugal (1982) Gaeta, Italy (1982) Tsukuba, Japan (1983) San José Las Flores, El Salvador (1987) Yerevan, Armenia (1987) Galway, Ireland (1997) Les Cayes, Haiti (2014) Cambridge has ten additional inactive sister city relationships: Dublin, Ireland (1983) Ischia, Italy (1984) Catania, Italy (1987) Kraków, Poland (1989) Florence, Italy (1992) Santo Domingo Oeste, Dominican Republic (2003) Southwark, England (2004) Yuseong (Daejeon), Korea (2005) Haidian (Beijing), China (2005) Cienfuegos, Cuba (2005) Notes References Citations Sources Cambridge article by Rev. Edward Abbott in Volume 1, pages 305–358. Eliot, Samuel Atkins. A History of Cambridge, Massachusetts: 1630–1913. Cambridge, Massachusetts: The Cambridge Tribune, 1913. Paige, Lucius. History of Cambridge, Massachusetts: 1630–1877. Cambridge, Massachusetts: The Riverside Press, 1877. Survey of Architectural History in Cambridge: Mid Cambridge. Cambridge, Massachusetts: Cambridge Historical Commission, 1967. Survey of Architectural History in Cambridge: Cambridgeport. Cambridge, Massachusetts: Cambridge Historical Commission, 1971. Survey of Architectural History in Cambridge: Old Cambridge. Cambridge, Massachusetts: Cambridge Historical Commission, 1973. Survey of Architectural History in Cambridge: Northwest Cambridge. Cambridge, Massachusetts: Cambridge Historical Commission, 1977. Survey of Architectural History in Cambridge: East Cambridge (revised edition). Cambridge, Massachusetts: Cambridge Historical Commission, 1988. External links The Innovation Trail – History of invention in Cambridge and Boston 1630 establishments in the Massachusetts Bay Colony Charles River Cities in Massachusetts Cities in Middlesex County, Massachusetts County seats in Massachusetts Populated places established in 1630
5686
https://en.wikipedia.org/wiki/Cambridge%20%28disambiguation%29
Cambridge (disambiguation)
Cambridge is a city and the county town of Cambridgeshire, United Kingdom, famous for being the location of the University of Cambridge. Cambridge may also refer to: Places Australia Cambridge, Tasmania, a suburb of Hobart Town of Cambridge, a Western Australian local government area Barbados Cambridge, Barbados, a populated place in the parish of Saint Joseph, Barbados Canada Cambridge, Ontario, a city in Canada Cambridge (federal electoral district), a federal electoral district corresponding to Cambridge, Ontario Cambridge (provincial electoral district), a provincial electoral district corresponding to Cambridge, Ontario Cambridge, Hants County, Nova Scotia, a small community in Canada Cambridge, Kings County, Nova Scotia, a small community in Canada Cambridge Bay, Nunavut, a hamlet in Canada Cambridge Parish, New Brunswick, a civil parish in Canada Cambridge-Narrows, New Brunswick, a small community in Canada Jamaica Cambridge, Jamaica Malta Cambridge Battery/Fort Cambridge, an artillery battery New Zealand Cambridge, New Zealand South Africa Cambridge, Eastern Cape United Kingdom Cambridge (ward), Southport Cambridge, Gloucestershire Cambridge, Scottish Borders, a location in the United Kingdom Cambridge, West Yorkshire, a location in the United Kingdom Cambridge (UK Parliament constituency) County of Cambridge, another name for Cambridgeshire Cambridge Heath, a place in the London borough of Tower Hamlets Cambridge Town (disambiguation) or Camberley, Surrey, England United States Cambridge, Idaho Cambridge, Illinois Cambridge, Iowa Cambridge, Kansas Cambridge, Kentucky Cambridge, Maine Cambridge, Maryland Cambridge, Massachusetts Cambridge, Minnesota Cambridge, Missouri Cambridge, Nebraska Cambridge, New Hampshire, a township Cambridge, Delran, New Jersey Cambridge, Evesham, New Jersey Cambridge (town), New York Cambridge (village), New York Cambridge, Ohio Cambridge, Vermont Cambridge (village), Vermont Cambridge, Wisconsin Cambridge City, Indiana Cambridge Springs, Pennsylvania Cambridge Township, Ohio Cambridge Township, Henry County, Illinois Cambridge Township, Michigan Cambridge Township, Minnesota Cambridge Township, Pennsylvania Extraterrestrial 2531 Cambridge, a stony Main Belt asteroid in the Solar System People Given name Cambridge Jones, British celebrity photographer Surnames Alice Cambridge (1762–1829), early Irish Methodist preacher Alyson Cambridge (born 1980), American operatic soprano and classical music, jazz, and American popular song singer Asuka Cambridge (born 1993), Japanese sprint athlete Barrington Cambridge (born 1957), Guyanese boxer Godfrey Cambridge (1933–1976), American stand-up comic and actor Richard Owen Cambridge (1717–1802), British poet Titles Duke of Cambridge Brands and enterprises Cambridge (cigarette) Cambridge Audio, a manufacturer of audio equipment Cambridge Glass, a glass company of Cambridge, Ohio Cambridge Scientific Instrument Company, founded 1881 in England Cambridge SoundWorks, a manufacturer of audio equipment Cambridge Theatre, a theatre in the West End of London Cambridge University Press Educational institutions Cambridge State University, US The Cambridge School (disambiguation) University of Cambridge, UK Other uses Cambridge (book), 2005 book by Tim Rawle Cambridge (ship), four merchant ships Austin Cambridge, motor car range produced by the Austin Motor Company Cambridge Circus (disambiguation)
5688
https://en.wikipedia.org/wiki/Colin%20Dexter
Colin Dexter
Norman Colin Dexter (29 September 1930 – 21 March 2017) was an English crime writer known for his Inspector Morse series of novels, which were written between 1975 and 1999 and adapted as an ITV television series, Inspector Morse, from 1987 to 2000. His characters have spawned a sequel series, Lewis from 2006 to 2015, and a prequel series, Endeavour from 2012 to 2023. Early life and career Dexter was born in Stamford, Lincolnshire, to Alfred and Dorothy Dexter. He had an elder brother, John, a fellow classicist, who taught Classics at The King's School, Peterborough, and a sister, Avril. Alfred ran a small garage and taxi company from premises in Scotgate, Stamford. Dexter was educated at St John's Infants School and Bluecoat Junior School, from which he gained a scholarship to Stamford School, a boys' grammar school, where a younger contemporary was England cricket captain and England rugby player M. J. K. Smith. After leaving school, Dexter completed his national service with the Royal Corps of Signals and then read Classics at Christ's College, Cambridge, graduating in 1953 and receiving a master's degree in 1958. In 1954, Dexter began his teaching career as assistant Classics master at Wyggeston Grammar School for Boys in Leicester. There he helped the school's Christian Union. However, in 2000 he stated that he shared the same views on politics and religion as Inspector Morse, who was portrayed in the final Morse novel, The Remorseful Day, as an atheist. A post at Loughborough Grammar School followed in 1957, then he took up the position of senior Classics teacher at Corby Grammar School, Northamptonshire, in 1959. In 1966, he was forced by the onset of deafness to retire from teaching and took up the post of senior assistant secretary at the University of Oxford Delegacy of Local Examinations (UODLE) in Oxford, a job he held until his retirement in 1988. In November 2008, Dexter featured prominently in the BBC Four programme "How to Solve a Cryptic Crossword" as part of the Timeshift series, in which he recounted some of the crossword clues solved by Morse. Writing career The initial books written by Dexter were general studies textbooks. He began writing mysteries in 1972 during a family holiday. Last Bus to Woodstock was published in 1975 and introduced the character of Inspector Morse, the irascible detective whose penchants for cryptic crosswords, English literature, cask ale, and music by Wagner reflected Dexter's own enthusiasms. Dexter's plots used false leads and other red herrings, "presenting Morse, and his readers, with fiendishly difficult puzzles to solve". The success of the 33 two-hour episodes of the ITV television series Inspector Morse, produced between 1987 and 2000, brought further attention to Dexter's writings. The show featured Inspector Morse, played by John Thaw, and his assistant Sergeant Robert Lewis, played by Kevin Whately. In the manner of Alfred Hitchcock, Dexter made a cameo appearance in almost all episodes. From 2006 to 2015, Morse's assistant Lewis was featured in a 33-episode ITV series titled Lewis (Inspector Lewis in the United States). Lewis is assisted by DS James Hathaway, played by Laurence Fox. A prequel series, Endeavour, features a young Morse and stars Shaun Evans and Roger Allam. Endeavour was first broadcast on the ITV network in 2012, ending with the ninth series in 2023, taking young Morse's career into 1972. Dexter was a consultant for Lewis and the first few years of Endeavour. 
As with Morse, Dexter occasionally made cameo appearances in both Lewis and Endeavour. Although Dexter's military service was as a Morse code operator in the Royal Corps of Signals, the character was named after his friend Sir Jeremy Morse, a crossword devotee like Dexter. The music for the television series, written by Barrington Pheloung, used a motif based on the Morse code for Morse's name. Awards and honours Dexter received several Crime Writers' Association awards: two Silver Daggers for Service of All the Dead in 1979 and The Dead of Jericho in 1981; two Gold Daggers for The Wench is Dead in 1989 and The Way Through the Woods in 1992; and a Cartier Diamond Dagger for lifetime achievement in 1997. In 1996, Dexter received a Macavity Award for his short story "Evans Tries an O-Level". In 1980, he was elected a member of the by-invitation-only Detection Club. In 2005 Dexter became a Fellow by Special Election of St Cross College, Oxford. In the 2000 Birthday Honours Dexter was appointed an Officer of the Order of the British Empire for services to literature. In 2001 he was awarded the Freedom of the City of Oxford. In September 2011, the University of Lincoln awarded Dexter an honorary Doctor of Letters degree. Personal life In 1956 he married Dorothy Cooper. They had a daughter, Sally, and a son, Jeremy. Death On 21 March 2017 Dexter's publisher, Macmillan, said in a statement "With immense sadness, Macmillan announces the death of Colin Dexter who died peacefully at his home in Oxford this morning". Bibliography Inspector Morse novels Last Bus to Woodstock (1975) Last Seen Wearing (1976) The Silent World of Nicholas Quinn (1977) Service of All the Dead (1979) The Dead of Jericho (1981) The Riddle of the Third Mile (1983) The Secret of Annexe 3 (1986) The Wench is Dead (1989) The Jewel That Was Ours (1991) The Way Through the Woods (1992) The Daughters of Cain (1994) Death Is Now My Neighbour (1996) The Remorseful Day (1999) Novellas and short story collections The Inside Story (1993) Neighbourhood Watch (1993) Morse's Greatest Mystery (1993); also published as As Good as Gold "As Good as Gold" (Morse) "Morse's Greatest Mystery" (Morse) "Evans Tries an O-Level" "Dead as a Dodo" (Morse) "At the Lulu-Bar Motel" "Neighbourhood Watch" (Morse) "A Case of Mis-Identity" (a Sherlock Holmes pastiche) "The Inside Story" (Morse) "Monty's Revolver" "The Carpet-Bagger" "Last Call" (Morse) Uncollected short stories "The Burglar" in You, The Mail on Sunday (1994) "The Double Crossing" in Mysterious Pleasures (2003) "Between the Lines" in The Detection Collection (2005) "The Case of the Curious Quorum" (featuring Inspector Lewis) in The Verdict of Us All (2006) "The Other Half" in The Strand Magazine (February–May 2007) "Morse and the Mystery of the Drunken Driver" in Daily Mail (December 2008) "Clued Up" (a 4-page story featuring Lewis and Morse solving a crossword) in Cracking Cryptic Crosswords (2009) Other Foreword to Chambers Crossword Manual (2001) Chambers Book of Morse Crosswords (2006) Foreword to Oxford: A Cultural and Literary History (2007) Cracking Cryptic Crosswords: A Guide to Solving Cryptic Crosswords (2010) Foreword to Oxford Through the Lens (2016) See also Diogenes Small References External links 1930 births 2017 deaths People from Stamford, Lincolnshire People educated at Stamford School Alumni of Christ's College, Cambridge Cartier Diamond Dagger winners English crime fiction writers English male novelists English mystery writers British detective fiction writers Fellows of St 
Cross College, Oxford Writers from Oxford Inspector Morse Macavity Award winners Members of the Detection Club Officers of the Order of the British Empire Crossword creators Royal Corps of Signals soldiers 20th-century British Army personnel
5689
https://en.wikipedia.org/wiki/College
College
A college (Latin: collegium) is an educational institution or a constituent part of one. A college may be a degree-awarding tertiary educational institution, a part of a collegiate or federal university, an institution offering vocational education, a further education institution, or a secondary school. In most of the world, a college may be a high school or secondary school, a college of further education, a training institution that awards trade qualifications, a higher-education provider that does not have university status (often without its own degree-awarding powers), or a constituent part of a university. In the United States, a college may offer undergraduate programs – either as an independent institution or as the undergraduate program of a university – or it may be a residential college of a university or a community college, referring to (primarily public) higher education institutions that aim to provide affordable and accessible education, usually limited to two-year associate degrees. The word is generally also used as a synonym for a university in the US. Colleges in countries such as France, Belgium, and Switzerland provide secondary education. Etymology The word "college" is from the Latin verb lego, legere, legi, lectum, "to collect, gather together, pick", plus the preposition cum, "with", thus meaning "selected together". Thus "colleagues" are literally "persons who have been selected to work together". In ancient Rome a collegium was a "body, guild, corporation united in colleagueship; of magistrates, praetors, tribunes, priests, augurs; a political club or trade guild". Thus a college was a form of corporation or corporate body, an artificial legal person (body/corpus) with its own legal personality, with the capacity to enter into legal contracts, to sue and be sued. In mediaeval England there were colleges of priests, for example in chantry chapels; modern survivals include the Royal College of Surgeons in England (originally the Guild of Surgeons Within the City of London), the College of Arms in London (a body of heralds enforcing heraldic law), an electoral college (to elect representatives); all groups of persons "selected in common" to perform a specified function and appointed by a monarch, founder or other person in authority. As for the modern "college of education", it was a body created for that purpose, for example Eton College was founded in 1440 by letters patent of King Henry VI for the constitution of a college of Fellows, priests, clerks, choristers, poor scholars, and old poor men, with one master or governor, whose duty it shall be to instruct these scholars and any others who may resort thither from any part of England in the knowledge of letters, and especially of grammar, without payment". Overview Higher education Within higher education, the term can be used to refer to: A constituent part of a collegiate university, for example King's College, Cambridge, or of a federal university, for example King's College London. A liberal arts college, an independent institution of higher education focusing on undergraduate education, such as Williams College or Amherst College. A liberal arts division of a university whose undergraduate program does not otherwise follow a liberal arts model, such as the Yuanpei College at Peking University. An institute providing specialised training, such as a college of further education, for example Belfast Metropolitan College, a teacher training college, or an art college. 
A Catholic higher education institute which includes universities, colleges, and other institutions of higher education privately run by the Catholic Church, typically by religious institutes. Those tied to the Holy See are specifically called pontifical universities. In the United States, college is sometimes but rarely a synonym for a research university, such as Dartmouth College, one of the eight universities in the Ivy League. In the United States, the undergraduate college of a university which also confers graduate degrees, such as Yale College, the undergraduate college within Yale University. Further education A sixth form college or college of further education is an educational institution in England, Wales, Northern Ireland, Belize, the Caribbean, Malta, Norway, Brunei, and Southern Africa, among others, where students aged 16 to 19 typically study for advanced school-level qualifications, such as A-levels, BTEC, HND or its equivalent and the International Baccalaureate Diploma, or school-level qualifications such as GCSEs. In Singapore and India, this is known as a junior college. The municipal government of the city of Paris uses the phrase "sixth form college" as the English name for a lycée. Secondary education In some national education systems, secondary schools may be called "colleges" or have "college" as part of their title. In Australia the term "college" is applied to any private or independent (non-government) primary and, especially, secondary school as distinct from a state school. Melbourne Grammar School, Cranbrook School, Sydney and The King's School, Parramatta are considered colleges. There has also been a recent trend to rename or create government secondary schools as "colleges". In the state of Victoria, some state high schools are referred to as secondary colleges, although the pre-eminent government secondary school for boys in Melbourne is still named Melbourne High School. In Western Australia, South Australia and the Northern Territory, "college" is used in the name of all state high schools built since the late 1990s, and also some older ones. In New South Wales, some high schools, especially multi-campus schools resulting from mergers, are known as "secondary colleges". In Queensland some newer schools which accept primary and high school students are styled state college, but state schools offering only secondary education are called "State High School". In Tasmania and the Australian Capital Territory, "college" refers to the final two years of high school (years 11 and 12), and the institutions which provide this. In this context, "college" is a system independent of the other years of high school. Here, the expression is a shorter version of matriculation college. In a number of Canadian cities, many government-run secondary schools are called "collegiates" or "collegiate institutes" (C.I.), a complicated form of the word "college" which avoids the usual "post-secondary" connotation. This is because these secondary schools have traditionally focused on academic, rather than vocational, subjects and ability levels (for example, collegiates offered Latin while vocational schools offered technical courses). Some private secondary schools (such as Upper Canada College, Vancouver College) choose to use the word "college" in their names nevertheless. Some secondary schools elsewhere in the country, particularly ones within the separate school system, may also use the word "college" or "collegiate" in their names. 
In New Zealand the word "college" normally refers to a secondary school for ages 13 to 17 and "college" appears as part of the name especially of private or integrated schools. "Colleges" most frequently appear in the North Island, whereas "high schools" are more common in the South Island. In the Netherlands, "college" is equivalent to HBO (higher professional education). It is oriented towards professional training with a clear occupational outlook, unlike universities, which are scientifically oriented. In South Africa, some secondary schools, especially private schools on the English public school model, have "college" in their title, including six of South Africa's Elite Seven high schools. A typical example of this category would be St John's College. Private schools that specialize in improving children's marks through intensive focus on examination needs are informally called "cram-colleges". In Sri Lanka the word "college" (known as Vidyalaya in Sinhala) normally refers to a secondary school, usually one offering classes above the 5th standard. During the British colonial period a limited number of exclusive secondary schools were established on the English public school model (Royal College Colombo; S. Thomas' College, Mount Lavinia; Trinity College, Kandy); these, along with several Catholic schools (St. Joseph's College, Colombo, and St Anthony's College), traditionally carry the name "college". Following the start of free education in 1931, a large group of central colleges was established to educate the rural masses. Since Sri Lanka gained independence in 1948, many newly established schools have also been named "college". Other As well as an educational institution, the term, in accordance with its etymology, may also refer to any formal group of colleagues set up under statute or regulation, often under a royal charter. Examples include an electoral college, the College of Arms, a college of canons, and the College of Cardinals. Other collegiate bodies include professional associations, particularly in medicine and allied professions. In the UK these include the Royal College of Nursing and the Royal College of Physicians. Examples in the United States include the American College of Physicians, the American College of Surgeons, and the American College of Dentists. An example in Australia is the Royal Australian College of General Practitioners. College by country The different ways in which the term "college" is used to describe educational institutions in various regions of the world are listed below: Americas Canada In Canadian English, the term "college" usually refers to a trades school, an applied arts/science/technology/business/health school or a community college. These are post-secondary institutions granting certificates, diplomas, associate degrees and (in some cases) bachelor's degrees. The French acronym specific to public institutions within Quebec's particular system of pre-university and technical education is CEGEP (Collège d'enseignement général et professionnel, "college of general and professional education"). They are collegiate-level institutions that a student typically enrols in if they wish to continue onto university in the Quebec education system, or to learn a trade. In Ontario and Alberta, there are also institutions that are designated university colleges, which grant only undergraduate degrees. This is to differentiate them from universities, which have both undergraduate and graduate programs.
In Canada, there is a strong distinction between "college" and "university". In conversation, one would specifically say either "they are going to university" (i.e., studying for a three- or four-year degree at a university) or "they are going to college" (i.e., studying at a technical or career-training institution). Usage in a university setting The term college also applies to distinct entities that formally act as an affiliated institution of the university, formally referred to as federated or affiliated colleges. A university may also formally include several constituent colleges, forming a collegiate university. Examples of collegiate universities in Canada include Trent University and the University of Toronto. These types of institutions act independently, maintaining their own endowments and properties. However, they remain either affiliated or federated with the overarching university, with the overarching university being the institution that formally grants the degrees. For example, Trinity College was once an independent institution, but later became federated with the University of Toronto. Several centralized universities in Canada have mimicked the collegiate university model, although constituent colleges in a centralized university remain under the authority of the central administration. Centralized universities that have adopted the collegiate model to a degree include the University of British Columbia, with Green College and St. John's College, and the Memorial University of Newfoundland, with Sir Wilfred Grenfell College. Occasionally, "college" refers to a subject-specific faculty within a university that, while distinct, is neither federated nor affiliated: the College of Education, College of Medicine, College of Dentistry, and College of Biological Science, among others. The Royal Military College of Canada is a military college which trains officers for the Canadian Armed Forces. The institution is a full-fledged university, with the authority to issue graduate degrees, although it continues to use the term college in its name. The institution's sister school, Royal Military College Saint-Jean, also uses the term college in its name, although its academic offering is akin to that of a CEGEP institution in Quebec. A number of post-secondary art schools in Canada formerly used the word college in their names, despite formally being universities. However, most of these institutions were renamed or re-branded in the early 21st century, omitting the word college from their names. Usage in secondary education The word college continues to be used in the names of public separate secondary schools in Ontario. A number of independent schools across Canada also use the word college in their names. Public secular school boards in Ontario also refer to their secondary schools as collegiate institutes. However, usage of the term collegiate institute varies between school boards. Collegiate institute is the predominant name for secondary schools in the Lakehead District School Board and the Toronto District School Board, although most school boards in Ontario use collegiate institute alongside high school and secondary school in the names of their institutions. Similarly, secondary schools in Regina and Saskatoon are referred to as collegiates. Chile Officially, since 2009, the Pontifical Catholic University of Chile has used the term "college" as the name of a tertiary education program leading to a bachelor's degree.
The program features a Bachelor of Natural Sciences and Mathematics, a Bachelor of Social Science and a Bachelor of Arts and Humanities. It follows the same system as American universities: it combines majors and minors, and it lets students continue to a higher degree at the same university once the program is completed. In Chile, however, the term "college" is not usually used for tertiary education; it is used mainly in the names of some private bilingual schools, corresponding to levels 0, 1 and 2 of ISCED 2011. Examples include Santiago College and Saint George's College, among others. United States In the United States, there were 5,916 post-secondary institutions (universities and colleges); the number peaked at 7,253 in 2012–13 and has fallen every year since. A "college" in the US can refer to a constituent part of a university (which can be a residential college, the sub-division of the university offering undergraduate courses, or a school of the university offering particular specialized courses), an independent institution offering bachelor's-level courses, or an institution offering instruction in a particular professional, technical or vocational field. In popular usage, the word "college" is the generic term for any post-secondary undergraduate education. Americans "go to college" after high school, regardless of whether the specific institution is formally a college or a university. Some students choose to dual-enroll, by taking college classes while still in high school. The word and its derivatives are the standard terms used to describe the institutions and experiences associated with American post-secondary undergraduate education. Students must pay for college before taking classes. Some borrow the money via loans, and some students fund their educations with cash, scholarships, grants, or some combination of these payment methods. In 2011, the state or federal government subsidized $8,000 to $100,000 for each undergraduate degree. For state-owned schools (called "public" universities), the subsidy was given to the college, with the student benefiting from lower tuition. The state subsidized on average 50% of public university tuition. Colleges vary in terms of size, degree, and length of stay. Two-year colleges, also known as junior or community colleges, usually offer an associate degree, and four-year colleges usually offer a bachelor's degree. Often, these are entirely undergraduate institutions, although some have graduate school programs. Four-year institutions in the U.S. that emphasize a liberal arts curriculum are known as liberal arts colleges. Until the 20th century, liberal arts, law, medicine, theology, and divinity were about the only forms of higher education available in the United States. These schools have traditionally emphasized instruction at the undergraduate level, although advanced research may still occur at these institutions. While there is no national standard in the United States, the term "university" primarily designates institutions that provide undergraduate and graduate education. A university typically has as its core and its largest internal division an undergraduate college teaching a liberal arts curriculum, also culminating in a bachelor's degree. What often distinguishes a university is having, in addition, one or more graduate schools engaged in both teaching graduate classes and in research. Often these would be called a School of Law or School of Medicine (but may also be called a college of law or a faculty of law).
An exception is Vincennes University, Indiana, which is styled and chartered as a "university" even though almost all of its academic programs lead only to two-year associate degrees. Some institutions, such as Dartmouth College and The College of William & Mary, have retained the term "college" in their names for historical reasons. In one unique case, Boston College and Boston University, the former located in Chestnut Hill, Massachusetts and the latter located in Boston, Massachusetts, are completely separate institutions. Usage of the terms varies among the states. In 1996, for example, Georgia changed all of its four-year institutions previously designated as colleges to universities, and all of its vocational technology schools to technical colleges. The terms "university" and "college" do not exhaust all possible titles for an American institution of higher education. Other options include "institute" (Worcester Polytechnic Institute and Massachusetts Institute of Technology), "academy" (United States Military Academy), "union" (Cooper Union), "conservatory" (New England Conservatory), and "school" (Juilliard School). In colloquial use, they are still referred to as "college" when referring to their undergraduate studies. The term college is also, as in the United Kingdom, used for a constituent semi-autonomous part of a larger university but generally organized on academic rather than residential lines. For example, at many institutions, the undergraduate portion of the university can be briefly referred to as the college (such as The College of the University of Chicago, Harvard College at Harvard, or Columbia College at Columbia) while at others, such as the University of California, Berkeley, "colleges" are collections of academic programs and other units that share some common characteristics, mission, or disciplinary focus (the "college of engineering", the "college of nursing", and so forth). There exist other variants for historical reasons, including some uses that exist because of mergers and acquisitions; for example, Duke University, which was called Trinity College until the 1920s, still calls its main undergraduate subdivision Trinity College of Arts and Sciences. Residential colleges Some American universities, such as Princeton, Rice, and Yale have established residential colleges (sometimes, as at Harvard, the first to establish such a system in the 1930s, known as houses) along the lines of Oxford or Cambridge. Unlike the Oxbridge colleges, but similarly to Durham, these residential colleges are not autonomous legal entities nor are they typically much involved in education itself, being primarily concerned with room, board, and social life. At the University of Michigan, University of California, San Diego and the University of California, Santa Cruz, each residential college teaches its own core writing courses and has its own distinctive set of graduation requirements. Many U.S. universities have placed increased emphasis on their residential colleges in recent years. This is exemplified by the creation of new colleges at Ivy League schools such as Yale University and Princeton University, and efforts to strengthen the contribution of the residential colleges to student education, including through a 2016 taskforce at Princeton on residential colleges. Origin of the U.S. usage The founders of the first institutions of higher education in the United States were graduates of the University of Oxford and the University of Cambridge. 
The small institutions they founded would not have seemed to them like universities – they were tiny and did not offer the higher degrees in medicine and theology. Furthermore, they were not composed of several small colleges. Instead, the new institutions felt like the Oxford and Cambridge colleges they were used to – small communities, housing and feeding their students, with instruction from residential tutors (as in the United Kingdom, described above). When the first students graduated, these "colleges" assumed the right to confer degrees upon them, usually with authority—for example, The College of William & Mary has a royal charter from the British monarchy allowing it to confer degrees while Dartmouth College has a charter permitting it to award degrees "as are usually granted in either of the universities, or any other college in our realm of Great Britain." The leaders of Harvard College (which granted America's first degrees in 1642) might have thought of their college as the first of many residential colleges that would grow up into a New Cambridge university. However, over time, few new colleges were founded there, and Harvard grew and added higher faculties. Eventually, it changed its title to university, but the term "college" had stuck and "colleges" have arisen across the United States. In U.S. usage, the word "college" not only embodies a particular type of school, but has historically been used to refer to the general concept of higher education when it is not necessary to specify a school, as in "going to college" or "college savings accounts" offered by banks. In a survey of more than 2,000 college students in 33 states and 156 different campuses, the U.S. Public Interest Research Group found the average student spends as much as $1,200 each year on textbooks and supplies alone. By comparison, the group says that's the equivalent of 39 percent of tuition and fees at a community college, and 14 percent of tuition and fees at a four-year public university. Morrill Land-Grant Act In addition to private colleges and universities, the U.S. also has a system of government funded, public universities. Many were founded under the Morrill Land-Grant Colleges Act of 1862. A movement had arisen to bring a form of more practical higher education to the masses, as "...many politicians and educators wanted to make it possible for all young Americans to receive some sort of advanced education." The Morrill Act "...made it possible for the new western states to establish colleges for the citizens." Its goal was to make higher education more easily accessible to the citizenry of the country, specifically to improve agricultural systems by providing training and scholarship in the production and sales of agricultural products, and to provide formal education in "...agriculture, home economics, mechanical arts, and other professions that seemed practical at the time." The act was eventually extended to allow all states that had remained with the Union during the American Civil War, and eventually all states, to establish such institutions. Most of the colleges established under the Morrill Act have since become full universities, and some are among the elite of the world. Benefits of college Selection of a four-year college as compared to a two-year junior college, even by marginal students such as those with a C+ grade average in high school and SAT scores in the mid 800s, increases the probability of graduation and confers substantial economic and social benefits. 
Asia Bangladesh In Bangladesh, educational institutions offering higher secondary (11th–12th grade) education are known as colleges. Hong Kong In Hong Kong, the term 'college' is used by tertiary institutions either as part of their names or to refer to a constituent part of the university, such as the colleges of the collegiate Chinese University of Hong Kong; or to a residence hall of a university, such as St. John's College, University of Hong Kong. Many older secondary schools have the term 'college' as part of their names. India The modern system of education was heavily influenced by the British starting in 1835. In India, the term "college" is commonly reserved for institutions that offer high school diplomas at year 12 ("Junior College", similar to American high schools) and those that offer the bachelor's degree; some colleges, however, offer programmes up to PhD level. Generally, colleges are located in different parts of a state and all of them are affiliated to a regional university. The colleges offer programmes leading to degrees of that university. Colleges may be either autonomous or non-autonomous. Autonomous colleges are empowered to establish their own syllabus, and conduct and assess their own examinations; in non-autonomous colleges, examinations are conducted by the university at the same time for all colleges under its affiliation. There are several hundred universities and each university has affiliated colleges, often a large number. The first liberal arts and sciences college in India was "Cottayam College", or the "Syrian College", founded in Kerala in 1815. The first inter-linguistic residential education institution in Asia was started at this college. At present it is a theological seminary, popularly known as the Orthodox Theological Seminary or Old Seminary. It was followed by CMS College, Kottayam, established in 1817, and Presidency College, Kolkata, also established in 1817 and initially known as Hindu College. The first college for the study of Christian theology and ecumenical enquiry was Serampore College (1818). The first missionary institution to impart Western-style education in India was the Scottish Church College, Calcutta (1830). The first commerce and economics college in India was Sydenham College, Mumbai (1913). A further category has also been introduced in India: autonomous institutes and colleges. These are colleges that remain affiliated to a university but can conduct their own admission procedures and set their own examination syllabi, fee structures and so on. However, on completion of a course they cannot issue their own degree or diploma; the final degree or diploma is issued by the affiliating university. Significant changes may also follow from the National Education Policy 2020 (NEP), which may affect the present guidelines for universities and colleges. Israel In Israel, any non-university higher-learning facility is called a college. Institutions accredited by the Council for Higher Education in Israel (CHE) to confer a bachelor's degree are called "academic colleges". These colleges (at least four as of 2012) may also offer master's degrees and act as research facilities. There are also over twenty teacher training colleges or seminaries, most of which may award only a Bachelor of Education (BEd) degree. Academic colleges: any educational facility that has been approved to offer at least a bachelor's degree is entitled by the CHE to use the term "academic college" in its name.
Engineering academic college: an academic facility that offers at least a bachelor's degree and in which most of the faculties provide an engineering degree and engineering license. Educational academic college: after an educational facility that had been approved for "teachers' seminary" status is also approved to provide a Bachelor of Education, its name is changed to include "educational academic college". Technical college: an educational facility that is approved to provide a practical engineering (הנדסאי, 14th grade) or technician (טכנאי, 13th grade) diploma and the corresponding licenses. Training college: an educational facility that provides basic training allowing a person to receive a working permit in a field such as alternative medicine, cooking, art, or a mechanical or electrical trade. A trainee may receive the right to work in certain professions as an apprentice (junior mechanic, junior electrician, etc.). After working in the field for long enough, an apprentice can obtain a full license to operate (mechanic, electrician). These facilities are mostly used to provide basic training for low-tech jobs and for job seekers without any training, with such courses provided by the national Employment Service (שירות התעסוקה). Macau Following the Portuguese usage, the term "college" (colégio) in Macau has traditionally been used in the names of private (and non-governmental) pre-university educational institutions, which correspond to the form one to form six levels. Such schools are usually run by the Roman Catholic church or missionaries in Macau. Examples include Chan Sui Ki Perpetual Help College, Yuet Wah College, and Sacred Heart Canossian College. Philippines In the Philippines, colleges usually refer to institutions of learning that grant degrees but whose scholastic fields are not as diverse as those of a university (University of Santo Tomas, University of the Philippines, Ateneo de Manila University, De La Salle University, Far Eastern University, and AMA University); examples are San Beda College, which specializes in law, AMA Computer College, whose campuses are spread all over the Philippines and which specializes in information and computing technologies, and the Mapúa Institute of Technology, which specializes in engineering. The term may also refer to component units within universities that do not grant degrees but rather facilitate the instruction of a particular field, such as a College of Science and College of Engineering, among many other colleges of the University of the Philippines. A state college may not have the word "college" in its name, but may have several component colleges, or departments. Thus, the Eulogio Amang Rodriguez Institute of Science and Technology is a state college by classification. The term "college" is also usually thought of as hierarchically below the term "university", and quite a number of colleges seek to be recognized as universities as a sign of improvement in academic standards (Colegio de San Juan de Letran, San Beda College) and of an increase in the diversity of the offered degree programs (called "courses"). For private colleges, this may be done through a survey and evaluation by the Commission on Higher Education and accrediting organizations, as was the case with Urios College, which is now the Fr. Saturnino Urios University. For state colleges, it is usually done by legislation by the Congress or Senate.
In common usage, "going to college" simply means attending school for an undergraduate degree, whether it's from an institution recognized as a college or a university. When it comes to referring to the level of education, college is the term more used to be synonymous to tertiary or higher education. A student who is or has studied his/her undergraduate degree at either an institution with college or university in its name is considered to be going to or have gone to college. Singapore The term "college" in Singapore is generally only used for pre-university educational institutions called "Junior Colleges", which provide the final two years of secondary education (equivalent to sixth form in British terms or grades 11–12 in the American system). Since 1 January 2005, the term also refers to the three campuses of the Institute of Technical Education with the introduction of the "collegiate system", in which the three institutions are called ITE College East, ITE College Central, and ITE College West respectively. The term "university" is used to describe higher-education institutions offering locally conferred degrees. Institutions offering diplomas are called "polytechnics", while other institutions are often referred to as "institutes" and so forth. Sri Lanka There are several professional and vocational institutions that offer post-secondary education without granting degrees that are referred to as "colleges". This includes the Sri Lanka Law College, the many Technical Colleges and Teaching Colleges. Turkey In Turkey, the term "kolej" (college) refers to a private high school, typically preceded by one year of preparatory language education. Notable Turkish colleges include Robert College, Uskudar American Academy, American Collegiate Institute and Tarsus American College. Africa South Africa Although the term "college" is hardly used in any context at any university in South Africa, some non-university tertiary institutions call themselves colleges. These include teacher training colleges, business colleges and wildlife management colleges. See: List of universities in South Africa#Private colleges and universities; List of post secondary institutions in South Africa. Zimbabwe The term college is mainly used by private or independent secondary schools with Advanced Level (Upper 6th formers) and also Polytechnic Colleges which confer diplomas only. A student can complete secondary education (International General Certificate of Secondary Education, IGCSE) at 16 years and proceed straight to a poly-technical college or they can proceed to Advanced level (16 to 19 years) and obtain a General Certificate of Education (GCE) certificate which enables them to enroll at a university, provided they have good grades. Alternatively, with lower grades, the GCE certificate holders will have an added advantage over their GCSE counterparts if they choose to enroll at a polytechnical college. Some schools in Zimbabwe choose to offer the International Baccalaureate studies as an alternative to the IGCSE and GCE. Europe Greece Kollegio (in Greek Κολλέγιο) refers to the Centers of Post-Lyceum Education (in Greek Κέντρο Μεταλυκειακής Εκπαίδευσης, abbreviated as KEME), which are principally private and belong to the Greek post-secondary education system. Some of them have links to EU or US higher education institutions or accreditation organizations, such as the NEASC. Kollegio (or Kollegia in plural) may also refer to private non-tertiary schools, such as the Athens College. 
Ireland In Ireland the term "college" is normally used to describe an institution of tertiary education. University students often say they attend "college" rather than "university". Until 1989, no university provided teaching or research directly; these were formally offered by a constituent college of the university. There are a number of secondary education institutions that traditionally used the word "college" in their names: these are either older, private schools (such as Belvedere College, Gonzaga College, Castleknock College, and St. Michael's College) or what were formerly a particular kind of secondary school. These secondary schools, formerly known as "technical colleges", were renamed "community colleges", but remain secondary schools. The country's only ancient university is the University of Dublin. Created during the reign of Elizabeth I, it is modelled on the collegiate universities of Cambridge and Oxford. However, only one constituent college was ever founded, hence the curious position of Trinity College Dublin today; although both are usually considered one and the same, the university and college are completely distinct corporate entities with separate and parallel governing structures. Among more modern foundations, the National University of Ireland, founded in 1908, consisted of constituent colleges and recognised colleges until 1997. The former are now referred to as constituent universities – institutions that are essentially universities in their own right. The National University can trace its existence back to 1850 and the creation of the Queen's University of Ireland, and to the creation of the Catholic University of Ireland in 1854. From 1880, the degree-awarding roles of these two universities were taken over by the Royal University of Ireland, which remained until the creation of the National University in 1908 and of Queen's University Belfast. The state's two new universities, Dublin City University and University of Limerick, were initially National Institute for Higher Education institutions. These institutions offered university-level academic degrees and research from the start of their existence and were awarded university status in 1989 in recognition of this. Third-level technical education in the state has been carried out in the Institutes of Technology, which were established from the 1970s as Regional Technical Colleges. These institutions have delegated authority which entitles them to give degrees and diplomas from Quality and Qualifications Ireland (QQI) in their own names. A number of private colleges exist, such as Dublin Business School, providing undergraduate and postgraduate courses validated by QQI and in some cases by other universities. Other types of college include colleges of education, such as the Church of Ireland College of Education. These are specialist institutions, often linked to a university, which provide both undergraduate and postgraduate academic degrees for people who want to train as teachers. A number of state-funded further education colleges exist, which offer vocational education and training in a range of areas from business studies and information and communications technology to sports injury therapy. These courses are usually one, two or (less often) three years in duration and are validated by QQI at Levels 5 or 6, or for the BTEC Higher National Diploma award, which is a Level 6/7 qualification, validated by Edexcel.
There are numerous private colleges (particularly in Dublin and Limerick) which offer both further and higher education qualifications. These degrees and diplomas are often certified by foreign universities/international awarding bodies and are aligned to the National Framework of Qualifications at Levels 6, 7 and 8. Netherlands In the Netherlands there are 3 main educational routes after high school. MBO (middle-level applied education), which is the equivalent of junior college. Designed to prepare students for either skilled trades and technical occupations and workers in support roles in professions such as engineering, accountancy, business administration, nursing, medicine, architecture, and criminology or for additional education at another college with more advanced academic material. HBO (higher professional education), which is the equivalent of college and has a professional orientation. After HBO (typically 4–6 years), pupils can enroll in a (professional) master's program (1–2 years) or enter the job market. The HBO is taught in vocational universities (hogescholen), of which there are over 40 in the Netherlands, each of which offers a broad variety of programs, with the exception of some that specialize in arts or agriculture. Note that the hogescholen are not allowed to name themselves university in Dutch. This also stretches to English and therefore HBO institutions are known as universities of applied sciences. WO (Scientific education), which is the equivalent to university level education and has an academic orientation. HBO graduates can be awarded two titles, which are Baccalaureus (bc.) and Ingenieur (ing.). At a WO institution, many more bachelor's and master's titles can be awarded. Bachelor's degrees: Bachelor of Arts (BA), Bachelor of Science (BSc) and Bachelor of Laws (LLB). Master's degrees: Master of Arts (MA), Master of Laws (LLM) and Master of Science (MSc). The PhD title is a research degree awarded upon completion and defense of a doctoral thesis. Portugal Presently in Portugal, the term colégio (college) is normally used as a generic reference to a private (non-government) school that provides from basic to secondary education. Many of the private schools include the term colégio in their name. Some special public schools – usually of the boarding school type – also include the term in their name, with a notable example being the Colégio Militar (Military College). The term colégio interno (literally "internal college") is used specifically as a generic reference to a boarding school. Until the 19th century, a colégio was usually a secondary or pre-university school, of public or religious nature, where the students usually lived together. A model for these colleges was the Royal College of Arts and Humanities, founded in Coimbra by King John III of Portugal in 1542. United Kingdom Secondary education and further education Further education (FE) colleges and sixth form colleges are institutions providing further education to students over 16. Some of these also provide higher education courses (see below). In the context of secondary education, 'college' is used in the names of some private schools, e.g. Eton College and Winchester College. Higher education In higher education, a college is normally a provider that does not hold university status, although it can also refer to a constituent part of a collegiate or federal university or a grouping of academic faculties or departments within a university. 
Traditionally the distinction between colleges and universities was that colleges did not award degrees while universities did, but this is no longer the case with NCG having gained taught degree awarding powers (the same as some universities) on behalf of its colleges, and many of the colleges of the University of London holding full degree awarding powers and being effectively universities. Most colleges, however, do not hold their own degree awarding powers and continue to offer higher education courses that are validated by universities or other institutions that can award degrees. In England, over 60% of the higher education providers directly funded by HEFCE (208 out of 340) are sixth-form or further education colleges, often termed colleges of further and higher education, along with 17 colleges of the University of London, one university college, 100 universities, and 14 other providers (six of which use 'college' in their name). Overall, this means over two-thirds of state-supported higher education providers in England are colleges of one form or another. Many private providers are also called colleges, e.g. the New College of the Humanities and St Patrick's College, London. Colleges within universities vary immensely in their responsibilities. The large constituent colleges of the University of London are effectively universities in their own right; colleges in some universities, including those of the University of the Arts London and smaller colleges of the University of London, run their own degree courses but do not award degrees; those at the University of Roehampton provide accommodation and pastoral care as well as delivering the teaching on university courses; those at Oxford and Cambridge deliver some teaching on university courses as well as providing accommodation and pastoral care; and those in Durham, Kent, Lancaster and York provide accommodation and pastoral care but do not normally participate in formal teaching. The legal status of these colleges also varies widely, with University of London colleges being independent corporations and recognised bodies, Oxbridge colleges, colleges of the University of the Highlands and Islands (UHI) and some Durham colleges being independent corporations and listed bodies, most Durham colleges being owned by the university but still listed bodies, and those of other collegiate universities not having formal recognition. When applying for undergraduate courses through UCAS, University of London colleges are treated as independent providers, colleges of Oxford, Cambridge, Durham and UHI are treated as locations within the universities that can be selected by specifying a 'campus code' in addition to selecting the university, and colleges of other universities are not recognised. The UHI and the University of Wales Trinity Saint David (UWTSD) both include further education colleges. However, while the UHI colleges integrate FE and HE provision, UWTSD maintains a separation between the university campuses (Lampeter, Carmarthen and Swansea) and the two colleges (Coleg Sir Gâr and Coleg Ceredigion; n.b. coleg is Welsh for college), which although part of the same group are treated as separate institutions rather than colleges within the university. A university college is an independent institution with the power to award taught degrees, but which has not been granted university status.
University College is a protected title that can only be used with permission, although note that University College London, University College, Oxford and University College, Durham are colleges within their respective universities and not university colleges (in the case of UCL holding full degree awarding powers that set it above a university college), while University College Birmingham is a university in its own right and also not a university college. Oceania Australia In Australia a college may be an institution of tertiary education that is smaller than a university, run independently or as part of a university. Following a reform in the 1980s, many of the formerly independent colleges now belong to larger universities. Within universities, there are residential colleges, called university colleges, which provide residence for both undergraduate and postgraduate students. These colleges often provide additional tutorial assistance, and some host theological study. Many colleges have strong traditions and rituals, so are a combination of dormitory-style accommodation and fraternity or sorority culture. Most technical and further education institutions (TAFEs), which offer certificate and diploma vocational courses, are styled "TAFE colleges" or "Colleges of TAFE". In some places, such as Tasmania, college refers to a type of school for Year 11 and 12 students, e.g. Don College. New Zealand The constituent colleges of the former University of New Zealand (such as Canterbury University College) have become independent universities. Some halls of residence associated with New Zealand universities retain the name of "college", particularly at the University of Otago (which, although brought under the umbrella of the University of New Zealand, already possessed university status and degree awarding powers). The institutions formerly known as "teacher-training colleges" now style themselves "colleges of education". Some universities, such as the University of Canterbury, have divided their university into constituent administrative "colleges" – the College of Arts containing departments that teach arts, humanities and social sciences, the College of Science containing science departments, and so on. This is largely based on the Cambridge model, discussed above. As in the United Kingdom, some professional bodies in New Zealand style themselves as "colleges", for example the Royal Australasian College of Surgeons and the Royal Australasian College of Physicians. In some parts of the country, secondary school is often referred to as college and the term is used interchangeably with high school. This sometimes confuses people from other parts of New Zealand. But in all parts of the country many secondary schools have "College" in their name, such as Rangitoto College, New Zealand's largest secondary school. See also: Community college, Residential college, University college, Vocational university, Madrasa, Ashrama (stage)
5690
https://en.wikipedia.org/wiki/Chalmers%20University%20of%20Technology
Chalmers University of Technology
Chalmers University of Technology (commonly referred to as Chalmers) is a private research university located in Gothenburg, Sweden. Chalmers focuses on engineering and science, but more broadly it also conducts research and offers education in shipping, architecture and management. The university has approximately 3,100 employees and 10,000 students. Chalmers is internationally known for its engineering education and research, is consistently ranked among the world's top 100 universities in engineering and technology, and is considered one of Europe's leading technical universities. Chalmers is coordinating the Graphene Flagship, the European Union's biggest research initiative to bring graphene innovation out of the lab and into commercial applications, and leading the development of a Swedish quantum computer. Chalmers is a member of the UNITECH International program, the IDEA League, the Nordic Five Tech, and the ENHANCE alliances as well as the EURECOM consortium and the CESAER network.
History
Chalmers was founded in 1829 following a donation by William Chalmers, a director of the Swedish East India Company. He donated part of his fortune for the establishment of an "industrial school". The university was run as a private institution until 1937, when it became the second state-owned technical university. In 1994 the government of Sweden reorganised Chalmers into a private company (aktiebolag) owned by a government-controlled foundation. Chalmers is one of only three universities in Sweden which are named after a person, the other two being Karolinska Institutet and Linnaeus University.
Departments
Chalmers University of Technology has the following 13 departments:
Architecture and Civil Engineering
Chemistry and Chemical Engineering
Communication and Learning in Science
Computer Science and Engineering
Electrical Engineering
Industrial and Materials Science
Life Sciences
Mathematical Sciences
Mechanics and Maritime Sciences
Microtechnology and Nanoscience
Physics
Space, Earth and Environment
Technology Management and Economics
Furthermore, Chalmers is home to six Areas of Advance and six national competence centers in key fields such as materials, mathematical modelling, environmental science, and vehicle safety.
Research infrastructure
Chalmers University of Technology's research infrastructure includes everything from advanced real or virtual labs to large databases, computer capacity for large-scale calculations and research facilities:
Chalmers AI Research Centre, CHAIR
Chalmers Centre for Computational Science and Engineering, C3SE
Chalmers Mass Spectrometry Infrastructure, CMSI
Chalmers Power Central
Chalmers Materials Analysis Laboratory
Chalmers Simulator Centre
Chemical Imaging Infrastructure
Facility for Computational Systems Biology
HSB Living Lab
Nanofabrication Laboratory
Onsala Space Observatory
Revere – Chalmers Resource for Vehicle Research
The National laboratory in terahertz characterisation
SAFER – Vehicle and Traffic Safety Centre at Chalmers
Rankings and reputation
Since 2012, Chalmers has had the highest reputation among Swedish universities in Kantar Sifo's Reputation Index. According to the survey, Chalmers is the best-known university in Sweden and is regarded as a successful and competitive high-class institution with a large contribution to society and credibility in the media.
Moreover, the European Commission has recognized Chalmers as one of Europe's top universities, while, based on U-Multirank 2022, Chalmers was characterized as a top-performing university across various indicators (i.e., teaching & learning, research, knowledge transfer and international orientation), with the highest number of 'A' (very good) scores at the institutional level in Sweden. Additionally, in 2018, a benchmarking report from MIT ranked Chalmers among the top 10 in the world for engineering education, while in 2020 the World University Research Rankings placed Chalmers 12th in the world based on the evaluation of three key research aspects, namely research multi-disciplinarity, research impact, and research cooperativeness. Finally, in 2011, in the International Professional Ranking of Higher Education Institutions, which is established on the basis of the number of alumni holding the post of Chief Executive Officer (CEO) or equivalent in one of the Fortune Global 500 companies, Chalmers ranked 38th in the world, 1st in Sweden and 15th in Europe.
Ties and partnerships
Chalmers is a member of the IDEA League network, a strategic alliance between five leading European universities of science and technology. The scope of the network is to provide the environment for students, researchers and staff to share knowledge, experience and resources. Moreover, Chalmers is a partner of UNITECH International, an organization consisting of distinguished technical universities and multinational companies across Europe. UNITECH helps bridge the gap between the industrial and academic worlds by offering exchange programs consisting of studies as well as an integrated internship at one of the corporate partners. Chalmers is also a member of the Nordic Five Tech network, a strategic alliance of the five leading technical universities in Denmark, Finland, Norway and Sweden. The Nordic Five Tech universities are amongst the top international technical universities, with the goal of creating synergies within education, research and innovation. Additionally, Chalmers is a member of ENHANCE, an alliance of ten leading universities of technology shaping the future of Europe and driving transformation in science and society. The partner institutions have a history of solid cooperation in EU programmes and joint research projects. Furthermore, Chalmers is a member of CESAER, a European association of universities of science and technology. Among the requirements for a university to be a member of CESAER are to provide excellent science and technology research, education and innovation, and to have a leading position in its region, its country and beyond. Additionally, Chalmers has established formal agreements with three leading materials science centers: the University of California, Santa Barbara, ETH Zurich and Stanford University. Within the framework of the agreements, a yearly bilateral workshop is organized, and exchange of researchers is supported. Chalmers has general exchange agreements with many European and U.S. universities and maintains a special exchange program agreement with National Chiao Tung University (NCTU) in Taiwan, where the exchange students from the two universities maintain offices for, among other things, helping local students with applying and preparing for an exchange year as well as acting as representatives. Finally, Chalmers has strong partnerships with major industries such as Ericsson, Volvo, Saab AB and AstraZeneca.
Students
Approximately 40% of Sweden's graduate engineers and architects are educated at Chalmers. Each year, around 250 postgraduate degrees and 850 graduate degrees are awarded. About 1,000 post-graduate students attend programmes at the university, and many students are taking Master of Science engineering programmes and the Master of Architecture programme. Since 2007, all master's programmes have been taught in English for both national and international students. This was a result of the adaptation to the Bologna process that started in 2004 at Chalmers (the first technical university in Sweden to do so). Currently, about 10% of all students at Chalmers come from countries outside Sweden to enrol in a master's or PhD program. Around 2,700 students also attend Bachelor of Science engineering programmes, merchant marine and other undergraduate courses at Campus Lindholmen. Chalmers also shares some students with Gothenburg University in the joint IT University project. The IT University focuses exclusively on information technology and offers bachelor's and master's programmes with degrees issued from either Chalmers or Gothenburg University, depending on the programme. Chalmers confers honorary doctoral degrees on people outside the university who have shown great merit in their research or in society.
Organization
Chalmers is an aktiebolag with 100 shares à 1,000 SEK, all of which are owned by the Chalmers University of Technology Foundation, a private foundation, which appoints the university board and the president. The foundation has its members appointed by the Swedish government (4 to 8 seats); the departments appoint one member, the student union appoints one member and the president automatically gains one chair. Each department is led by a department head, usually a member of the faculty of that department. The faculty senate represents members of the faculty when decisions are taken.
Campuses
In 1937, the school moved from the city centre to the new Gibraltar Campus, named after the mansion which owned the grounds, where it is now located. The Lindholmen College Campus was created in the early 1990s and is located on the island of Hisingen. Campus Johanneberg and Campus Lindholmen, as they are now called, are connected by bus lines.
Student societies and traditions
Traditions include the graduation ceremony and the Cortège procession, an annual public event. Student societies include:
Chalmers Students' Union
Chalmers Aerospace Club – founded in 1981; in Swedish frequently also referred to as Chalmers rymdgrupp (roughly, Chalmers Space Group). Members of CAC led the ESA-funded CACTEX (Chalmers Aerospace Club Thermal EXperiment) project, in which the thermal conductivity of alcohol at zero gravity was investigated using a sounding rocket.
Chalmers Alternative Sports – student association organizing trips and other activities to promote alternative sports. Every year the Chalmers Wake arranges a pond wakeboard contest in the fountain outside the architecture building at Chalmers.
Chalmersbaletten
Chalmers Ballong Corps
Chalmers Baroque Ensemble
Chalmers Business Society (CBS)
CETAC
Chalmers Choir
Chalmers Formula Student
ETA (E-sektionens Teletekniska Avdelning) – founded in 1935, a student-run amateur radio society that also engages in hobby electronics.
Chalmers Film and Photography Committee (CFFC)
Chalmersspexet – amateur theater group which has produced new plays since 1948
Chalmers International Reception Committee (CIRC)
XP – committee responsible for the experimental workshop, a workshop open for students
Chalmers Program Committee – PU
Chalmers Students for Sustainability (CSS) – promoting sustainable development among the students; runs projects, campaigns and lectures
Föreningen Chalmers Skeppsbyggare, Chalmers Naval Architecture Students' Society (FCS)
Chalmers Sailing Society
RANG – Chalmers Indian Association
Caster – developing and operating a Driver in the Loop (DIL) simulator, which is used in various courses and projects
Notable alumni
Christopher Ahlberg, computer scientist and entrepreneur, Spotfire and Recorded Future founder
Rune Andersson, Swedish industrialist, owner of Mellby Gård AB and billionaire
Abbas Anvari, former chancellor of Sharif University of Technology
Linn Berggren, artist and former member of Ace of Base
Gustaf Dalén, Nobel Prize in Physics
Sigfrid Edström, director ASEA, president IOC
Claes-Göran Granqvist, physicist
Margit Hall, first female architect in Sweden
Harald Hammarström, linguist
Krister Holmberg, professor of Surface Chemistry at Chalmers University of Technology
Mats Hillert, metallurgist
Ivar Jacobson, computer scientist
Erik Johansson, photographic surrealist
Jan Johansson, jazz musician
Leif Johansson, former CEO Volvo
Olav Kallenberg, probability theorist
Marianne Kärrholm, chemical engineer and Chalmers professor
Hjalmar Kumlien, architect
Abraham Langlet, chemist
Martin Lorentzon, Spotify and TradeDoubler founder
Ingemar Lundström, physicist, chairman of the Nobel Committee for Physics
Carl Magnusson, industrial designer and inventor
Semir Mahjoub, businessman and entrepreneur
Peter Nordin, computer scientist and entrepreneur
Åke Öberg, biomedical scientist
Leif Östling, CEO Scania AB
PewDiePie (Felix Arvid Ulf Kjellberg), YouTuber (no degree)
Carl Abraham Pihl, engineer and director of the first Norwegian railroad (Hovedbanen)
Richard Soderberg, businessman, inventor and professor at Massachusetts Institute of Technology
Hans Stråberg, former President and CEO of Electrolux
Ludvig Strigeus, computer scientist and entrepreneur
Per Håkan Sundell, computer scientist and entrepreneur
Jan Wäreby, businessman
Gert Wingårdh, architect
Vera Sandberg, engineer
Anna von Hausswolff, musician
Anita Schjøll Brede, entrepreneur
Presidents
Although the official Swedish title for the head is "rektor", the university now uses "President" as the English translation.
See also: Chalmers School of Entrepreneurship, IT University of Göteborg, List of universities in Sweden, Marie Rådbo (astronomer), The International Science Festival in Gothenburg, University of Gothenburg (Göteborg University)
External links: Chalmers University of Technology – official site, Chalmers Student Union, Chalmers Alumni Association
5691
https://en.wikipedia.org/wiki/Codex
Codex
The codex (plural: codices) was the historical ancestor of the modern book. Instead of being composed of sheets of paper, it used sheets of vellum, papyrus, or other materials. The term codex is often used for ancient manuscript books, with handwritten contents. A codex, much like the modern book, is bound by stacking the pages and securing one set of edges by a variety of methods over the centuries, yet in a form analogous to modern bookbinding. Modern books are divided into paperback (or softback) and those bound with stiff boards, called hardbacks. Elaborate historical bindings are called treasure bindings. At least in the Western world, the main alternative to the paged codex format for a long document was the continuous scroll, which was the dominant form of document in the ancient world. Some codices are continuously folded like a concertina, in particular the Maya codices and Aztec codices, which are actually long sheets of paper or animal skin folded into pages. In Japan, concertina-style codices called orihon developed during the Heian period (794–1185) were made of paper. The Ancient Romans developed the form from wax tablets. The gradual replacement of the scroll by the codex has been called the most important advance in book making before the invention of the printing press. The codex transformed the shape of the book itself, and offered a form that has lasted ever since. The spread of the codex is often associated with the rise of Christianity, which early on adopted the format for the Bible. First described in the 1st century of the Common Era, when the Roman poet Martial praised its convenient use, the codex achieved numerical parity with the scroll around 300 CE, and had completely replaced it throughout what was by then a Christianized Greco-Roman world by the 6th century. Etymology and origins The word codex comes from the Latin word caudex, meaning "trunk of a tree", "block of wood" or "book". The codex began to replace the scroll almost as soon as it was invented, although new finds add three centuries to its history (see below). In Egypt, by the fifth century, the codex outnumbered the scroll by ten to one based on surviving examples. By the sixth century, the scroll had almost vanished as a medium for literature. The change from rolls to codices roughly coincides with the transition from papyrus to parchment as the preferred writing material, but the two developments are unconnected. In fact, any combination of codices and scrolls with papyrus and parchment is technically feasible and common in the historical record. Technically, even modern notebooks and paperbacks are codices, but publishers and scholars reserve the term for manuscript (hand-written) books produced from Late antiquity until the Middle Ages. The scholarly study of these manuscripts is sometimes called codicology. The study of ancient documents in general is called paleography. The codex provided considerable advantages over other book formats, primarily its compactness, sturdiness, economic use of materials by using both sides (recto and verso), and ease of reference (a codex accommodates random access, as opposed to a scroll, which uses sequential access). History The Romans used precursors made of reusable wax-covered tablets of wood for taking notes and other informal writings. Two ancient polyptychs, a pentaptych and octoptych excavated at Herculaneum, used a unique connecting system that presages later sewing on of thongs or cords. 
A first evidence of the use of papyrus in codex form comes from the Ptolemaic period in Egypt, as a find at the University of Graz shows. Julius Caesar may have been the first Roman to reduce scrolls to bound pages in the form of a note-book, possibly even as a papyrus codex. At the turn of the 1st century AD, a kind of folded parchment notebook called pugillares membranei in Latin became commonly used for writing in the Roman Empire. Theodore Cressy Skeat theorized that this form of notebook was invented in Rome and then spread rapidly to the Near East. Codices are described in certain works by the Classical Latin poet, Martial. He wrote a series of five couplets meant to accompany gifts of literature that Romans exchanged during the festival of Saturnalia. Three of these books are specifically described by Martial as being in the form of a codex; the poet praises the compendiousness of the form (as opposed to the scroll), as well as the convenience with which such a book can be read on a journey. In another poem by Martial, the poet advertises a new edition of his works, specifically noting that it is produced as a codex, taking less space than a scroll and being more comfortable to hold in one hand. According to Theodore Cressy Skeat, this might be the first recorded known case of an entire edition of a literary work (not just a single copy) being published in codex form, though it was likely an isolated case and was not a common practice until a much later time. In his discussion of one of the earliest parchment codices to survive from Oxyrhynchus in Egypt, Eric Turner seems to challenge Skeat's notion when stating, "its mere existence is evidence that this book form had a prehistory", and that "early experiments with this book form may well have taken place outside of Egypt." Early codices of parchment or papyrus appear to have been widely used as personal notebooks, for instance in recording copies of letters sent (Cicero Fam. 9.26.1). Early codices were not always cohesive. They often contained multiple languages, various topics and even multiple authors. "Such codices formed libraries in their own right." The parchment notebook pages were "more durable, and could withstand being folded and stitched to other sheets". Parchments whose writing was no longer needed were commonly washed or scraped for re-use, creating a palimpsest; the erased text, which can often be recovered, is older and usually more interesting than the newer text which replaced it. Consequently, writings in a codex were often considered informal and impermanent. Parchment (animal skin) was expensive, and therefore it was used primarily by the wealthy and powerful, who were also able to pay for textual design and color. "Official documents and deluxe manuscripts [in the late Middle Ages] were written in gold and silver ink on parchment...dyed or painted with costly purple pigments as an expression of imperial power and wealth." As early as the early 2nd century, there is evidence that a codex—usually of papyrus—was the preferred format among Christians. In the library of the Villa of the Papyri, Herculaneum (buried in AD 79), all the texts (of Greek literature) are scrolls (see Herculaneum papyri). However, in the Nag Hammadi library, hidden about AD 390, all texts (Gnostic) are codices. 
Despite this comparison, a fragment of a non-Christian parchment codex of Demosthenes' De Falsa Legatione from Oxyrhynchus in Egypt demonstrates that the surviving evidence is insufficient to conclude whether Christians played a major or central role in the development of early codices—or if they simply adopted the format to distinguish themselves from Jews. The earliest surviving fragments from codices come from Egypt, and are variously dated (always tentatively) towards the end of the 1st century or in the first half of the 2nd. This group includes the Rylands Library Papyrus P52, containing part of St John's Gospel, and perhaps dating from between 125 and 160. In Western culture, the codex gradually replaced the scroll. Between the 4th century, when the codex gained wide acceptance, and the Carolingian Renaissance in the 8th century, many works that were not converted from scroll to codex were lost. The codex improved on the scroll in several ways. It could be opened flat at any page for easier reading, pages could be written on both front and back (recto and verso), and the protection of durable covers made it more compact and easier to transport. The ancients stored codices with spines facing inward, and not always vertically. The spine could be used for the incipit, before the concept of a proper title developed in medieval times. Though most early codices were made of papyrus, papyrus was fragile and supplied from Egypt, the only place where papyrus grew. The more durable parchment and vellum gained favor, despite the cost. The codices of pre-Columbian Mesoamerica (Mexico and Central America) had a similar appearance when closed to the European codex, but were instead made with long folded strips of either fig bark (amatl) or plant fibers, often with a layer of whitewash applied before writing. New World codices were written as late as the 16th century (see Maya codices and Aztec codices). Those written before the Spanish conquests seem all to have been single long sheets folded concertina-style, sometimes written on both sides of the amatl paper. There are significant codices produced in the colonial era, with pictorial and alphabetic texts in Spanish or an indigenous language such as Nahuatl. In East Asia, the scroll remained standard for far longer than in the Mediterranean world. There were intermediate stages, such as scrolls folded concertina-style and pasted together at the back and books that were printed only on one side of the paper. This replaced traditional Chinese writing mediums such as bamboo and wooden slips, as well as silk and paper scrolls. The evolution of the codex in China began with folded-leaf pamphlets in the 9th century, during the late Tang dynasty (618–907), improved by the 'butterfly' bindings of the Song dynasty (960–1279), the wrapped back binding of the Yuan dynasty (1271–1368), the stitched binding of the Ming (1368–1644) and Qing dynasties (1644–1912), and finally the adoption of Western-style bookbinding in the 20th century. The initial phase of this evolution, the accordion-folded palm-leaf-style book, most likely came from India and was introduced to China via Buddhist missionaries and scriptures. Judaism still retains the Torah scroll, at least for ceremonial use. Preparation The first stage in creating a codex is to prepare the animal skin. The skin is washed with water and lime but not together. The skin is soaked in the lime for a couple of days. The hair is removed, and the skin is dried by attaching it to a frame, called a herse. 
The parchment maker attaches the skin at points around the circumference. The skin is attached to the herse by cords. To prevent it from being torn, the maker wraps the area of the skin attached to each cord around a pebble called a pippin. After completing that, the maker uses a crescent-shaped knife called a lunarium or lunellum to remove any remaining hairs. Once the skin completely dries, the maker gives it a deep clean and processes it into sheets. The number of sheets from a piece of skin depends on the size of the skin and the final product dimensions. For example, the average calfskin can provide three-and-a-half medium sheets of writing material, which can be doubled when they are folded into two conjoint leaves, also known as a bifolium. Historians have found evidence of manuscripts in which the scribe wrote down the medieval instructions now followed by modern membrane makers. Defects can often be found in the membrane, whether they are from the original animal, from human error during the preparation period, or from when the animal was killed. Defects can also appear during the writing process. Unless the manuscript is kept in perfect condition, defects can also appear later in its life. Preparation of pages for writing Firstly, the membrane must be prepared. The first step is to set up the quires. The quire is a group of several sheets put together. Raymond Clemens and Timothy Graham point out, in Introduction to Manuscript Studies, that "the quire was the scribe's basic writing unit throughout the Middle Ages". Pricking is the process of making holes in a sheet of parchment (or membrane) in preparation for ruling it; the lines were then made by ruling between the prick marks. Ruling is the process of entering ruled lines on the page to serve as a guide for entering text. Most manuscripts were ruled with horizontal lines that served as the baselines on which the text was entered and with vertical bounding lines that marked the boundaries of the columns. Forming the quire From the Carolingian period to the end of the Middle Ages, different styles of folding the quire came about. For example, in continental Europe throughout the Middle Ages, the quire was folded so that like sides faced one another: the hair side met the hair side and the flesh side met the flesh side. This was not the same style used in the British Isles, where the membrane was folded so that it turned out an eight-leaf quire, with single leaves in the third and sixth positions. The next stage was tacking the quire. Tacking is when the scribe would hold the leaves of the quire together with thread. Once the leaves were threaded together, the scribe would sew a strip of parchment up the "spine" of the manuscript to protect the tacking. Materials The materials codices are made from are known as their support; they include papyrus, parchment (sometimes referred to as membrane or vellum), and paper. They are written and drawn on with metals, pigments and ink. The quality, size, and choice of support determine the status of a codex. Papyrus is found only in late antiquity and the early Middle Ages. Codices intended for display were bound with more durable materials than vellum. Parchment varied widely due to animal species and finish, and identification of the animals used to make it has only begun to be studied in the 21st century. How manufacturing influenced the final product, technique, and style is little understood; however, changes in style are underpinned more by variation in technique. 
Before the 14th and 15th centuries, paper was expensive, and its use may mark off a deluxe copy. Structure The structure of a codex includes its size, its format/ordinatio (its quires or gatherings, each consisting of sheets folded a number of times, often twice to form a bifolium), its sewing, bookbinding and rebinding. A quire consisted of a number of folded sheets inserted into one another: at least three, but most commonly four bifolia, that is, eight leaves and sixteen pages (Latin quaternio or Greek tetradion, terms which became synonyms for the quire). Unless an exemplar (text to be copied) was copied exactly, format differed. In preparation for writing codices, ruling patterns were used that determined the layout of each page. Holes were pricked with a spiked lead wheel and a circle. Ruling was then applied separately on each page or once through the top folio. Ownership markings, decorations and illumination are also a part of a codex's structure. They are specific to the scriptoria, or other production centers, and libraries of codices. Pages Watermarks may provide dates, although often only approximate ones, for when the copying occurred. The layout (the size of the margins and the number of lines) is then determined. There may be textual articulations, running heads, openings, chapters and paragraphs. Space was reserved for illustrations and decorated guide letters. The apparatus of books for scholars became more elaborate during the 13th and 14th centuries, when chapter, verse and page numbering, marginalia finding guides, indexes, glossaries and tables of contents were developed. The libraire By a close examination of the physical attributes of a codex, it is sometimes possible to match up long-separated elements originally from the same book. In 13th-century book publishing, due to secularization, stationers or libraires emerged. They would receive commissions for texts, which they would contract out to scribes, illustrators, and binders, to whom they supplied materials. Due to the systematic format used for assembly by the libraire, the structure can be used to reconstruct the original order of a manuscript. However, complications can arise in the study of a codex. Manuscripts were frequently rebound, and this resulted in a particular codex incorporating works of different dates and origins, and thus different internal structures. Additionally, a binder could alter or unify these structures to ensure a better fit for the new binding. Completed quires or books of quires might constitute independent book units (booklets), which could be returned to the stationer, or combined with other texts to make anthologies or miscellanies. Exemplars were sometimes divided into quires for simultaneous copying and loaned out to students for study. To facilitate this, catchwords were used: a word at the end of a page providing the next page's first word. See also Grimoire History of books History of scrolls List of codices List of florilegia and botanical codices List of New Testament papyri List of New Testament uncials Traditional Chinese bookbinding Volume (bibliography) Index (publishing) Citations General and cited references External links Centre for the History of the Book The Codex and Canon Consciousness – Draft paper by Robert Kraft on the change from scroll to codex The Construction of the Codex In Classic- and Postclassic-Period Maya Civilization Maya Codex and Paper Making Encyclopaedia Romana: "Scroll and codex" K. C. 
Hanson, Catalogue of New Testament Papyri & Codices, 2nd–10th Centuries Medieval and Renaissance manuscripts, including Vulgates, Breviaries, Contracts, and Herbal Texts from the 12th to 17th century, Center for Digital Initiatives, University of Vermont Libraries 1st-century introductions Books by type Codicology Italian inventions Manuscripts by type
5692
https://en.wikipedia.org/wiki/Calf%20%28animal%29
Calf (animal)
A calf (plural: calves) is a young domestic cow or bull. Calves are reared to become adult cattle or are slaughtered for their meat, called veal, and their hide. The term calf is also used for some other species. See "Other animals" below. Terminology "Calf" is the term used from birth to weaning, when it becomes known as a weaner or weaner calf, though in some areas the term "calf" may be used until the animal is a yearling. The birth of a calf is known as calving. A calf that has lost its mother is an orphan calf, also known as a poddy or poddy-calf in British English. Bobby calves are young calves which are to be slaughtered for human consumption. A vealer is a calf weighing less than about which is at about eight to nine months of age. A young female calf from birth until she has had a calf of her own is called a heifer. In the American Old West, a motherless or small, runty calf was sometimes referred to as a dodie. The term "calf" is also used for some other species. See "Other animals" below. Early development Calves may be produced by natural means, or by artificial breeding using artificial insemination or embryo transfer. Calves are born after a gestation of about nine months. They usually stand within a few minutes of calving, and suckle within an hour. However, for the first few days they are not easily able to keep up with the rest of the herd, so young calves are often left hidden by their mothers, who visit them several times a day to suckle them. By a week old the calf is able to follow the mother all the time. Some calves are ear tagged soon after birth, especially those that are stud cattle, in order to correctly identify their dams (mothers), or in areas (such as the EU) where tagging is a legal requirement for cattle. Typically, when the calves are about two months old, they are branded, ear marked, castrated and vaccinated. Calf rearing systems The single suckler system of rearing calves is similar to that occurring naturally in wild cattle, where each calf is suckled by its own mother until it is weaned at about nine months old. This system is commonly used for rearing beef cattle throughout the world. Cows kept on poor forage (as is typical in subsistence farming) produce a limited amount of milk. A calf left with such a mother all the time can easily drink all the milk, leaving none for human consumption. For dairy production under such circumstances, the calf's access to the cow must be limited, for example by penning the calf and bringing the mother to it once a day after partly milking her. The small amount of milk available for the calf under such systems may mean that it takes a longer time to rear, and in subsistence farming it is therefore common for cows to calve only in alternate years. In more intensive dairy farming, cows can easily be bred and fed to produce far more milk than one calf can drink. In the multi-suckler system, several calves are fostered onto one cow in addition to her own, and these calves' mothers can then be used wholly for milk production. More commonly, calves of dairy cows are fed formula milk from soon after birth, usually from a bottle or bucket. Purebred female calves of dairy cows are reared as replacement dairy cows. Most purebred dairy calves are produced by artificial insemination (AI). By this method each bull can serve many cows, so only a very few of the purebred dairy male calves are needed to provide bulls for breeding. 
The remainder of the male calves may be reared for beef or veal. Only a proportion of purebred heifers are needed to provide replacement cows, so often some of the cows in dairy herds are put to a beef bull to produce crossbred calves suitable for rearing as beef. Veal calves may be reared entirely on milk formula and killed at about 18 or 20 weeks as "white" veal, or fed on grain and hay and killed at 22 to 35 weeks to produce red or pink veal. Growth A commercial steer or bull calf is expected to put on about per month. A nine-month-old steer or bull is therefore expected to weigh about . Heifers will weigh at least at eight months of age. Calves are usually weaned at about eight to nine months of age, but depending on the season and condition of the dam, they might be weaned earlier. They may be paddock weaned, often next to their mothers, or weaned in stockyards. The latter system is preferred by some as it accustoms the weaners to the presence of people and they are trained to take feed other than grass. Small numbers may also be weaned with their dams with the use of weaning nose rings or nosebands, which result in the mothers rejecting the calves' attempts to suckle. Many calves are also weaned when they are taken to the large weaner auction sales that are conducted in the south-eastern states of Australia. Victoria and New South Wales have saleyard numbers of up to 8,000 weaners (calves) for auction sale in one day. The best of these weaners may go to the butchers. Others will be purchased by re-stockers to grow out and fatten on grass or as potential breeders. In the United States these weaners may be known as feeders and would be placed directly into feedlots. At about 12 months old a beef heifer reaches puberty if she is well grown. Diseases Calves suffer from few congenital abnormalities, but the Akabane virus is widely distributed in temperate to tropical regions of the world. The virus is a teratogenic pathogen which causes abortions, stillbirths, premature births and congenital abnormalities, but occurs only during some years. Calves commonly face on-farm acquired diseases, often of an infectious nature. Preweaned calves most commonly experience conditions such as diarrhea, omphalitis, lameness and respiratory diseases. Diarrhea, omphalitis and lameness are most common in calves aged up to two weeks, while the frequency of respiratory diseases tends to increase with age. These conditions also display seasonal patterns, with omphalitis being more common in the summer months, and respiratory diseases and diarrhea occurring more frequently in the fall. Uses Calf meat for human consumption is called veal, and is usually produced from the male calves of dairy cattle. Also eaten are calf's brains and calf liver. The hide is used to make calfskin, or tanned into leather and called calf leather, or sometimes in the US "novillo", the Spanish term. The fourth compartment of the stomach of slaughtered milk-fed calves is the source of rennet. The intestine is used to make goldbeater's skin, and is the source of calf intestinal alkaline phosphatase (CIP). Dairy cows can only produce milk after having calved, and dairy cows need to produce one calf each year in order to remain in production. Female calves will become replacement dairy cows. Male dairy calves are generally reared for beef or veal; relatively few are kept for breeding purposes. Other animals In English the term "calf" is used by extension for the young of various other large species of mammal. 
In addition to other bovid species (such as bison, yak and water buffalo), these include the young of camels, dolphins, elephants, giraffes, hippopotamuses, deer (such as moose, elk (wapiti) and red deer), rhinoceroses, porpoises, whales, walruses and larger seals. (Generally, the adult males of these same species are called "bulls" and the adult females "cows".) However, common domestic species tend to have their own specific names, such as lamb, foal (used for all Equidae), or piglet (used for all Suidae). References External links Weaning-beef-calves Calving on Ropin' the Web, Agriculture and Food, Government of Alberta, Canada Winter Feeding Sites and Calf Scours, Kansas State University Cattle Vertebrate developmental biology Articles containing video clips
5693
https://en.wikipedia.org/wiki/Claude%20Shannon
Claude Shannon
Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, computer scientist and cryptographer known as the "father of information theory". He is credited alongside George Boole for laying the foundations of the Information Age. As a 21-year-old master's degree student at the Massachusetts Institute of Technology (MIT), he wrote his thesis demonstrating that electrical applications of Boolean algebra could construct any logical numerical relationship. Shannon contributed to the field of cryptanalysis for national defense of the United States during World War II, including his fundamental work on codebreaking and secure telecommunications, writing a paper which would be considered one of the foundational pieces of modern cryptography. His mathematical theory of information laid the foundations for the field of information theory, with his famous paper being called the "Magna Carta of the Information Age" by Scientific American. He also made contributions to artificial intelligence. His achievements are said to be on par with those of Albert Einstein and Alan Turing in their fields. Biography Childhood The Shannon family lived in Gaylord, Michigan, and Claude was born in a hospital in nearby Petoskey. His father, Claude Sr. (1862–1934), was a businessman and, for a while, a judge of probate in Gaylord. His mother, Mabel Wolf Shannon (1890–1945), was a language teacher, who also served as the principal of Gaylord High School. Claude Sr. was a descendant of New Jersey settlers, while Mabel was a child of German immigrants. Shannon's family was active in their Methodist Church during his youth. Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things. His best subjects were science and mathematics. At home, he constructed such devices as models of planes, a radio-controlled model boat and a barbed-wire telegraph system to a friend's house a half-mile away. While growing up, he also worked as a messenger for the Western Union company. Shannon's childhood hero was Thomas Edison, whom he later learned was a distant cousin. Both Shannon and Edison were descendants of John Ogden (1609–1682), a colonial leader and an ancestor of many distinguished people. Logic circuits In 1932, Shannon entered the University of Michigan, where he was introduced to the work of George Boole. He graduated in 1936 with two bachelor's degrees: one in electrical engineering and the other in mathematics. In 1936, Shannon began his graduate studies in electrical engineering at MIT, where he worked on Vannevar Bush's differential analyzer, an early analog computer. While studying the complicated ad hoc circuits of this analyzer, Shannon designed switching circuits based on Boole's concepts. In 1937, he wrote his master's degree thesis, A Symbolic Analysis of Relay and Switching Circuits. A paper from this thesis was published in 1938. In this work, Shannon proved that his switching circuits could be used to simplify the arrangement of the electromechanical relays that were used during that time in telephone call routing switches. Next, he expanded this concept, proving that these circuits could solve all problems that Boolean algebra could solve. In the last chapter, he presented diagrams of several circuits, including a 4-bit full adder. 
Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Shannon's work became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II. The theoretical rigor of Shannon's work superseded the ad hoc methods that had prevailed previously. Howard Gardner called Shannon's thesis "possibly the most important, and also the most noted, master's thesis of the century." Shannon received his PhD in mathematics from MIT in 1940. Vannevar Bush had suggested that Shannon should work on his dissertation at the Cold Spring Harbor Laboratory, in order to develop a mathematical formulation for Mendelian genetics. This research resulted in Shannon's PhD thesis, called An Algebra for Theoretical Genetics. In 1940, Shannon became a National Research Fellow at the Institute for Advanced Study in Princeton, New Jersey. In Princeton, Shannon had the opportunity to discuss his ideas with influential scientists and mathematicians such as Hermann Weyl and John von Neumann, and he also had occasional encounters with Albert Einstein and Kurt Gödel. Shannon worked freely across disciplines, and this ability may have contributed to his later development of mathematical information theory. Wartime research Shannon then joined Bell Labs to work on fire-control systems and cryptography during World War II, under a contract with section D-2 (Control Systems section) of the National Defense Research Committee (NDRC). Shannon is credited with the invention of signal-flow graphs, in 1942. He discovered the topological gain formula while investigating the functional operation of an analog computer. For two months early in 1943, Shannon came into contact with the leading British mathematician Alan Turing. Turing had been posted to Washington to share with the U.S. Navy's cryptanalytic service the methods used by the British Government Code and Cypher School at Bletchley Park to break the cyphers used by the Kriegsmarine U-boats in the north Atlantic Ocean. He was also interested in the encipherment of speech and to this end spent time at Bell Labs. Shannon and Turing met at teatime in the cafeteria. Turing showed Shannon his 1936 paper that defined what is now known as the "universal Turing machine". This impressed Shannon, as many of its ideas complemented his own. In 1945, as the war was coming to an end, the NDRC was issuing a summary of technical reports as a last step prior to its eventual closing down. Inside the volume on fire control, a special essay titled Data Smoothing and Prediction in Fire-Control Systems, coauthored by Shannon, Ralph Beebe Blackman, and Hendrik Wade Bode, formally treated the problem of smoothing the data in fire-control by analogy with "the problem of separating a signal from interfering noise in communications systems." In other words, it modeled the problem in terms of data and signal processing and thus heralded the coming of the Information Age. Shannon's work on cryptography was even more closely related to his later publications on communication theory. At the close of the war, he prepared a classified memorandum for Bell Telephone Labs entitled "A Mathematical Theory of Cryptography", dated September 1945. A declassified version of this paper was published in 1949 as "Communication Theory of Secrecy Systems" in the Bell System Technical Journal. 
This paper incorporated many of the concepts and mathematical formulations that also appeared in his A Mathematical Theory of Communication. Shannon said that his wartime insights into communication theory and cryptography developed simultaneously, and that "they were so close together you couldn't separate them". In a footnote near the beginning of the classified report, Shannon announced his intention to "develop these results … in a forthcoming memorandum on the transmission of information." While he was at Bell Labs, Shannon proved, in classified research later published in 1949, that the cryptographic one-time pad is unbreakable. The same article also proved that any unbreakable system must have essentially the same characteristics as the one-time pad: the key must be truly random, as large as the plaintext, never reused in whole or part, and kept secret. Information theory In 1948, the promised memorandum appeared as "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal. This work focuses on the problem of how best to encode the message a sender wants to transmit. Shannon developed information entropy as a measure of the information content in a message, which is a measure of uncertainty reduced by the message (see the short illustrative sketch below). In so doing, he essentially invented the field of information theory. The book The Mathematical Theory of Communication reprints Shannon's 1948 article and Warren Weaver's popularization of it, which is accessible to the non-specialist. Weaver pointed out that the word "information" in communication theory is not related to what you do say, but to what you could say. That is, information is a measure of one's freedom of choice when one selects a message. Shannon's concepts were also popularized, subject to his own proofreading, in John Robinson Pierce's Symbols, Signals, and Noise. Information theory's fundamental contribution to natural language processing and computational linguistics was further established in 1951, in his article "Prediction and Entropy of Printed English", showing upper and lower bounds of entropy on the statistics of English – giving a statistical foundation to language analysis. In addition, he showed that treating the space as the 27th letter of the alphabet actually lowers uncertainty in written language, providing a clear quantifiable link between cultural practice and probabilistic cognition. Another notable paper published in 1949 is "Communication Theory of Secrecy Systems", a declassified version of his wartime work on the mathematical theory of cryptography, in which he proved that all theoretically unbreakable cyphers must have the same requirements as the one-time pad. He is also credited with the introduction of sampling theory, which is concerned with representing a continuous-time signal from a (uniform) discrete set of samples. This theory was essential in enabling telecommunications to move from analog to digital transmission systems in the 1960s and later. He returned to MIT to hold an endowed chair in 1956. Teaching at MIT In 1956 Shannon joined the MIT faculty to work in the Research Laboratory of Electronics (RLE). He continued to serve on the MIT faculty until 1978. Later life Shannon developed Alzheimer's disease and spent the last few years of his life in a nursing home; he died in 2001, survived by his wife, a son and daughter, and two granddaughters. 
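The entropy measure introduced in the information-theory section above can be made concrete with a brief sketch. This is not code from Shannon or from any source cited here; the function name and the example distributions are purely illustrative, and the only element taken from the text is Shannon's definition of entropy in bits.

```python
import math

def shannon_entropy(probabilities):
    """Return H = -sum(p * log2(p)), in bits, for a discrete distribution.

    Follows Shannon's 1948 definition; outcomes with probability 0
    contribute nothing to the sum.
    """
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin is maximally uncertain: each toss carries 1 bit of information.
print(shannon_entropy([0.5, 0.5]))   # 1.0
# A heavily biased coin is more predictable, so each toss carries less.
print(shannon_entropy([0.9, 0.1]))   # about 0.47
```

The lower value for the biased coin reflects Weaver's point above: information here measures freedom of choice (uncertainty) in selecting a message, not its meaning.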
Hobbies and inventions Outside of Shannon's academic pursuits, he was interested in juggling, unicycling, and chess. He also invented many devices, including a Roman numeral computer called THROBAC, and juggling machines. He built a device that could solve the Rubik's Cube puzzle. Shannon designed the Minivac 601, a digital computer trainer to teach business people about how computers functioned. It was sold by the Scientific Development Corp starting in 1961. He is also considered the co-inventor of the first wearable computer along with Edward O. Thorp. The device was used to improve the odds when playing roulette. Personal life Shannon married Norma Levor, a wealthy, Jewish, left-wing intellectual in January 1940. The marriage ended in divorce after about a year. Levor later married Ben Barzman. Shannon met his second wife, Betty Shannon (née Mary Elizabeth Moore), when she was a numerical analyst at Bell Labs. They were married in 1949. Betty assisted Claude in building some of his most famous inventions. They had three children. Shannon presented himself as apolitical and an atheist. Tributes There are six statues of Shannon sculpted by Eugene Daub: one at the University of Michigan; one at MIT in the Laboratory for Information and Decision Systems; one in Gaylord, Michigan; one at the University of California, San Diego; one at Bell Labs; and another at AT&T Shannon Labs. The statue in Gaylord is located in the Claude Shannon Memorial Park. After the breakup of the Bell System, the part of Bell Labs that remained with AT&T Corporation was named Shannon Labs in his honor. According to Neil Sloane, an AT&T Fellow who co-edited Shannon's large collection of papers in 1993, the perspective introduced by Shannon's communication theory (now called information theory) is the foundation of the digital revolution, and every device containing a microprocessor or microcontroller is a conceptual descendant of Shannon's publication in 1948: "He's one of the great men of the century. Without him, none of the things we know today would exist. The whole digital revolution started with him." The cryptocurrency unit shannon (a synonym for gwei) is named after him. A Mind at Play, a biography of Shannon written by Jimmy Soni and Rob Goodman, was published in 2017. They described Shannon as "the most important genius you’ve never heard of, a man whose intellect was on par with Albert Einstein and Isaac Newton". On April 30, 2016, Shannon was honored with a Google Doodle to celebrate his life on what would have been his 100th birthday. The Bit Player, a feature film about Shannon directed by Mark Levinson premiered at the World Science Festival in 2019. Drawn from interviews conducted with Shannon in his house in the 1980s, the film was released on Amazon Prime in August 2020. The Mathematical Theory of Communication Weaver's Contribution Shannon's The Mathematical Theory of Communication, begins with an interpretation of his own work by Warren Weaver. Although Shannon's entire work is about communication itself, Warren Weaver communicated his ideas in such a way that those not acclimated to complex theory and mathematics could comprehend the fundamental laws he put forth. The coupling of their unique communicational abilities and ideas generated the Shannon-Weaver model, although the mathematical and theoretical underpinnings emanate entirely from Shannon's work after Weaver's introduction. 
For the layman, Weaver's introduction better communicates The Mathematical Theory of Communication, but Shannon's subsequent logic, mathematics, and expressive precision were responsible for defining the problem itself. Other work Shannon's mouse "Theseus", created in 1950, was a mechanical mouse controlled by an electromechanical relay circuit that enabled it to move around a labyrinth of 25 squares. The maze configuration was flexible and it could be modified arbitrarily by rearranging movable partitions. The mouse was designed to search through the corridors until it found the target. Having travelled through the maze, the mouse could then be placed anywhere it had been before, and because of its prior experience it could go directly to the target. If placed in unfamiliar territory, it was programmed to search until it reached a known location and then it would proceed to the target, adding the new knowledge to its memory and learning new behavior. Shannon's mouse appears to have been the first artificial learning device of its kind. Shannon's estimate for the complexity of chess In 1949 Shannon completed a paper (published in March 1950) which estimates the game-tree complexity of chess to be approximately 10^120. This number is now often referred to as the "Shannon number", and is still regarded today as an accurate estimate of the game's complexity. The number is often cited as one of the barriers to solving the game of chess using an exhaustive analysis (i.e. brute force analysis). Shannon's computer chess program On March 9, 1949, Shannon presented a paper called "Programming a Computer for Playing Chess". The paper was presented at the National Institute of Radio Engineers Convention in New York. He described how to program a computer to play chess based on position scoring and move selection. He proposed basic strategies for restricting the number of possibilities to be considered in a game of chess. In March 1950 it was published in Philosophical Magazine, and is considered one of the first articles published on the topic of programming a computer for playing chess, and using a computer to solve the game. His process for having the computer decide on which move to make was a minimax procedure, based on an evaluation function of a given chess position. Shannon gave a rough example of an evaluation function in which the value of the black position was subtracted from that of the white position. Material was counted according to the usual chess piece relative value (1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen). He considered some positional factors, subtracting ½ point for each doubled pawn, backward pawn, and isolated pawn; mobility was incorporated by adding 0.1 point for each legal move available (a short illustrative sketch of such a scoring function follows below). Shannon's maxim Shannon formulated a version of Kerckhoffs' principle as "The enemy knows the system". In this form it is known as "Shannon's maxim". Commemorations Shannon centenary The Shannon centenary, 2016, marked the life and influence of Claude Elwood Shannon on the hundredth anniversary of his birth on April 30, 1916. It was inspired in part by the Alan Turing Year. An ad hoc committee of the IEEE Information Theory Society, including Christina Fragouli, Rüdiger Urbanke, Michelle Effros, Lav Varshney and Sergio Verdú, coordinated worldwide events. The initiative was announced in the History Panel at the 2015 IEEE Information Theory Workshop Jerusalem and the IEEE Information Theory Society newsletter. 
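The kind of evaluation function described in the computer-chess section above can be sketched as follows. This is not Shannon's program: the input format and pre-computed feature counts are hypothetical simplifications, and only the weights quoted in the text (material values 1/3/3/5/9, minus 0.5 per doubled, backward or isolated pawn, plus 0.1 per legal move, White minus Black) are taken from the article.

```python
# Illustrative only; piece letters and the feature dictionaries are assumed inputs.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(white, black):
    """Score a position from White's point of view, in the spirit of
    Shannon's 1950 paper: White's material and positional terms minus Black's.

    Each side is a dict with pre-computed features:
      'pieces'   -> piece letter -> count (kings omitted)
      'weak'     -> combined count of doubled, backward and isolated pawns
      'mobility' -> number of legal moves available
    """
    def side_score(side):
        material = sum(PIECE_VALUES[p] * n for p, n in side["pieces"].items())
        return material - 0.5 * side["weak"] + 0.1 * side["mobility"]

    return side_score(white) - side_score(black)

# Equal material, but White has a sounder pawn structure and more mobility.
white = {"pieces": {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1}, "weak": 0, "mobility": 30}
black = {"pieces": {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1}, "weak": 2, "mobility": 22}
print(round(evaluate(white, black), 2))  # 1.8 in White's favour
```

In Shannon's proposal this static score is only one half of the method; it is combined with a minimax search over candidate moves, which the sketch does not attempt to cover.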
A detailed listing of confirmed events was available on the website of the IEEE Information Theory Society. Some of the planned activities included: Bell Labs hosted the First Shannon Conference on the Future of the Information Age on April 28–29, 2016, in Murray Hill, New Jersey, to celebrate Claude Shannon and the continued impact of his legacy on society. The event includes keynote speeches by global luminaries and visionaries of the information age who will explore the impact of information theory on society and our digital future, informal recollections, and leading technical presentations on subsequent related work in other areas such as bioinformatics, economic systems, and social networks. There is also a student competition Bell Labs launched a Web exhibit on April 30, 2016, chronicling Shannon's hiring at Bell Labs (under an NDRC contract with US Government), his subsequent work there from 1942 through 1957, and details of Mathematics Department. The exhibit also displayed bios of colleagues and managers during his tenure, as well as original versions of some of the technical memoranda which subsequently became well known in published form. The Republic of Macedonia is planning a commemorative stamp. A USPS commemorative stamp is being proposed, with an active petition. A documentary on Claude Shannon and on the impact of information theory, The Bit Player, is being produced by Sergio Verdú and Mark Levinson. A trans-Atlantic celebration of both George Boole's bicentenary and Claude Shannon's centenary that is being led by University College Cork and the Massachusetts Institute of Technology. A first event was a workshop in Cork, When Boole Meets Shannon, and will continue with exhibits at the Boston Museum of Science and at the MIT Museum. Many organizations around the world are holding observance events, including the Boston Museum of Science, the Heinz-Nixdorf Museum, the Institute for Advanced Study, Technische Universität Berlin, University of South Australia (UniSA), Unicamp (Universidade Estadual de Campinas), University of Toronto, Chinese University of Hong Kong, Cairo University, Telecom ParisTech, National Technical University of Athens, Indian Institute of Science, Indian Institute of Technology Bombay, Indian Institute of Technology Kanpur, Nanyang Technological University of Singapore, University of Maryland, University of Illinois at Chicago, École Polytechnique Federale de Lausanne, The Pennsylvania State University (Penn State), University of California Los Angeles, Massachusetts Institute of Technology, Chongqing University of Posts and Telecommunications, and University of Illinois at Urbana-Champaign. A logo that appears on this page was crowdsourced on Crowdspring. The Math Encounters presentation of May 4, 2016, at the National Museum of Mathematics in New York, titled Saving Face: Information Tricks for Love and Life, focused on Shannon's work in information theory. A video recording and other material are available. Awards and honors list The Claude E. Shannon Award was established in his honor; he was also its first recipient, in 1972. Stuart Ballantine Medal of the Franklin Institute, 1955 Member of the American Academy of Arts and Sciences, 1957 Harvey Prize, the Technion of Haifa, Israel, 1972 Alfred Noble Prize, 1939 (award of civil engineering societies in the US) National Medal of Science, 1966, presented by President Lyndon B. 
Johnson Kyoto Prize, 1985 Morris Liebmann Memorial Prize of the Institute of Radio Engineers, 1949 United States National Academy of Sciences, 1956 Medal of Honor of the Institute of Electrical and Electronics Engineers, 1966 Golden Plate Award of the American Academy of Achievement, 1967 Royal Netherlands Academy of Arts and Sciences (KNAW), foreign member, 1975 Member of the American Philosophical Society, 1983 Basic Research Award, Eduard Rhein Foundation, Germany, 1991 Marconi Society Lifetime Achievement Award, 2000 Donnor Professor of Science, MIT, 1958–1979 Selected works Claude E. Shannon: A Symbolic Analysis of Relay and Switching Circuits, master's thesis, MIT, 1937. Claude E. Shannon: "A Mathematical Theory of Communication", Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, 1948 (abstract). Claude E. Shannon and Warren Weaver: The Mathematical Theory of Communication. The University of Illinois Press, Urbana, Illinois, 1949. See also Entropy power inequality Error-correcting codes with feedback List of pioneers in computer science Models of communication n-gram Noisy channel coding theorem Nyquist–Shannon sampling theorem One-time pad Product cipher Pulse-code modulation Rate distortion theory Sampling Shannon capacity Shannon entropy Shannon index Shannon multigraph Shannon security Shannon switching game Shannon–Fano coding Shannon–Hartley law Shannon–Hartley theorem Shannon's expansion Shannon's source coding theorem Shannon-Weaver model of communication Whittaker–Shannon interpolation formula References Further reading Rethnakaran Pulikkoonattu — Eric W. Weisstein: Mathworld biography of Shannon, Claude Elwood (1916–2001) Shannon, Claude Elwood (1916–2001) – from Eric Weisstein's World of Scientific Biography Claude E. Shannon: Programming a Computer for Playing Chess, Philosophical Magazine, Ser.7, Vol. 41, No. 314, March 1950. (Available online under External links below) David Levy: Computer Gamesmanship: Elements of Intelligent Game Design, Simon & Schuster, 1983. Mindell, David A., "Automation's Finest Hour: Bell Labs and Automatic Control in World War II", IEEE Control Systems, December 1995, pp. 72–80. Poundstone, William, Fortune's Formula, Hill & Wang, 2005, Gleick, James, The Information: A History, A Theory, A Flood, Pantheon, 2011, Jimmy Soni and Rob Goodman, A Mind at Play: How Claude Shannon Invented the Information Age, Simon and Schuster, 2017, Nahin, Paul J., The Logician and the Engineer: How George Boole and Claude Shannon Create the Information Age, Princeton University Press, 2013, Everett M. Rogers, Claude Shannon's Cryptography Research During World War II and the Mathematical Theory of Communication, 1994 Proceedings of IEEE International Carnahan Conference on Security Technology, pp. 1–5, 1994. 
Claude Shannon's cryptography research during World War II and the mathematical theory of communication External links Guide to the Claude Elwood Shannon papers at the Library of Congress Claude Elwood Shannon (1916–2001) at the Notices of the American Mathematical Society 1916 births 2001 deaths 20th-century American engineers 20th-century American essayists 20th-century American male writers 20th-century American mathematicians 20th-century American non-fiction writers 20th-century atheists 21st-century atheists American atheists American electronics engineers American geneticists American information theorists American male essayists American male non-fiction writers American people of World War II Burials at Mount Auburn Cemetery Combinatorial game theorists Communication theorists Computer chess people Control theorists Deaths from Alzheimer's disease Foreign Members of the Royal Society Harvey Prize winners IEEE Medal of Honor recipients Information theory Institute for Advanced Study visiting scholars Internet pioneers Jugglers Kyoto laureates in Basic Sciences Massachusetts Institute of Technology alumni Members of the Royal Netherlands Academy of Arts and Sciences Members of the United States National Academy of Sciences MIT School of Engineering faculty Modern cryptographers National Medal of Science laureates Neurological disease deaths in Massachusetts People from Petoskey, Michigan People of the Cold War Pre-computer cryptographers Probability theorists Scientists at Bell Labs Unicyclists University of Michigan alumni Members of the American Philosophical Society Scientists from Michigan Mathematicians from Michigan
5694
https://en.wikipedia.org/wiki/Cracking
Cracking
Cracking may refer to: Cracking, the formation of a fracture or partial fracture in a solid material studied as fracture mechanics Performing a sternotomy Fluid catalytic cracking, a catalytic process widely used in oil refineries for cracking large hydrocarbon molecules into smaller molecules Cracking (chemistry), the decomposition of complex organic molecules into smaller ones Cracking joints, the practice of manipulating one's bone joints to make a sharp sound Cracking codes, see cryptanalysis Whip cracking Safe cracking Crackin, band featuring Lester Abrams Packing and cracking, a method of creating voting districts to give a political party an advantage In computing: Another name for security hacking, the practice of defeating computer security. Password cracking, the process of discovering the plaintext of an encrypted computer password. Software cracking, the defeating of software copy protection. See also Crack (disambiguation) Cracker (disambiguation) Cracklings (solid material remaining after rendering fat) Cracker (pejorative)
5695
https://en.wikipedia.org/wiki/Community
Community
A community is a social unit (a group of living things) with a shared socially significant characteristic, such as place, set of norms, culture, religion, values, customs, or identity. Communities may share a sense of place situated in a given geographical area (e.g. a country, village, town, or neighbourhood) or in virtual space through communication platforms. Durable good relations that extend beyond immediate genealogical ties also define a sense of community, important to their identity, practice, and roles in social institutions such as family, home, work, government, TV network, society, or humanity at large. Although communities are usually small relative to personal social ties, "community" may also refer to large group affiliations such as national communities, international communities, and virtual communities. The English-language word "community" derives from the Old French (Modern French: ), which comes from the Latin communitas "community", "public spirit" (from Latin communis, "common"). Human communities may have intent, belief, resources, preferences, needs, and risks in common, affecting the identity of the participants and their degree of cohesiveness. Perspectives of various disciplines Archaeology Archaeological studies of social communities use the term "community" in two ways, paralleling usage in other areas. The first is an informal definition of community as a place where people used to live. In this sense it is synonymous with the concept of an ancient settlement—whether a hamlet, village, town, or city. The second meaning resembles the usage of the term in other social sciences: a community is a group of people living near one another who interact socially. Social interaction on a small scale can be difficult to identify with archaeological data. Most reconstructions of social communities by archaeologists rely on the principle that social interaction in the past was conditioned by physical distance. Therefore, a small village settlement likely constituted a social community and spatial subdivisions of cities and other large settlements may have formed communities. Archaeologists typically use similarities in material culture—from house types to styles of pottery—to reconstruct communities in the past. This classification method relies on the assumption that people or households will share more similarities in the types and styles of their material goods with other members of a social community than they will with outsiders. Sociology Ecology In ecology, a community is an assemblage of populations—potentially of different species—interacting with one another. Community ecology is the branch of ecology that studies interactions between and among species. It considers how such interactions, along with interactions between species and the abiotic environment, affect social structure and species richness, diversity and patterns of abundance. Species interact in three ways: competition, predation and mutualism: Competition typically results in a double negative—that is both species lose in the interaction. Predation involves a win/lose situation, with one species winning. Mutualism sees both species co-operating in some way, with both winning. The two main types of ecological communities are major communities, which are self-sustaining and self-regulating (such as a forest or a lake), and minor communities, which rely on other communities (like fungi decomposing a log) and are the building blocks of major communities. 
Moreover, we can establish other non-taxonomic subdivisions of biocenosis, such as guilds. Semantics The concept of "community" often has a positive semantic connotation, exploited rhetorically by populist politicians and by advertisers to promote feelings and associations of mutual well-being, happiness and togetherness, veering towards an almost-achievable utopian community. In contrast, the epidemiological term "community transmission" can have negative implications, and instead of a "criminal community" one often speaks of a "criminal underworld" or of the "criminal fraternity". Key concepts Gemeinschaft and Gesellschaft In Gemeinschaft und Gesellschaft (1887), German sociologist Ferdinand Tönnies described two types of human association: Gemeinschaft (usually translated as "community") and Gesellschaft ("society" or "association"). Tönnies proposed the Gemeinschaft–Gesellschaft dichotomy as a way to think about social ties. No group is exclusively one or the other. Gemeinschaften stress personal social interactions, and the roles, values, and beliefs based on such interactions. Gesellschaften stress indirect interactions, impersonal roles, formal values, and beliefs based on such interactions. Sense of community In a seminal 1986 study, McMillan and Chavis identify four elements of "sense of community": membership (a feeling of belonging or of sharing a sense of personal relatedness); influence (mattering, making a difference to a group and of the group mattering to its members); reinforcement (integration and fulfillment of needs); and shared emotional connection. A "sense of community index" (SCI) was developed by Chavis and colleagues, and revised and adapted by others. Although originally designed to assess sense of community in neighborhoods, the index has been adapted for use in schools, the workplace, and a variety of types of communities. Studies conducted by the APPA indicate that young adults who feel a sense of belonging in a community, particularly small communities, develop fewer psychiatric and depressive disorders than those who do not have the feeling of love and belonging. Socialization The process of learning to adopt the behavior patterns of the community is called socialization. The most fertile time of socialization is usually the early stages of life, during which individuals develop the skills and knowledge and learn the roles necessary to function within their culture and social environment. For some psychologists, especially those in the psychodynamic tradition, the most important period of socialization is between the ages of one and ten. But socialization also includes adults moving into a significantly different environment where they must learn a new set of behaviors. Socialization is influenced primarily by the family, through which children first learn community norms. Other important influences include schools, peer groups, people, mass media, the workplace, and government. The degree to which the norms of a particular society or community are adopted determines one's willingness to engage with others. The norms of tolerance, reciprocity, and trust are important "habits of the heart", as de Tocqueville put it, in an individual's involvement in community. Community development Community development is often linked with community work or community planning, and may involve stakeholders, foundations, governments, or contracted entities including non-government organisations (NGOs), universities or government agencies to progress the social well-being of local, regional and, sometimes, national communities. 
More grassroots efforts, called community building or community organizing, seek to empower individuals and groups of people by providing them with the skills they need to effect change in their own communities. These skills often assist in building political power through the formation of large social groups working for a common agenda. Community development practitioners must understand both how to work with individuals and how to affect communities' positions within the context of larger social institutions. Public administrators, in contrast, need to understand community development in the context of rural and urban development, housing and economic development, and community, organizational and business development. Formal accredited programs conducted by universities, as part of degree granting institutions, are often used to build a knowledge base to drive curricula in public administration, sociology and community studies. The General Social Survey from the National Opinion Research Center at the University of Chicago and the Saguaro Seminar at the Harvard Kennedy School are examples of national community development in the United States. The Maxwell School of Citizenship and Public Affairs at Syracuse University in New York State offers core courses in community and economic development, and in areas ranging from non-profit development to US budgeting (federal to local, community funds). In the United Kingdom, the University of Oxford has led in providing extensive research in the field through its Community Development Journal, used worldwide by sociologists and community development practitioners. At the intersection between community development and community building are a number of programs and organizations with community development tools. One example of this is the program of the Asset Based Community Development Institute of Northwestern University. The institute makes available downloadable tools to assess community assets and make connections between non-profit groups and other organizations that can help in community building. The Institute focuses on helping communities develop by "mobilizing neighborhood assets" – building from the inside out rather than the outside in. In the disability field, community building was prevalent in the 1980s and 1990s with roots in John McKnight's approaches. Community building and organizing In The Different Drum: Community-Making and Peace (1987) Scott Peck argues that the almost accidental sense of community that exists at times of crisis can be consciously built. Peck believes that conscious community building is a process of deliberate design based on the knowledge and application of certain rules. He states that this process goes through four stages: Pseudocommunity: When people first come together, they try to be "nice" and present what they feel are their most personable and friendly characteristics. Chaos: People move beyond the inauthenticity of pseudo-community and feel safe enough to present their "shadow" selves. Emptiness: Moves beyond the attempts to fix, heal and convert of the chaos stage, when all people become capable of acknowledging their own woundedness and brokenness, common to human beings. True community: Deep respect and true listening for the needs of the other people in this community. In 1991, Peck remarked that building a sense of community is easy but maintaining this sense of community is difficult in the modern world. An interview with M. Scott Peck by Alan Atkisson. In Context #29, p. 26. 
The three basic types of community organizing are grassroots organizing, coalition building, and "institution-based community organizing", (also called "broad-based community organizing", an example of which is faith-based community organizing, or Congregation-based Community Organizing). Community building can use a wide variety of practices, ranging from simple events (e.g., potlucks, small book clubs) to larger-scale efforts (e.g., mass festivals, construction projects that involve local participants rather than outside contractors). Community building that is geared toward citizen action is usually termed "community organizing". In these cases, organized community groups seek accountability from elected officials and increased direct representation within decision-making bodies. Where good-faith negotiations fail, these constituency-led organizations seek to pressure the decision-makers through a variety of means, including picketing, boycotting, sit-ins, petitioning, and electoral politics. Community organizing can focus on more than just resolving specific issues. Organizing often means building a widely accessible power structure, often with the end goal of distributing power equally throughout the community. Community organizers generally seek to build groups that are open and democratic in governance. Such groups facilitate and encourage consensus decision-making with a focus on the general health of the community rather than a specific interest group. If communities are developed based on something they share in common, whether location or values, then one challenge for developing communities is how to incorporate individuality and differences. Rebekah Nathan suggests in her book, My Freshman Year, we are drawn to developing communities totally based on sameness, despite stated commitments to diversity, such as those found on university websites. Types of community A number of ways to categorize types of community have been proposed. One such breakdown is as follows: Location-based Communities: range from the local neighbourhood, suburb, village, town or city, region, nation or even the planet as a whole. These are also called communities of place. Identity-based Communities: range from the local clique, sub-culture, ethnic group, religious, multicultural or pluralistic civilisation, or the global community cultures of today. They may be included as communities of need or identity, such as disabled persons, or frail aged people. Organizationally-based Communities: range from communities organized informally around family or network-based guilds and associations to more formal incorporated associations, political decision-making structures, economic enterprises, or professional associations at a small, national or international scale. Intentional Communities: a mix of all three previous types, these are highly cohesive residential communities with a common social or spiritual purpose, ranging from monasteries and ashrams to modern ecovillages and housing cooperatives. The usual categorizations of community relations have a number of problems: (1) they tend to give the impression that a particular community can be defined as just this kind or another; (2) they tend to conflate modern and customary community relations; (3) they tend to take sociological categories such as ethnicity or race as given, forgetting that different ethnically defined persons live in different kinds of communities—grounded, interest-based, diasporic, etc. 
In response to these problems, Paul James and his colleagues have developed a taxonomy that maps community relations, and recognizes that actual communities can be characterized by different kinds of relations at the same time: Grounded community relations. This involves enduring attachment to particular places and particular people. It is the dominant form taken by customary and tribal communities. In these kinds of communities, the land is fundamental to identity. Life-style community relations. This involves giving primacy to communities coming together around particular chosen ways of life, such as morally charged or interest-based relations or just living or working in the same location. Hence the following sub-forms: community-life as morally bounded, a form taken by many traditional faith-based communities. community-life as interest-based, including sporting, leisure-based and business communities which come together for regular moments of engagement. community-life as proximately-related, where neighbourhood or commonality of association forms a community of convenience, or a community of place (see below). Projected community relations. This is where a community is self-consciously treated as an entity to be projected and re-created. It can be projected through something as thin as an advertising slogan, for example a gated community, or can take the form of ongoing associations of people who seek political integration, communities of practice based on professional projects, associative communities which seek to enhance and support individual creativity, autonomy and mutuality. A nation is one of the largest forms of projected or imagined community. In these terms, communities can be nested and/or intersecting; one community can contain another—for example a location-based community may contain a number of ethnic communities. Both lists above can be used in a cross-cutting matrix in relation to each other. Internet communities In general, virtual communities value knowledge and information as currency or social resource. What differentiates virtual communities from their physical counterparts is the extent and impact of "weak ties", which are the relationships acquaintances or strangers form to acquire information through online networks. Relationships among members in a virtual community tend to focus on information exchange about specific topics. A survey conducted by the Pew Internet & American Life Project in 2001 found that those involved in entertainment, professional, and sports virtual groups focused their activities on obtaining information. An epidemic of bullying and harassment has arisen from the exchange of information between strangers, especially among teenagers, in virtual communities. Despite attempts to implement anti-bullying policies, Sheri Bauman, professor of counselling at the University of Arizona, claims the "most effective strategies to prevent bullying" may cost companies revenue. Virtual Internet-mediated communities can interact with offline real-life activity, potentially forming strong and tight-knit groups such as QAnon. See also Circles of Sustainability Communitarianism Community theatre Engaged theory Outline of community Wikipedia community Notes References Barzilai, Gad. 2003. Communities and Law: Politics and Cultures of Legal Identities. Ann Arbor: University of Michigan Press. Beck, U. 1992. Risk Society: Towards a New Modernity. London: Sage. Beck, U. 2000. What Is Globalization? Cambridge: Polity Press. Chavis, D.M., Hogge, J.H., McMillan, D.W., & Wandersman, A. 1986.
"Sense of community through Brunswick's lens: A first look." Journal of Community Psychology, 14(1), 24–40. Chipuer, H.M., & Pretty, G.M.H. (1999). A review of the Sense of Community Index: Current uses, factor structure, reliability, and further development. Journal of Community Psychology, 27(6), 643–658. Christensen, K., et al. (2003). Encyclopedia of Community. 4 volumes. Thousand Oaks, CA: Sage. Cohen, A. P. 1985. The Symbolic Construction of Community. Routledge: New York. Durkheim, Émile. 1950 [1895] The Rules of Sociological Method. Translated by S.A. Solovay and J.H. Mueller. New York: The Free Press. Cox, F., J. Erlich, J. Rothman, and J. Tropman. 1970. Strategies of Community Organization: A Book of Readings. Itasca, IL: F.E. Peacock Publishers. Effland, R. 1998. The Cultural Evolution of Civilizations Mesa Community College. Giddens, A. 1999. "Risk and Responsibility" Modern Law Review 62(1): 1–10. Lenski, G. 1974. Human Societies: An Introduction to Macrosociology. New York: McGraw-Hill, Inc. Long, D.A., & Perkins, D.D. (2003). Confirmatory Factor Analysis of the Sense of Community Index and Development of a Brief SCI. Journal of Community Psychology, 31, 279–296. Lyall, Scott, ed. (2016). Community in Modern Scottish Literature. Brill | Rodopi: Leiden | Boston. Nancy, Jean-Luc. La Communauté désœuvrée – philosophical questioning of the concept of community and the possibility of encountering a non-subjective concept of it Newman, D. 2005. Sociology: Exploring the Architecture of Everyday Life, Chapter 5. "Building Identity: Socialization" Pine Forge Press. Retrieved: 2006-08-05. Putnam, R.D. 2000. Bowling Alone: The collapse and revival of American community. New York: Simon & Schuster Sarason, S.B. 1974. The psychological sense of community: Prospects for a community psychology. San Francisco: Jossey-Bass. 1986. "Commentary: The emergence of a conceptual center." Journal of Community Psychology, 14, 405–407. Smith, M.K. 2001. Community. Encyclopedia of informal education. Last updated: January 28, 2005. Retrieved: 2006-07-15. Types of organization
5696
https://en.wikipedia.org/wiki/Community%20college
Community college
A community college is a type of undergraduate higher education institution, generally leading to an associate degree, certificate, or diploma. The term can have different meanings in different countries: many community colleges have an "open enrollment" policy for students who have graduated from high school (also known as senior secondary school or upper secondary school). The term usually refers to a higher educational institution that provides workforce education and college transfer academic programs. Some institutions maintain athletic teams and dormitories similar to their university counterparts. Australia In Australia, the term "community college" refers to small private businesses running short (e.g. 6-week) courses, generally of a self-improvement or hobbyist nature. Equivalent to the American notion of community colleges are Technical and Further Education colleges, or TAFEs; these are institutions regulated mostly at state and territory level. There is also an increasing number of private providers colloquially called "colleges". TAFEs and other providers carry on the tradition of adult education, which was established in Australia around the mid-19th century, when evening classes were held to help adults enhance their numeracy and literacy skills. Most Australian universities can also be traced back to such forerunners, although obtaining a university charter has always changed their nature. In TAFEs and colleges today, courses are designed for personal development of an individual or for employment outcomes. Educational programs cover a variety of topics such as arts, languages, business and lifestyle. They are usually scheduled to run two, three or four days of the week, depending on the level of the course undertaken. A Certificate I course may run for only 4 hours twice a week for a term of 9 weeks. A full-time Diploma course might have classes 4 days per week for a year (36 weeks). Some courses may be offered in the evenings or at weekends to accommodate people working full-time. Funding for colleges may come from government grants and course fees. Many are not-for-profit organisations. Such TAFEs are located in metropolitan, regional and rural locations of Australia. Education offered by TAFEs and colleges has changed over the years. By the 1980s, many colleges had recognised a community need for computer training. Since then, thousands of people have increased their skills through IT courses. The majority of colleges by the late 20th century had also become Registered Training Organisations. They offer individuals a nurturing, non-traditional education venue to gain skills that better prepare them for the workplace and potential job openings. TAFEs and colleges have not traditionally offered bachelor's degrees, instead providing pathway arrangements with universities to continue towards degrees. The American innovation of the associate degree is being developed at some institutions. Certificate courses I to IV, diplomas and advanced diplomas are typically offered, the latter deemed equivalent to an undergraduate qualification, albeit typically in more vocational areas. Recently, some TAFE institutes (and private providers) have also become higher education providers in their own right and are now starting to offer bachelor's degree programs. Canada In Canada, colleges are adult educational institutions that provide higher education and tertiary education, and grant certificates and diplomas. Alternatively, Canadian colleges are often called "institutes" or "polytechnic institutes".
As well, in Ontario, the 24 colleges of applied arts and technology have been mandated to offer their own stand-alone degrees as well as to offer joint degrees with universities through "articulation agreements" that often result in students emerging with both a diploma and a degree. Thus, for example, the University of Guelph "twins" with Humber College and York University does the same with Seneca College. More recently, however, colleges have been offering a variety of their own degrees, often in business, technology, science, and other technical fields. Each province has its own educational system, as prescribed by the Canadian federalism model of governance. In the mid-1960s and early 1970s, most Canadian colleges began to provide practical education and training for the emerging and booming generation, and for immigrants from around the world who were entering Canada in increasing numbers at that time. A formative trend was the merging of the then separate vocational training and adult education (night school) institutions. Canadian colleges are either publicly funded or private post-secondary institutions (run for profit). In terms of academic pathways, Canadian colleges and universities collaborate with each other with the purpose of providing college students the opportunity to academically upgrade their education. Students can transfer their diplomas and earn transfer credits through their completed college credits towards undergraduate university degrees. The term associate degree is used in western Canada to refer to a two-year college arts or science degree, similar to how the term is used in the United States. In other parts of Canada, the term advanced degree is used to indicate a three- or four-year college program. In Quebec, three years is the norm for a university degree because a year of credit is earned in the CÉGEP (college) system. Even when speaking in English, people often refer to all colleges as Cégeps; however, the term is an acronym more correctly applied specifically to the French-language public system: Collège d'enseignement général et professionnel (CEGEP); in English: College of General and Vocational Education. The word "college" can also refer to a private high school in Quebec. Canadian community college systems List of colleges in Canada Colleges and Institutes Canada (CICan) – publicly funded educational institutions; formerly the Association of Canadian Community Colleges (ACCC) National Association of Career Colleges – privately funded educational institutions; formerly the Association of Canadian Career Colleges India In India, 98 community colleges are recognized by the University Grants Commission. The courses offered by these colleges are diplomas, advance diplomas and certificate courses. The duration of these courses usually ranges from six months to two years. Malaysia Community colleges in Malaysia are a network of educational institutions whereby vocational and technical skills training could be provided at all levels for school leavers before they entered the workforce. The community colleges also provide an infrastructure for rural communities to gain skills training through short courses as well as providing access to a post-secondary education. 
At the moment, most community colleges award qualifications up to Level 3 in the Malaysian Qualifications Framework (Certificate 3) in both the Skills sector (Sijil Kemahiran Malaysia, or Malaysian Skills Certificate) and the Vocational and Training sector, but the number of community colleges starting to award Level 4 qualifications (Diploma) is increasing. This is two levels below a bachelor's degree (Level 6 in the MQF), and students within the system who intend to further their studies to that level will usually seek entry into Advanced Diploma programs in public universities, polytechnics or accredited private providers. Philippines In the Philippines, a community school functions as an elementary or secondary school during the daytime and, towards the end of the day, converts into a community college. This type of institution offers night classes under the supervision of the same principal, with the same faculty members given a part-time college teaching load. The concept of the community college dates back to the time of the former Ministry of Education, Culture and Sports (MECS), which had under its wing the Bureaus of Elementary Education, Secondary Education, Higher Education and Vocational-Technical Education. MECS Secretary Cecilio Putong wrote in 1971 that a community school is a school established in the community, by the community, and for the community itself. Pedro T. Orata of Pangasinan shared the same idea, hence the establishment of a community college, now called the City College of Urdaneta. A community college like the one in Abuyog, Leyte can operate with only a PHP 124,000 annual budget in a two-story structure housing more than 700 students. United Kingdom Except for Scotland, this term is rarely used in the United Kingdom. When it is, a community college is a school which not only provides education for the school-age population (11–18) of the locality, but also additional services and education to adults and other members of the community. This education includes but is not limited to sports, adult literacy and lifestyle education. Usually when students finish their secondary school studies at age 16, they move on to a sixth form college where they study for their A-levels (although some secondary schools have integrated sixth forms). After the two-year A-level period, they may proceed to a college of further education or a university. The former is also known as a technical college. United States In the United States, community colleges, sometimes called junior colleges, technical colleges, two-year colleges, or city colleges, are primarily public institutions providing tertiary education, also known as continuing education, that focuses on certificates, diplomas, and associate degrees. After graduating from a community college, some students transfer to a liberal arts college or university for two to three years to complete a bachelor's degree. Before the 1970s, community colleges in the United States were more commonly referred to as junior colleges. That term is still used at some institutions. Public community colleges primarily attract and accept students from the local community and are usually supported by local tax revenue. They usually work with local and regional businesses to ensure students are being prepared for the local workforce. Research Some research organizations and publications focus upon the activities of community college, junior college, and technical college institutions.
Many of these institutions and organizations present the most current research and practical outcomes at annual community college conferences. The American Association of Community Colleges has provided oversight on community college research since the 1920s. AACC publishes a research journal called the Community College Journal. The Community College Research Center (CCRC) at Teachers College, Columbia University, has been conducting research on community colleges since 1996 to identify barriers to students' post-secondary access and promising solutions. CCRC's publishes research reports, briefs, and resources geared toward a variety of community college stakeholders, including college and college system leaders, faculty and support staff, policymakers, and institutional researchers. The Association of Community College Trustees (ACCT) has provided education for community college boards of directors and advocacy for community colleges since 1972. ACCT President and CEO J. Noah Brown published a book about the past, present, and future of community colleges, Charting a New Course for Community Colleges: Aligning Policies with Practice. The Center for Community College Student Engagement at the University of Texas at Austin administers surveys and provides data analysis support to member colleges regarding various factors of student engagement and involvement in community colleges in the United States and Canada. The Office of Community College Research and Leadership at the University of Illinois at Urbana–Champaign studies policies, programs, and practices designed to enhance outcomes for diverse youth and adults who seek to transition to and through college to employment. OCCRL's research spans the P-20 education continuum, with an intense focus on how community colleges impact education and employment outcomes for diverse learners. Results of OCCRL's studies of pathways and programs of study, extending from high school to community colleges and universities and to employment, are disseminated nationally and internationally. Reports and materials are derived from new knowledge captured and disseminated through OCCRL's website, scholarly publications, and other vehicles. Several peer-reviewed journals extensively publish research on community colleges: Community College Journal of Research and Practice Community College Review The College Quarterly Journal of Applied Research in the Community College Journal of Transformative Leadership and Policy Studies New Directions for Community Colleges See also Articulation (education) Distance learning E-learning Folk high school Junior college Lifelong learning In Australia Technical and further education Workers' Educational Association, also in the UK In the Philippines Association of Local Colleges and Universities Local college and university In the UK Further education References Further reading Baker, G. A. III (1994). A handbook on the community college in America: Its history, mission, and management. Westport, CT: Greenwood Press. Cohen, A.M., Brawer, F.B. (2003) The American Community College, 4th edition. San Francisco: Jossey Bass. Dougherty, K. J. (1994). The contradictory college: The conflicting origins, impacts, and futures of the community college. Albany, NY: State University of New York Press. Frye, J. H. (1992). The vision of the public junior college, 1900–1940. Westport, CT: Greenwood Press. Kasper, H.T. (2002). The changing role of community college. Occupational Outlook Quarterly, 46(4), 14–21. 
Types of university or college Vocational education
5697
https://en.wikipedia.org/wiki/Civil%20Rights%20Memorial
Civil Rights Memorial
The Civil Rights Memorial is an American memorial in Montgomery, Alabama, created by Maya Lin. The names of 41 people are inscribed on the granite fountain as martyrs who were killed in the civil rights movement. The memorial is sponsored by the Southern Poverty Law Center. Design The names included in the memorial belong to those who were killed between 1954 and 1968. The dates chosen represent a time when legalized segregation was prominent. In 1954 the U.S. Supreme Court ruled in Brown v. Board of Education that racial segregation in schools was unlawful, and 1968 is the year of the assassination of Martin Luther King Jr. The monument was created by Maya Lin, who also created the Vietnam Veterans Memorial in Washington, D.C. The Civil Rights Memorial was dedicated in 1989. The concept of Lin's design is based on the soothing and healing effect of water. It was inspired by a passage from King's 1963 "I Have a Dream" speech: "...we will not be satisfied until justice rolls down like waters and righteousness like a mighty stream...". The quotation in the passage, which is inscribed on the memorial, is a paraphrase of Amos 5:24, as translated in the American Standard Version of the Bible. The memorial is a fountain in the form of an asymmetric inverted stone cone. A film of water flows over the base of the cone, which contains the 41 names included. It is possible to touch the smooth film of water and to alter it temporarily; it quickly returns to smoothness. The memorial is designed in a timeline manner. It begins with Brown v. Board in 1954, and ends with Martin Luther King Jr.'s assassination in 1968. Tours and location The memorial is in downtown Montgomery, at 400 Washington Avenue, in an open plaza in front of the Civil Rights Memorial Center, which housed the offices of the Southern Poverty Law Center until the organization moved across the street into a new building in 2001. The memorial may be visited freely 24 hours a day, 7 days a week. The Civil Rights Memorial Center offers guided group tours, lasting approximately one hour. Tours are available by appointment, Monday to Saturday. The memorial is only a few blocks from other historic sites, including the Dexter Avenue King Memorial Baptist Church, the Alabama State Capitol, the Alabama Department of Archives and History, the corners where Claudette Colvin and Rosa Parks boarded buses in 1955 on which they would later refuse to give up their seats, and the Rosa Parks Library and Museum. Names included "Civil Rights Martyrs" The 41 names included in the Civil Rights Memorial are those of: Louis Allen Willie Brewster Benjamin Brown Johnnie Mae Chappell James Chaney Addie Mae Collins Vernon Dahmer Jonathan Daniels Henry Hezekiah Dee Roman Ducksworth Jr. Willie Edwards Medgar Evers Andrew Goodman Paul Guihard Samuel Hammond Jr. Jimmie Lee Jackson Wharlest Jackson Martin Luther King Jr. Bruce W. Klunder George W. Lee Herbert Lee Viola Liuzzo Denise McNair Delano Herman Middleton Charles Eddie Moore Oneal Moore William Lewis Moore Mack Charles Parker Lemuel Penn James Reeb John Earl Reese Carole Robertson Michael Schwerner Henry Ezekial Smith Lamar Smith Emmett Till Clarence Triggs Virgil Lamar Ware Cynthia Wesley Ben Chester White Sammy Younge Jr. "The Forgotten" "The Forgotten" are 74 people who are identified in a display at the Civil Rights Memorial Center. These names were not inscribed on the Memorial because there was insufficient information about their deaths at the time the Memorial was created.
However, it is thought that these people were killed as a result of racially motivated violence between 1952 and 1968. Andrew Lee Anderson Frank Andrews Isadore Banks Larry Bolden James Brazier Thomas Brewer Hilliard Brooks Charles Brown Jessie Brown Carrie Brumfield Eli Brumfield Silas (Ernest) Caston Clarence Cloninger Willie Countryman Vincent Dahmon Woodrow Wilson Daniels Joseph Hill Dumas Pheld Evans J. E. Evanston Mattie Greene Jasper Greenwood Jimmie Lee Griffith A. C. Hall Rogers Hamilton Collie Hampton Alphonso Harris Izell Henry Arthur James Hill Ernest Hunter Luther Jackson Ernest Jells Joe Franklin Jeter Marshall Johnson John Lee Willie Henry Lee Richard Lillard George Love Robert McNair Maybelle Mahone Sylvester Maxwell Clinton Melton James Andrew Miller Booker T. Mixon Nehemiah Montgomery Frank Morris James Earl Motley Sam O'Quinn Hubert Orsby Larry Payne C. H. Pickett Albert Pitts David Pitts Ernest McPharland Jimmy Powell William Roy Prather Johnny Queen Donald Rasberry Fred Robinson Johnny Robinson Willie Joe Sanford Marshall Scott Jr. Jessie James Shelby W. G. Singleton Ed Smith Eddie James Stewart Isaiah Taylor Freddie Lee Thomas Saleam Triggs Hubert Varner Clifton Walker James Waymers John Wesley Wilder Rodell Williamson Archie Wooden See also Civil rights movement in popular culture History of fountains in the United States Title I of the Civil Rights Act of 1968 References External links Official Site Civil Rights Martyrs 1989 establishments in Alabama 1989 sculptures Buildings and structures in Montgomery, Alabama Fountains in Alabama History of civil rights in the United States History of Montgomery, Alabama Monuments and memorials in Alabama Monuments and memorials of the civil rights movement Southern Poverty Law Center Tourist attractions in Montgomery, Alabama Martyrs' monuments and memorials
5698
https://en.wikipedia.org/wiki/Charles%20Babbage
Charles Babbage
Charles Babbage (26 December 1791 – 18 October 1871) was an English polymath. A mathematician, philosopher, inventor and mechanical engineer, Babbage originated the concept of a digital programmable computer. Babbage is considered by some to be "the father of the computer". Babbage is credited with inventing the first mechanical computer, the Difference Engine, which eventually led to more complex electronic designs, though all the essential ideas of modern computers are to be found in Babbage's Analytical Engine, programmed using a principle openly borrowed from the Jacquard loom. Babbage had a broad range of interests in addition to his work on computers, covered in his 1832 book On the Economy of Machinery and Manufactures. His varied work in other fields has led him to be described as "pre-eminent" among the many polymaths of his century. Babbage, who died before the complete successful engineering of many of his designs, including his Difference Engine and Analytical Engine, remained a prominent figure in the conception of computing. Parts of Babbage's incomplete mechanisms are on display in the Science Museum in London. In 1991, a functioning difference engine was constructed from Babbage's original plans. Built to tolerances achievable in the 19th century, the finished engine operated successfully, indicating that Babbage's machine would have worked. Early life Babbage's birthplace is disputed, but according to the Oxford Dictionary of National Biography he was most likely born at 44 Crosby Row, Walworth Road, London, England. A blue plaque on the junction of Larcom Street and Walworth Road commemorates the event. His date of birth was given in his obituary in The Times as 26 December 1792; but then a nephew wrote to say that Babbage was born one year earlier, in 1791. The parish register of St. Mary's, Newington, London, shows that Babbage was baptised on 6 January 1792, supporting a birth year of 1791. Babbage was one of four children of Benjamin Babbage and Betsy Plumleigh Teape. His father was a banking partner of William Praed in founding Praed's & Co. of Fleet Street, London, in 1801. In 1808, the Babbage family moved into the old Rowdens house in East Teignmouth. Around the age of eight, Babbage was sent to a country school in Alphington near Exeter to recover from a life-threatening fever. For a short time, he attended King Edward VI Grammar School in Totnes, South Devon, but his health forced him back to private tutors. Babbage then joined the 30-student Holmwood Academy, in Baker Street, Enfield, Middlesex, under the Reverend Stephen Freeman. The academy had a library that prompted Babbage's love of mathematics. He studied with two more private tutors after leaving the academy. The first was a clergyman near Cambridge; through him Babbage encountered Charles Simeon and his evangelical followers, but the tuition was not what he needed. He was brought home, to study at the Totnes school: this was at age 16 or 17. The second was an Oxford tutor, under whom Babbage reached a level in Classics sufficient to be accepted by the University of Cambridge. At the University of Cambridge Babbage arrived at Trinity College, Cambridge, in October 1810. He was already self-taught in some parts of contemporary mathematics; he had read Robert Woodhouse, Joseph Louis Lagrange, and Marie Agnesi. As a result, he was disappointed in the standard mathematical instruction available at the university.
Babbage, John Herschel, George Peacock, and several other friends formed the Analytical Society in 1812; they were also close to Edward Ryan. As a student, Babbage was also a member of other societies such as The Ghost Club, concerned with investigating supernatural phenomena, and the Extractors Club, dedicated to liberating its members from the madhouse, should any be committed to one. In 1812, Babbage transferred to Peterhouse, Cambridge. He was the top mathematician there, but did not graduate with honours. He instead received a degree without examination in 1814. He had defended a thesis that was considered blasphemous in the preliminary public disputation, but it is not known whether this fact is related to his not sitting the examination. After Cambridge Considering his reputation, Babbage quickly made progress. He lectured to the Royal Institution on astronomy in 1815, and was elected a Fellow of the Royal Society in 1816. After graduation, on the other hand, he applied for positions unsuccessfully, and had little in the way of a career. In 1816 he was a candidate for a teaching job at Haileybury College; he had recommendations from James Ivory and John Playfair, but lost out to Henry Walter. In 1819, Babbage and Herschel visited Paris and the Society of Arcueil, meeting leading French mathematicians and physicists. That year Babbage applied to be a professor at the University of Edinburgh, with the recommendation of Pierre Simon Laplace; the post went to William Wallace. With Herschel, Babbage worked on the electrodynamics of Arago's rotations, publishing in 1825. Their explanations were only transitional, being picked up and broadened by Michael Faraday. The phenomena are now part of the theory of eddy currents, and Babbage and Herschel missed some of the clues to unification of electromagnetic theory, staying close to Ampère's force law. Babbage purchased the actuarial tables of George Barrett, who died in 1821 leaving unpublished work, and surveyed the field in 1826 in Comparative View of the Various Institutions for the Assurance of Lives. This interest followed a project to set up an insurance company, prompted by Francis Baily and mooted in 1824, but not carried out. Babbage did calculate actuarial tables for that scheme, using Equitable Society mortality data from 1762 onwards. During this whole period, Babbage depended awkwardly on his father's support, given his father's attitude to his early marriage of 1814: he and Edward Ryan wedded the Whitmore sisters. He made a home in Marylebone in London and established a large family. On his father's death in 1827, Babbage inherited a large estate (valued at around £100,000), making him independently wealthy. After his wife's death in the same year he spent time travelling. In Italy he met Leopold II, Grand Duke of Tuscany, foreshadowing a later visit to Piedmont. In April 1828 he was in Rome, and relying on Herschel to manage the difference engine project, when he heard that he had become a professor at Cambridge, a position he had three times failed to obtain (in 1820, 1823 and 1826). Royal Astronomical Society Babbage was instrumental in founding the Royal Astronomical Society in 1820, initially known as the Astronomical Society of London. Its original aims were to reduce astronomical calculations to a more standard form, and to circulate data.
These directions were closely connected with Babbage's ideas on computation, and in 1824 he won its Gold Medal, cited "for his invention of an engine for calculating mathematical and astronomical tables". Babbage's motivation to overcome errors in tables by mechanisation had been a commonplace since Dionysius Lardner wrote about it in 1834 in the Edinburgh Review (under Babbage's guidance). The context of these developments is still debated. Babbage's own account of the origin of the difference engine begins with the Astronomical Society's wish to improve The Nautical Almanac. Babbage and Herschel were asked to oversee a trial project, to recalculate some part of those tables. With the results to hand, discrepancies were found. This was in 1821 or 1822, and was the occasion on which Babbage formulated his idea for mechanical computation. The issue of the Nautical Almanac is now described as a legacy of a polarisation in British science caused by attitudes to Sir Joseph Banks, who had died in 1820. Babbage studied the requirements to establish a modern postal system, with his friend Thomas Frederick Colby, concluding there should be a uniform rate that was put into effect with the introduction of the Uniform Fourpenny Post supplanted by the Uniform Penny Post in 1839 and 1840. Colby was another of the founding group of the Society. He was also in charge of the Survey of Ireland. Herschel and Babbage were present at a celebrated operation of that survey, the remeasuring of the Lough Foyle baseline. British Lagrangian School The Analytical Society had initially been no more than an undergraduate provocation. During this period it had some more substantial achievements. In 1816 Babbage, Herschel and Peacock published a translation from French of the lectures of Sylvestre Lacroix, which was then the state-of-the-art calculus textbook. Reference to Lagrange in calculus terms marks out the application of what are now called formal power series. British mathematicians had used them from about 1730 to 1760. As re-introduced, they were not simply applied as notations in differential calculus. They opened up the fields of functional equations (including the difference equations fundamental to the difference engine) and operator (D-module) methods for differential equations. The analogy of difference and differential equations was notationally changing Δ to D, as a "finite" difference becomes "infinitesimal". These symbolic directions became popular, as operational calculus, and pushed to the point of diminishing returns. The Cauchy concept of limit was kept at bay. Woodhouse had already founded this second "British Lagrangian School" with its treatment of Taylor series as formal. In this context function composition is complicated to express, because the chain rule is not simply applied to second and higher derivatives. This matter was known to Woodhouse by 1803, who took from Louis François Antoine Arbogast what is now called Faà di Bruno's formula. In essence it was known to Abraham De Moivre (1697). Herschel found the method impressive, Babbage knew of it, and it was later noted by Ada Lovelace as compatible with the analytical engine. In the period to 1820 Babbage worked intensively on functional equations in general, and resisted both conventional finite differences and Arbogast's approach (in which Δ and D were related by the simple additive case of the exponential map). But via Herschel he was influenced by Arbogast's ideas in the matter of iteration, i.e. 
composing a function with itself, possibly many times. Writing in a major paper on functional equations in the Philosophical Transactions (1815/6), Babbage said his starting point was work of Gaspard Monge. Academic From 1828 to 1839, Babbage was Lucasian Professor of Mathematics at Cambridge. Not a conventional resident don, and inattentive to his teaching responsibilities, he wrote three topical books during this period of his life. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1832. Babbage was out of sympathy with colleagues: George Biddell Airy, his predecessor as Lucasian Professor of Mathematics at Trinity College, Cambridge, thought an issue should be made of his lack of interest in lecturing. Babbage planned to lecture in 1831 on political economy. Babbage's reforming direction looked to see university education more inclusive, universities doing more for research, a broader syllabus and more interest in applications; but William Whewell found the programme unacceptable. A controversy Babbage had with Richard Jones lasted for six years. He never did give a lecture. It was during this period that Babbage tried to enter politics. Simon Schaffer writes that his views of the 1830s included disestablishment of the Church of England, a broader political franchise, and inclusion of manufacturers as stakeholders. He twice stood for Parliament as a candidate for the borough of Finsbury. In 1832 he came in third among five candidates, missing out by some 500 votes in the two-member constituency when two other reformist candidates, Thomas Wakley and Christopher Temple, split the vote. In his memoirs Babbage related how this election brought him the friendship of Samuel Rogers: his brother Henry Rogers wished to support Babbage again, but died within days. In 1834 Babbage finished last among four. In 1832, Babbage, Herschel and Ivory were appointed Knights of the Royal Guelphic Order, however they were not subsequently made knights bachelor to entitle them to the prefix Sir, which often came with appointments to that foreign order (though Herschel was later created a baronet). "Declinarians", learned societies and the BAAS Babbage now emerged as a polemicist. One of his biographers notes that all his books contain a "campaigning element". His Reflections on the Decline of Science and some of its Causes (1830) stands out, however, for its sharp attacks. It aimed to improve British science, and more particularly to oust Davies Gilbert as President of the Royal Society, which Babbage wished to reform. It was written out of pique, when Babbage hoped to become the junior secretary of the Royal Society, as Herschel was the senior, but failed because of his antagonism to Humphry Davy. Michael Faraday had a reply written, by Gerrit Moll, as On the Alleged Decline of Science in England (1831). On the front of the Royal Society Babbage had no impact, with the bland election of the Duke of Sussex to succeed Gilbert the same year. As a broad manifesto, on the other hand, his Decline led promptly to the formation in 1831 of the British Association for the Advancement of Science (BAAS). The Mechanics' Magazine in 1831 identified as Declinarians the followers of Babbage. In an unsympathetic tone it pointed out David Brewster writing in the Quarterly Review as another leader; with the barb that both Babbage and Brewster had received public money. 
In the debate of the period on statistics (qua data collection) and what is now statistical inference, the BAAS in its Statistical Section (which owed something also to Whewell) opted for data collection. This Section was the sixth, established in 1833 with Babbage as chairman and John Elliot Drinkwater as secretary. The foundation of the Statistical Society followed. Babbage was its public face, backed by Richard Jones and Robert Malthus. On the Economy of Machinery and Manufactures Babbage published On the Economy of Machinery and Manufactures (1832), on the organisation of industrial production. It was an influential early work of operational research. John Rennie the Younger in addressing the Institution of Civil Engineers on manufacturing in 1846 mentioned mostly surveys in encyclopaedias, and Babbage's book was first an article in the Encyclopædia Metropolitana, the form in which Rennie noted it, in the company of related works by John Farey Jr., Peter Barlow and Andrew Ure. From An essay on the general principles which regulate the application of machinery to manufactures and the mechanical arts (1827), which became the Encyclopædia Metropolitana article of 1829, Babbage developed the schematic classification of machines that, combined with discussion of factories, made up the first part of the book. The second part considered the "domestic and political economy" of manufactures. The book sold well, and quickly went to a fourth edition (1836). Babbage represented his work as largely a result of actual observations in factories, British and abroad. It was not, in its first edition, intended to address deeper questions of political economy; the second (late 1832) did, with three further chapters including one on piece rate. The book also contained ideas on rational design in factories, and profit sharing. "Babbage principle" In Economy of Machinery was described what is now called the "Babbage principle". It pointed out commercial advantages available with more careful division of labour. As Babbage himself noted, it had already appeared in the work of Melchiorre Gioia in 1815. The term was introduced in 1974 by Harry Braverman. Related formulations are the "principle of multiples" of Philip Sargant Florence, and the "balance of processes". What Babbage remarked is that skilled workers typically spend parts of their time performing tasks that are below their skill level. If the labour process can be divided among several workers, labour costs may be cut by assigning only high-skill tasks to high-cost workers, restricting other tasks to lower-paid workers. He also pointed out that training or apprenticeship can be taken as fixed costs; but that returns to scale are available by his approach of standardisation of tasks, therefore again favouring the factory system. His view of human capital was restricted to minimising the time period for recovery of training costs. Publishing Another aspect of the work was its detailed breakdown of the cost structure of book publishing. Babbage took the unpopular line, from the publishers' perspective, of exposing the trade's profitability. He went as far as to name the organisers of the trade's restrictive practices. Twenty years later he attended a meeting hosted by John Chapman to campaign against the Booksellers Association, still a cartel. Influence It has been written that "what Arthur Young was to agriculture, Charles Babbage was to the factory visit and machinery". 
Babbage's theories are said to have influenced the layout of the 1851 Great Exhibition, and his views had a strong effect on his contemporary George Julius Poulett Scrope. Karl Marx argued that the source of the productivity of the factory system was exactly the combination of the division of labour with machinery, building on Adam Smith, Babbage and Ure. Where Marx picked up on Babbage and disagreed with Smith was on the motivation for division of labour by the manufacturer: as Babbage did, he wrote that it was for the sake of profitability, rather than productivity, and identified an impact on the concept of a trade. John Ruskin went further, to oppose completely what manufacturing in Babbage's sense stood for. Babbage also affected the economic thinking of John Stuart Mill. George Holyoake saw Babbage's detailed discussion of profit sharing as substantive, in the tradition of Robert Owen and Charles Fourier, if requiring the attentions of a benevolent captain of industry, and ignored at the time. Works by Babbage and Ure were published in French translation in 1830; On the Economy of Machinery was translated in 1833 into French by Édouard Biot, and into German the same year by Gottfried Friedenberg. The French engineer and writer on industrial organisation Léon Lalanne was influenced by Babbage, but also by the economist Claude Lucien Bergery, in reducing the issues to "technology". William Jevons connected Babbage's "economy of labour" with his own labour experiments of 1870. The Babbage principle is an inherent assumption in Frederick Winslow Taylor's scientific management. Mary Everest Boole claimed that there was profound influence – via her uncle George Everest – of Indian thought in general and Indian logic, in particular, on Babbage and on her husband George Boole, as well as on Augustus De Morgan: Think what must have been the effect of the intense Hinduizing of three such men as Babbage, De Morgan, and George Boole on the mathematical atmosphere of 1830–65. What share had it in generating the Vector Analysis and the mathematics by which investigations in physical science are now conducted? Natural theology In 1837, responding to the series of eight Bridgewater Treatises, Babbage published his Ninth Bridgewater Treatise, under the title On the Power, Wisdom and Goodness of God, as manifested in the Creation. In this work Babbage weighed in on the side of uniformitarianism in a current debate. He preferred the conception of creation in which a God-given natural law dominated, removing the need for continuous "contrivance". The book is a work of natural theology, and incorporates extracts from related correspondence of Herschel with Charles Lyell. Babbage put forward the thesis that God had the omnipotence and foresight to create as a divine legislator. In this book, Babbage dealt with relating interpretations between science and religion; on the one hand, he insisted that "there exists no fatal collision between the words of Scripture and the facts of nature;" on the other hand, he wrote that the Book of Genesis was not meant to be read literally in relation to scientific terms. Against those who said these were in conflict, he wrote "that the contradiction they have imagined can have no real existence, and that whilst the testimony of Moses remains unimpeached, we may also be permitted to confide in the testimony of our senses." The Ninth Bridgewater Treatise was quoted extensively in Vestiges of the Natural History of Creation. 
The parallel with Babbage's computing machines is made explicit, as allowing plausibility to the theory that transmutation of species could be pre-programmed. Jonar Ganeri, author of Indian Logic, believes Babbage may have been influenced by Indian thought; one possible route would be through Henry Thomas Colebrooke. Mary Everest Boole argues that Babbage was introduced to Indian thought in the 1820s by her uncle George Everest: Some time about 1825, [Everest] came to England for two or three years, and made a fast and lifelong friendship with Herschel and with Babbage, who was then quite young. I would ask any fair-minded mathematician to read Babbage's Ninth Bridgewater Treatise and compare it with the works of his contemporaries in England; and then ask himself whence came the peculiar conception of the nature of miracle which underlies Babbage's ideas of Singular Points on Curves (Chap, viii) – from European Theology or Hindu Metaphysic? Oh! how the English clergy of that day hated Babbage's book! Religious views Babbage was raised in the Protestant form of the Christian faith, his family having inculcated in him an orthodox form of worship. He explained: Rejecting the Athanasian Creed as a "direct contradiction in terms", in his youth he looked to Samuel Clarke's works on religion, of which Being and Attributes of God (1704) exerted a particularly strong influence on him. Later in life, Babbage concluded that "the true value of the Christian religion rested, not on speculative [theology] … but … upon those doctrines of kindness and benevolence which that religion claims and enforces, not merely in favour of man himself but of every creature susceptible of pain or of happiness." In his autobiography Passages from the Life of a Philosopher (1864), Babbage wrote a whole chapter on the topic of religion, where he identified three sources of divine knowledge: A priori or mystical experience From Revelation From the examination of the works of the Creator He stated, on the basis of the design argument, that studying the works of nature had been the more appealing evidence, and the one which led him to actively profess the existence of God. Advocating for natural theology, he wrote: Like Samuel Vince, Babbage also wrote a defence of the belief in divine miracles. Against objections previously posed by David Hume, Babbage advocated for the belief of divine agency, stating "we must not measure the credibility or incredibility of an event by the narrow sphere of our own experience, nor forget that there is a Divine energy which overrides what we familiarly call the laws of nature." He alluded to the limits of human experience, expressing: "all that we see in a miracle is an effect which is new to our observation, and whose cause is concealed. The cause may be beyond the sphere of our observation, and would be thus beyond the familiar sphere of nature; but this does not make the event a violation of any law of nature. The limits of man's observation lie within very narrow boundaries, and it would be arrogance to suppose that the reach of man's power is to form the limits of the natural world." Later life The British Association was consciously modelled on the Deutsche Naturforscher-Versammlung, founded in 1822. It rejected romantic science as well as metaphysics, and started to entrench the divisions of science from literature, and professionals from amateurs. 
Belonging as he did to the "Wattite" faction in the BAAS, represented in particular by James Watt the younger, Babbage identified closely with industrialists. He wanted to go faster in the same directions, and had little time for the more gentlemanly component of its membership. Indeed, he subscribed to a version of conjectural history that placed industrial society as the culmination of human development (and shared this view with Herschel). A clash with Roderick Murchison led in 1838 to his withdrawal from further involvement. At the end of the same year he sent in his resignation as Lucasian professor, walking away also from the Cambridge struggle with Whewell. His interests became more focussed, on computation and metrology, and on international contacts. Metrology programme A project announced by Babbage was to tabulate all physical constants (referred to as "constants of nature", a phrase in itself a neologism), and then to compile an encyclopaedic work of numerical information. He was a pioneer in the field of "absolute measurement". His ideas followed on from those of Johann Christian Poggendorff, and were mentioned to Brewster in 1832. There were to be 19 categories of constants, and Ian Hacking sees these as reflecting in part Babbage's "eccentric enthusiasms". Babbage's paper On Tables of the Constants of Nature and Art was reprinted by the Smithsonian Institution in 1856, with an added note that the physical tables of Arnold Henry Guyot "will form a part of the important work proposed in this article". Exact measurement was also key to the development of machine tools. Here again Babbage is considered a pioneer, with Henry Maudslay, William Sellers, and Joseph Whitworth. Engineer and inventor Through the Royal Society Babbage acquired the friendship of the engineer Marc Brunel. It was through Brunel that Babbage knew of Joseph Clement, and so came to encounter the artisans whom he observed in his work on manufactures. Babbage provided an introduction for Isambard Kingdom Brunel in 1830, for a contact with the proposed Bristol & Birmingham Railway. He carried out studies, around 1838, to show the superiority of the broad gauge for railways, used by Brunel's Great Western Railway. In 1838, Babbage invented the pilot (also called a cow-catcher), the metal frame attached to the front of locomotives that clears the tracks of obstacles; he also constructed a dynamometer car. His eldest son, Benjamin Herschel Babbage, worked as an engineer for Brunel on the railways before emigrating to Australia in the 1850s. Babbage also invented an ophthalmoscope, which he gave to Thomas Wharton Jones for testing. Jones, however, ignored it. The device only came into use after being independently invented by Hermann von Helmholtz. Cryptography Babbage achieved notable results in cryptography, though this was still not known a century after his death. Letter frequency was category 18 of Babbage's tabulation project. Joseph Henry later defended interest in it, in the absence of the facts, as relevant to the management of movable type. As early as 1845, Babbage had solved a cipher that had been posed as a challenge by his nephew Henry Hollier, and in the process, he made a discovery about ciphers that were based on Vigenère tables. Specifically, he realised that enciphering plain text with a keyword rendered the cipher text subject to modular arithmetic. During the Crimean War of the 1850s, Babbage broke Vigenère's autokey cipher as well as the much weaker cipher that is called Vigenère cipher today. 
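Babbage's observation that a keyword cipher renders the cipher text "subject to modular arithmetic" can be made concrete with a short modern sketch. This is an illustration only, not Babbage's notation or method; the use of Python, the function names, and the sample keyword are assumptions for the example. Each letter is treated as a number from 0 to 25, encryption adds the corresponding keyword letter modulo 26, and letters enciphered under the same keyword position therefore form a simple shift cipher that can be analysed separately.

```python
# Illustrative sketch only: the Vigenère cipher expressed as modular arithmetic.
# Each letter maps to 0-25; encryption adds the keyword letter mod 26 and
# decryption subtracts it, so every keyword position is an independent shift.

def _shift(ch: str, key_ch: str, sign: int) -> str:
    a = ord('A')
    return chr((ord(ch) - a + sign * (ord(key_ch) - a)) % 26 + a)

def vigenere(text: str, keyword: str, decrypt: bool = False) -> str:
    letters = [c for c in text.upper() if c.isalpha()]
    key = keyword.upper()
    sign = -1 if decrypt else 1
    return ''.join(_shift(c, key[i % len(key)], sign) for i, c in enumerate(letters))

if __name__ == "__main__":
    cipher = vigenere("THEANALYTICALENGINE", "LOOM")   # hypothetical example inputs
    print(cipher)
    print(vigenere(cipher, "LOOM", decrypt=True))      # recovers THEANALYTICALENGINE
```

It is this additive structure that makes the periodic (non-autokey) cipher vulnerable: once the keyword length is guessed, each position can be solved by ordinary letter-frequency counting, the kind of data Babbage had already included in his tabulation project.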
His discovery was kept a military secret, and was not published. Credit for the result was instead given to Friedrich Kasiski, a Prussian infantry officer, who made the same discovery some years later. However, in 1854, Babbage published the solution of a Vigenère cipher, which had been published previously in the Journal of the Society of Arts. In 1855, Babbage also published a short letter, "Cypher Writing", in the same journal. Nevertheless, his priority was not established until 1985. Public nuisances Babbage involved himself in well-publicised but unpopular campaigns against public nuisances. He once counted all the broken panes of glass of a factory, publishing in 1857 a "Table of the Relative Frequency of the Causes of Breakage of Plate Glass Windows": Of 464 broken panes, 14 were caused by "drunken men, women or boys". Babbage's distaste for commoners (the Mob) included writing "Observations of Street Nuisances" in 1864, as well as tallying up 165 "nuisances" over a period of 80 days. He especially hated street music, and in particular the music of organ grinders, against whom he railed in various venues. The following quotation is typical: Babbage was not alone in his campaign. A convert to the cause was the MP Michael Thomas Bass. In the 1860s, Babbage also took up the anti-hoop-rolling campaign. He blamed hoop-rolling boys for driving their iron hoops under horses' legs, with the result that the rider is thrown and very often the horse breaks a leg. Babbage achieved a certain notoriety in this matter, being denounced in debate in Commons in 1864 for "commencing a crusade against the popular game of tip-cat and the trundling of hoops." Computing pioneer Babbage's machines were among the first mechanical computers. That they were not actually completed was largely because of funding problems and clashes of personality, most notably with George Biddell Airy, the Astronomer Royal. Babbage directed the building of some steam-powered machines that achieved some modest success, suggesting that calculations could be mechanised. For more than ten years he received government funding for his project, which amounted to £17,000, but eventually the Treasury lost confidence in him. While Babbage's machines were mechanical and unwieldy, their basic architecture was similar to that of a modern computer. The data and program memory were separated, operation was instruction-based, the control unit could make conditional jumps, and the machine had a separate I/O unit. Background on mathematical tables In Babbage's time, printed mathematical tables were calculated by human computers; in other words, by hand. They were central to navigation, science and engineering, as well as mathematics. Mistakes were known to occur in transcription as well as calculation. At Cambridge, Babbage saw the fallibility of this process, and the opportunity of adding mechanisation into its management. His own account of his path towards mechanical computation references a particular occasion: There was another period, seven years later, when his interest was aroused by the issues around computation of mathematical tables. The French official initiative by Gaspard de Prony, and its problems of implementation, were familiar to him. After the Napoleonic Wars came to a close, scientific contacts were renewed on the level of personal contact: in 1819 Charles Blagden was in Paris looking into the printing of the stalled de Prony project, and lobbying for the support of the Royal Society. 
In works of the 1820s and 1830s, Babbage referred in detail to de Prony's project.
Difference engine
Babbage began in 1822 with what he called the difference engine, made to compute values of polynomial functions. It was created to calculate a series of values automatically. By using the method of finite differences, it was possible to avoid the need for multiplication and division. For a prototype difference engine, Babbage brought in Joseph Clement to implement the design in 1823. Clement worked to high standards, but his machine tools were particularly elaborate. Under the standard terms of business of the time, he could charge for their construction, and would also own them. He and Babbage fell out over costs around 1831. Some parts of the prototype survive in the Museum of the History of Science, Oxford. This prototype evolved into the "first difference engine". It remained unfinished, and the finished portion is located at the Science Museum in London. This first difference engine would have been composed of around 25,000 parts, would have weighed around fifteen tons, and would have stood around 2.4 metres (8 ft) tall. Although Babbage received ample funding for the project, it was never completed. He later (1847–1849) produced detailed drawings for an improved version, "Difference Engine No. 2", but did not receive funding from the British government. His design was finally constructed in 1989–1991, using his plans and 19th-century manufacturing tolerances. It performed its first calculation at the Science Museum, London, returning results to 31 digits. Nine years later, in 2000, the Science Museum completed the printer Babbage had designed for the difference engine.
Completed models
The Science Museum has constructed two Difference Engines according to Babbage's plans for Difference Engine No. 2. One is owned by the museum. The other, owned by the technology multimillionaire Nathan Myhrvold, went on exhibition at the Computer History Museum in Mountain View, California, on 10 May 2008. The two models that have been constructed are not replicas.
Analytical Engine
After the attempt at making the first difference engine fell through, Babbage worked to design a more complex machine called the Analytical Engine. He hired C. G. Jarvis, who had previously worked for Clement as a draughtsman. The Analytical Engine marks the transition from mechanised arithmetic to fully fledged general-purpose computation. It is largely on this design that Babbage's standing as a computer pioneer rests. The major innovation was that the Analytical Engine was to be programmed using punched cards: the Engine was intended to use loops of Jacquard's punched cards to control a mechanical calculator, which could use as input the results of preceding computations. The machine was also intended to employ several features subsequently used in modern computers, including sequential control, branching and looping. It would have been the first mechanical device to be, in principle, Turing-complete. The Engine was not a single physical machine, but rather a succession of designs that Babbage tinkered with until his death in 1871.
Ada Lovelace and Italian followers
Ada Lovelace, who corresponded with Babbage during his development of the Analytical Engine, is credited with developing an algorithm that would enable the Engine to calculate a sequence of Bernoulli numbers. Despite documentary evidence in Lovelace's own handwriting, some scholars dispute to what extent the ideas were Lovelace's own.
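The sequence at the centre of Lovelace's Note G can be illustrated with a brief modern sketch. The code below is not a transcription of her program, which was laid out as a table of Engine operations; it is a hypothetical restatement that generates the Bernoulli numbers from a standard recurrence, using only the addition, subtraction, multiplication and division the Analytical Engine was designed to perform.

```python
# Illustrative sketch only: the Bernoulli numbers that Lovelace's Note G
# described computing on the Analytical Engine, generated here from the
# standard recurrence  sum_{k=0}^{m} C(m+1, k) * B_k = 0  for m >= 1,
# with B_0 = 1. A modern restatement, not Lovelace's actual program.
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Return [B_0, B_1, ..., B_n] as exact fractions (convention B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-acc / (m + 1))  # solve the recurrence for B_m
    return B

if __name__ == "__main__":
    for i, b in enumerate(bernoulli_numbers(8)):
        print(f"B_{i} = {b}")
```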
For this achievement, she is often described as the first computer programmer, though no programming language had yet been invented. Lovelace also translated and wrote literature supporting the project. Describing the engine's programming by punch cards, she wrote: "We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves." Babbage visited Turin in 1840 at the invitation of Giovanni Plana, who had developed in 1831 an analog computing machine that served as a perpetual calendar. There, in 1840, Babbage gave the only public explanation and lectures about the Analytical Engine. In 1842 Charles Wheatstone approached Lovelace to translate a paper by Luigi Menabrea, who had taken notes of Babbage's Turin talks, and Babbage asked her to add something of her own. Fortunato Prandi, who acted as interpreter in Turin, was an Italian exile and follower of Giuseppe Mazzini.
Swedish followers
Per Georg Scheutz wrote about the difference engine in 1830, and experimented in automated computation. After 1834 and Lardner's Edinburgh Review article he set up a project of his own, doubting whether Babbage's initial plan could be carried out. This he pushed through with his son, Edvard Scheutz. Another Swedish engine was that of Martin Wiberg (1860).
Legacy
In 2011, researchers in Britain proposed a multimillion-pound project, "Plan 28", to construct Babbage's Analytical Engine. Since Babbage's plans were continually being refined and were never completed, they intended to engage the public in the project and crowd-source the analysis of what should be built. It would have the equivalent of 675 bytes of memory, and run at a clock speed of about 7 Hz. They hoped to complete it by the 150th anniversary of Babbage's death, in 2021. Advances in MEMS and nanotechnology have led to recent high-tech experiments in mechanical computation. The benefits suggested include operation in high-radiation or high-temperature environments. These modern versions of mechanical computation were highlighted in The Economist in its special "end of the millennium" black cover issue, in an article entitled "Babbage's Last Laugh". Due to his association with the town, Babbage was chosen in 2007 to appear on the 5 Totnes pound note. An image of Babbage features in the British cultural icons section of the newly designed British passport in 2015.
Family
On 25 July 1814, Babbage married Georgiana Whitmore, sister of British parliamentarian William Wolryche-Whitmore, at St. Michael's Church in Teignmouth, Devon. The couple lived at Dudmaston Hall, Shropshire (where Babbage engineered the central heating system), before moving to 5 Devonshire Street, London, in 1815. Charles and Georgiana had eight children, but only four – Benjamin Herschel, Georgiana Whitmore, Dugald Bromhead and Henry Prevost – survived childhood. Charles' wife Georgiana died in Worcester on 1 September 1827, the same year as his father, their second son (also named Charles) and their newborn son Alexander.
Benjamin Herschel Babbage (1815–1878)
Charles Whitmore Babbage (1817–1827)
Georgiana Whitmore Babbage (1818 – 26 September 1834)
Edward Stewart Babbage (1819–1821)
Francis Moore Babbage (1821–????)
Dugald Bromhead (Bromheald?) Babbage (1823–1901)
(Maj-Gen) Henry Prevost Babbage (1824–1918)
Alexander Forbes Babbage (1827–1827)
His youngest surviving son, Henry Prevost Babbage (1824–1918), went on to create six small demonstration pieces for Difference Engine No.
1 based on his father's designs, one of which was sent to Harvard University, where it was later discovered by Howard H. Aiken, pioneer of the Harvard Mark I. Henry Prevost's 1910 Analytical Engine Mill, previously on display at Dudmaston Hall, is now on display at the Science Museum.
Death
Babbage lived and worked for over 40 years at 1 Dorset Street, Marylebone, where he died, at the age of 79, on 18 October 1871; he was buried in London's Kensal Green Cemetery. According to Horsley, Babbage died "of renal inadequacy, secondary to cystitis." He had declined both a knighthood and a baronetcy. He also argued against hereditary peerages, favouring life peerages instead.
Autopsy report
In 1983, the autopsy report for Charles Babbage was discovered and later published by his great-great-grandson. A copy of the original is also available. Half of Babbage's brain is preserved at the Hunterian Museum in the Royal College of Surgeons in London. The other half is on display in the Science Museum, London.
Memorials
There is a black plaque commemorating the 40 years Babbage spent at 1 Dorset Street, London. Locations, institutions and other things named after Babbage include:
The Moon crater Babbage
The Charles Babbage Institute, an information technology archive and research center at the University of Minnesota
Babbage River Falls, Yukon, Canada
The Charles Babbage Premium, an annual computing award
A locomotive named after Charles Babbage by British Rail in the 1990s
Babbage Island, Western Australia
The Babbage Building at the University of Plymouth, where the university's school of computing is based
"Babbage", The Economist's Science and Technology blog
The former chain retail computer and video-games store "Babbage's" (now GameStop)
In fiction and film
Babbage frequently appears in steampunk works; he has been called an iconic figure of the genre. Other works in which Babbage appears include:
The 2008 short film Babbage, screened at the 2008 Cannes Film Festival, a 2009 finalist with Haydenfilms, and shown at the 2009 HollyShorts Film Festival and other international film festivals. The film shows Babbage at a dinner party, with guests discussing his life and work.
The Thrilling Adventures of Lovelace and Babbage by Sydney Padua, a cartoon alternate history in which Babbage and Lovelace succeed in building the Analytical Engine. It quotes heavily from the writings of Lovelace, Babbage and their contemporaries.
A strip of the webcomic Hark! A Vagrant, which cartoonist Kate Beaton devoted to Charles and Georgiana Babbage.
The Doctor Who episode "Spyfall, Part 2" (Season 12, episode 2), which features Charles Babbage and Ada Gordon as characters who assist the Doctor when she is stuck in the year 1834.
Publications
See also
Babbage's congruence
IEEE Computer Society Charles Babbage Award
List of pioneers in computer science
Notes
References
External links
The Babbage Papers: the papers held by the Science Museum Library and Archives, which relate mostly to Babbage's automatic calculating engines
The Babbage Engine: Computer History Museum, Mountain View CA, US.
Multi-page account of Babbage, his engines and his associates, including a video of the Museum's functioning replica of the Difference Engine No 2 in action Analytical Engine Museum John Walker's (of AutoCAD fame) comprehensive catalogue of the complete technical works relating to Babbage's machine. Charles Babbage A history at the School of Mathematics and Statistics, University of St Andrews Scotland. Mr. Charles Babbage: obituary from The Times (1871) The Babbage Pages Charles Babbage, The Online Books Page, University of Pennsylvania The Babbage Difference Engine: an overview of how it works "On a Method of Expressing by Signs the Action of Machinery", 1826. Original edition Charles Babbage Institute: pages on "Who Was Charles Babbage?" including biographical note, description of Difference Engine No. 2, publications by Babbage, archival and published sources on Babbage, sources on Babbage and Ada Lovelace Babbage's Ballet by Ivor Guest, Ballet Magazine, 1997 Babbage's Calculating Machine (1872) – full digital facsimile from Linda Hall Library Author profile in the database zbMATH The 'difference engine' built by Georg & Edvard Scheutz in 1843 1791 births 1871 deaths 19th-century English mathematicians Alumni of Peterhouse, Cambridge Alumni of Trinity College, Cambridge British business theorists Burials at Kensal Green Cemetery Corresponding members of the Saint Petersburg Academy of Sciences English Christians English computer scientists English engineers Fellows of the American Academy of Arts and Sciences Fellows of the Royal Astronomical Society Fellows of the Royal Society of Edinburgh Fellows of the Royal Society Lucasian Professors of Mathematics People educated at Totnes Grammar School People of the Industrial Revolution Recipients of the Gold Medal of the Royal Astronomical Society Mathematicians from London
5700
https://en.wikipedia.org/wiki/Cross-dressing
Cross-dressing
Cross-dressing is the act of wearing clothes traditionally or stereotypically associated with a different gender. From as early as pre-modern history, cross-dressing has been practiced in order to disguise, comfort, entertain, and express oneself. Almost every human society throughout history has had expected norms for each gender relating to style, color, or type of clothing they are expected to wear, and likewise most societies have had a set of guidelines, views or even laws defining what type of clothing is appropriate for each gender. Therefore, cross-dressing allows individuals to express themselves by acting beyond guidelines, views, or even laws defining what type of clothing is expected and appropriate for each gender. The term "cross-dressing" refers to an action or a behavior, without attributing or implying any specific causes or motives for that behavior. Cross-dressing is not synonymous with being transgender. Terminology The phenomenon of cross-dressing is seen throughout recorded history, being referred to as far back as the Hebrew Bible. The terms used to describe it have changed throughout history; the Anglo-Saxon-rooted term "cross-dresser" is viewed more favorably than the Latin-origin term "transvestite" in some circles, where it has come to be seen as outdated and derogatory. Its first mention originated in Magnus Hirschfeld's Die Transvestiten (The Transvestites) in 1910, originally associating cross-dressing with non-heterosexual behavior or derivations of sexual intent. Its connotations largely changed in the 20th century as its use was more frequently associated with sexual excitement, otherwise known as transvestic disorder. This term was historically used to diagnose psychiatric disorders (e.g. transvestic fetishism), but the former (cross-dressing) was coined by the transgender community. The Oxford English Dictionary gives 1911 as the earliest citation of the term "cross-dressing", by Edward Carpenter: "Cross-dressing must be taken as a general indication of, and a cognate phenomenon to, homosexuality". In 1928, Havelock Ellis used the two terms "cross-dressing" and "transvestism" interchangeably. The earliest citations for "cross-dress" and "cross-dresser" are 1966 and 1976, respectively. En femme The term en femme is a lexical borrowing of a French phrase. It is used in the transgender and crossdressing community to describe the act of wearing feminine clothing or expressing a stereotypically feminine personality. The term is borrowed from the modern French phrase en femme meaning "as a woman." Most crossdressers also use a female name whilst en femme; that is their "femme name". In the cross-dressing community the persona a man adopts when he dresses as a woman is known as his "femme self". Between 1987 and 1991, JoAnn Roberts and CDS published a magazine called "En Femme" that was "for the transvestite, transsexual, crossdresser, and female impersonator." En homme The term en homme is an anglicized adaptation of a French phrase. It is used in the transgender and crossdressing community to describe the act of wearing masculine clothing or expressing a stereotypically masculine personality. The term is derived from the modern colloquial French phrase en tant qu'homme meaning "as a man" and the anglicized adaptation en homme literally translates as "in man". Most crossdressers also use a homme (male) name whilst en homme. History Non-Western history Cross-dressing has been practiced throughout much of recorded history, in many societies, and for many reasons. 
Examples exist in Greek, Norse, and Hindu mythology. Cross-dressing can be found in theater and religion, such as kabuki, Noh, and Korean shamanism, as well as in folklore, literature, and music. For instance, in kabuki culture during Japan's Edo period, cross-dressing was used not only for theatrical purposes but also because of contemporary societal trends: cross-dressing and the switching of genders were familiar concepts to the Japanese of the time, which allowed performers to interchange characters' genders easily and to incorporate geisha fashion into men's wear. This was especially common in the retelling of older stories, as with the character Benten from Benten Kozō, a thief who cross-dresses as a woman in the play. Cross-dressing was also exhibited in Japanese Noh for similar reasons. Societal standards of the time blurred the boundaries between the genders. For example, ancient Japanese portraits of aristocrats show no clear differentiation in characteristics between male and female beauty. Thus, in Noh performance, the cross-dressing of actors was common, especially given the ease of disguising biological sex with the use of masks and heavy robes. In a non-entertainment context, cross-dressing is also exhibited in Korean shamanism for religious purposes. Specifically, this is displayed in chaesu-gut, a shamanistic rite (gut) in which a shaman offers a sacrifice to the spirits to intercede in the fortunes of those on whose behalf the gut is held. Here, cross-dressing serves many purposes. Firstly, the shaman (typically a woman) cross-dresses because both male and female spirits can occupy her. This allows her to represent the opposite sex and act as a cross-sex icon for roughly 75% of the ritual. It also allows her to become a sexually liminal being. It is clear that in entertainment, literature, art, and religion, different civilizations have utilized cross-dressing for many different purposes.
Western history
In the British and European context, theatrical troupes ("playing companies") were all-male, with the female parts undertaken by boy players. The Rebecca Riots took place between 1839 and 1843 in West and Mid Wales. They were a series of protests undertaken by local farmers and agricultural workers in response to unfair taxation. The rioters, often men dressed as women, took their actions against toll-gates, as they were tangible representations of high taxes and tolls. The riots ceased prior to 1844 due to several factors, including increased troop levels, a desire by the protestors to avoid violence, and the appearance of criminal groups using the guise of the biblical character Rebecca for their own purposes. In 1844 an Act of Parliament to consolidate and amend the laws relating to turnpike trusts in Wales was passed. A variety of historical figures are known to have cross-dressed to varying degrees. Many women found they had to disguise themselves as men in order to participate in the wider world. For example, it is postulated that Margaret King cross-dressed in the early 19th century to attend medical school, as universities at that time accepted only male students. A century later, Vita Sackville-West dressed as a young soldier in order to "walk out" with her girlfriend Violet Keppel, to avoid the street harassment that two women would have faced. The prohibition on women wearing male garb, once strictly applied, still has echoes today in some Western societies which require girls and women to wear skirts, for example as part of school uniform or office dress codes.
In some countries, even in casual settings, women are still prohibited from wearing traditionally male clothing. Sometimes all trousers, no matter how loose and long, are automatically considered "indecent", which may render their wearer subject to severe punishment, as in the case of Lubna al-Hussein in Sudan in 2009.
Legal issues
In many countries, cross-dressing was illegal under laws that identified it as indecent or immoral. Many such laws were challenged in the late 1900s, giving people the right to freedom of gender expression with regard to their clothing. For instance, from 1840 onward, the United States saw state and city laws forbidding people from appearing in public while dressed in clothes not commonly associated with their assigned sex. The goal of this wave of policies was to create a tool that would enforce a normative gender narrative, targeting multiple gender identities across the gender spectrum. With the progression of time, styles, and societal trends, it became even more difficult to draw the line between what was cross-dressing and what was not. Only recently have these laws changed. As recently as 2011, it was still possible for a man to be arrested for "impersonating a woman", a vestige of the 19th-century laws. Even so, legal issues surrounding cross-dressing persisted throughout the mid-20th century. During this period, police would often cite laws that did not exist, or laws that had been repealed, in order to target the LGBTQ+ community. This extends beyond the United States: there remain 13 UN member states that explicitly criminalize transgender individuals, and even more countries use a wide range of laws to target them. The third edition of the Trans Legal Mapping Report, produced by the International Lesbian, Gay, Bisexual, Trans and Intersex Association, found that an especially common method of targeting these individuals is through cross-dressing regulations. For instance, only in 2014 did an appeal court in Malaysia finally overturn a state law prohibiting Muslim men from cross-dressing as women. In the Australian state of Tasmania, cross-dressing in public was made a criminal offence in 1935, and this law was only repealed in 2000.
Varieties
There are many different kinds of cross-dressing and many different reasons why an individual might engage in cross-dressing behavior. Some people cross-dress as a matter of comfort or style, a personal preference for clothing associated with the opposite gender. Some people cross-dress to shock others or challenge social norms; others will limit their cross-dressing to underwear, so that it is not apparent. Some people attempt to pass as a member of the opposite gender in order to gain access to places or resources they would not otherwise be able to reach.
Theater and performance
Single-sex theatrical troupes often have some performers who cross-dress to play roles written for members of the opposite sex (travesti and trouser roles). Cross-dressing, particularly the depiction of males wearing dresses, is often used for comic effect onstage and on-screen. Boy player refers to children who performed in Medieval and English Renaissance playing companies. Some boy players worked for the adult companies and performed the female roles, as women did not perform on the English stage in this period. Others worked for children's companies in which all roles, not just the female ones, were played by boys.
In an effort to clamp down on kabuki's popularity, women's kabuki, known as , was banned in 1629 in Japan for being too erotic. Following this ban, young boys began performing in , which was also soon banned. Thus adult men play female roles in kabuki. Dan is the general name for female roles in Chinese opera, often referring to leading roles. They may be played by male or female actors. In the early years of Peking opera, all roles were played by men, but this practice is no longer common in any Chinese opera genre. Women have often been excluded from Noh, and men often play female characters in it. Drag is a special form of performance art based on the act of cross-dressing. A drag queen is usually a male-assigned person who performs as an exaggeratedly feminine character, in heightened costuming sometimes consisting of a showy dress, high-heeled shoes, obvious make-up, and wig. A drag queen may imitate famous female film or pop-music stars. A faux queen is a female-assigned person employing the same techniques. A drag king is a counterpart of the drag queen – a female-assigned person who adopts a masculine persona in performance or imitates a male film or pop-music star. Some female-assigned people undergoing gender reassignment therapy also self-identify as 'drag kings'.The modern activity of battle reenactments has raised the question of women passing as male soldiers. In 1989, Lauren Burgess dressed as a male soldier in a U.S. National Park Service reenactment of the Battle of Antietam, and was ejected after she was discovered to be a woman. Burgess sued the Park Service for sexual discrimination. The case spurred spirited debate among Civil War buffs. In 1993, a federal judge ruled in Burgess's favor. "Wigging" refers to the practice of male stunt doubles taking the place of an actress, parallel to "paint downs", where white stunt doubles are made up to resemble black actors. Female stunt doubles have begun to protest this norm of "historical sexism", saying that it restricts their already limited job possibilities. British pantomime, television and comedy Cross-dressing is a traditional popular trope in British comedy. The pantomime dame in British pantomime dates from the 19th century, which is part of the theatrical tradition of female characters portrayed by male actors in drag. Widow Twankey (Aladdin's mother) is a popular pantomime dame: in 2004 Ian McKellen played the role. The Monty Python comedy troupe donned frocks and makeup, playing female roles while speaking in falsetto. Character comics such as Benny Hill and Dick Emery drew upon several female identities. In the BBC's long-running sketch show The Dick Emery Show (broadcast from 1963 to 1981), Emery played Mandy, a busty peroxide blonde whose catchphrase, "Ooh, you are awful ... but I like you!", was given in response to a seemingly innocent remark made by her interviewer, but perceived by her as ribald double entendre. The popular tradition of cross dressing in British comedy extended to the 1984 music video for Queen's "I Want to Break Free" where the band parody several female characters from the soap opera Coronation Street. Sexual fetishes A transvestic fetishist is a person who cross-dresses as part of a sexual fetish. According to the fourth edition of Diagnostic and Statistical Manual of Mental Disorders, this fetishism was limited to heterosexual men; however, DSM-5 does not have this restriction, and opens it to women and men, regardless of their sexual orientation. 
Sometimes either member of a heterosexual couple will cross-dress in order to arouse the other. For example, the male might wear skirts or lingerie and/or the female will wear boxers or other male clothing. (See also forced feminization)
Passing
Some people who cross-dress may endeavor to project a complete impression of belonging to another gender, including mannerisms, speech patterns, and emulation of sexual characteristics. This is referred to as passing or "trying to pass", depending on how successful the person is. An observer who sees through the cross-dresser's attempt to pass is said to have "read" or "clocked" them. There are videos, books, and magazines on how a man may look more like a woman. Others may choose to take a mixed approach, adopting some feminine traits and some masculine traits in their appearance. For instance, a man might wear both a dress and a beard. This is sometimes known as "genderfuck". In a broader context, cross-dressing may also refer to other actions undertaken to pass as a particular sex, such as packing (accentuating the male crotch bulge) or, the opposite, tucking (concealing the male crotch bulge).
Gender disguise
Gender disguise has been used by women and girls to pass as male, and by men and boys to pass as female. Gender disguise has also been used as a plot device in storytelling, particularly in narrative ballads, and is a recurring motif in literature, theater, and film. Historically, some women have cross-dressed to take up male-dominated or male-exclusive professions, such as military service. Conversely, some men have cross-dressed to escape from mandatory military service or as a disguise to assist in political or social protest, as men in Wales did in the Rebecca Riots and when conducting Ceffyl Pren as a form of mob justice.
Sports
Debate surrounding exclusion and inequality in sports has gone on for decades. Alongside the broader fight for equality, a number of notable women have dressed as men or hidden their gender in order to insert themselves into the tightly gatekept world of sports.
Roberta "Bobbi" Gibb
Roberta "Bobbi" Gibb is the first woman to have competed in the Boston Marathon. In 1966, Gibb wrote a letter to the Boston Athletic Association asking to participate in that year's race. The reply informed her that her entry had been denied because of her gender. Refusing to take no for an answer, Gibb decided to run the marathon anyway, hidden as a man. On the day of the race, Gibb showed up in an oversized sweatshirt, her brother's shorts, and men's running shoes. She hid in the bushes until the race started and then joined in with the crowd. Her fellow runners eventually figured out her real gender but said they would make sure she finished the race. Gibb ended up finishing her first Boston Marathon in 3 hours, 27 minutes and 40 seconds, crossing the finish line with blistered, bleeding feet from the men's running shoes she was wearing. Gibb's act of defiance influenced other women marathon runners of the time, such as Kathrine Switzer, who also registered under an alias to be able to run the race in 1967. It was not until 1972 that there was an official women's race within the Boston Marathon.
Sam Kerr
Sam Kerr is a forward for the Australian Women's Soccer Team and Chelsea FC in the FA Women's Super League.
Kerr has been regarded as one of the best forwards in the sport and has been among the most highly paid players in women's soccer. While Kerr now shares the world stage with other great women soccer players, as a young child she shared the field with young boys. Kerr grew up in a suburb of Perth where there was little to no access to girls' youth soccer teams in the immediate area. Not having a girls' team to play on did not stop Kerr; she simply played on a boys' youth team, where all of her teammates assumed she was also a boy. Kerr states in her book My Journey to the World Cup that she continued to hide her gender because she did not want to be treated any differently. In the book Kerr also revealed that when one of her teammates found out that she was, in fact, a girl, he cried. While Kerr's act of hiding her gender was initially an accident, it is still an example of how women (and in this case a young girl) can create opportunities for themselves by looking or acting as a man.
War
One of the most common settings for gender disguise has been war and military service. From Joan of Arc in the 15th century, to Mulan in the animated Disney film Mulan, to young girls in World War II, many people have disguised themselves as men in order to be able to fight in wars.
Joan of Arc
Born c. 1412, St Joan of Arc, or the Maid of Orleans, is one of the oldest examples of gender disguise. After receiving, at around the age of 13, what she described as a revelation that she was to lead the French to victory over the English in the Hundred Years' War, Joan donned the clothing of a male soldier in the French army. Joan was able to convince the Dauphin, the future Charles VII, to allow her to take the lead of some of the French armies in order to help him secure his crown. Ultimately, Joan of Arc achieved significant victories over the English but was captured in 1430 and found guilty of heresy, leading to her execution in 1431. The impact of her actions was felt long after her death. During the suffragette movement, Joan of Arc was used as an inspiration, particularly in Britain, where many used her actions as fuel for their fight for political reform.
Deborah Sampson
Born in 1760 in Plympton, Massachusetts, Deborah Sampson was the first female soldier in the US Army. The only woman in the Revolution to receive a full military pension, at age 18 Deborah took the name "Robert Shirtleff" and enlisted in the Continental Army. In her capacity as a soldier she was very successful, being named captain and leading an infantry unit in the capture of 15 enemy soldiers, among other things. One and a half years into her service, her true sex was revealed when she had to receive medical care. Following an honorable discharge, Deborah petitioned Congress for her full pay, which had been withheld on the grounds of her being an "invalid soldier", and eventually received it. She died in 1827 at age 66. Even after her death, Deborah Sampson continues to be regarded as a "hero of the American Revolution". In 2019, a diary kept by Corporal Abner Weston revealed Deborah Sampson's previously unknown first attempt to enlist in the Continental Army. These women are just a few among the many who have disguised themselves as men in order to fight in wars. Others who have used gender disguise for this purpose include Kit Kavanaugh/Christian Davies, Hannah Snell, Sarah Emma Edmonds, Frances Clayton, Dorothy Lawrence, Zoya Smirnow, Brita Olofsdotter, and Margaret Ann Bulkly/James Barry.
Journalism and culture
In some instances, women in journalism have deemed it necessary to assume the identity of a man in order to gather information that is only accessible from the male point of view. In other cases, people cross-dress to cope with strict cultural norms and expectations.
Norah Vincent
Norah Vincent, author of the book Self-Made Man: One Woman's Journey Into Manhood and Back Again, used gender disguise to go undercover as a man, penetrate men's social circles, and experience life as a man. In 2003, Vincent put her life on pause to adopt a new masculine identity as Ned Vincent. She worked with a makeup artist and vocal coach in order to convincingly play the role of a biological man. She wore an undersized sports bra, a stuffed jock strap, and size 11½ shoes to deceive those around her. In her book, Vincent makes discoveries about socialization, romance, sex, and stress as a man that lead her to conclude that "[Men] have different problems than women have, but they don't have it better." However, Vincent developed controversial opinions about sex and gender, claiming that transgender people are not legitimate until they undergo hormone therapy and surgical intervention. After writing Self-Made Man, Vincent suffered from depression; she died by medically assisted suicide in 2022.
Bacha posh
Bacha posh, an Afghan tradition, involves the cross-dressing of young Afghan girls by their families so that they present to the public as boys. Families engage in bacha posh so that their daughters may avoid the oppression that women face under Afghanistan's deeply patriarchal society. Other reasons for having a bacha posh daughter include economic pressure, as girls and women are generally prohibited from work in contemporary Afghanistan, and social pressure, as families with boys tend to be better regarded in Afghan society. While there is no law that prohibits bacha posh, girls are expected to revert to traditional gender norms upon reaching puberty. According to Thomas Barfield, an anthropology professor at Boston University, bacha posh is "one of the most under-investigated" topics in the realm of gender studies, making it difficult to determine exactly how common the practice is in Afghan society. However, some prominent female figures in Afghan society have admitted to being bacha posh in their youth. A well-known example is Afghan parliament member Azita Rafaat. Rafaat claims that bacha posh was a positive experience that built her self-confidence in Afghanistan's heavily patriarchal society and gave her a more well-rounded understanding of women's issues in Afghanistan.
In most parts of the world, it remains socially frowned upon for men to wear clothes traditionally associated with women. Attempts are occasionally made, e.g. by fashion designers, to promote the acceptance of skirts as everyday wear for men. Cross-dressers have complained that society permits women to wear pants or jeans and other masculine clothing, while condemning any man who wants to wear clothing sold for women. While creating a more feminine figure, male cross-dressers will often utilize different types and styles of breast forms, which are silicone or foam prostheses traditionally used by women who have undergone mastectomies to recreate the visual appearance of a breast. Some male cross-dressers may also use hip or butt pads to create a profile that appears more stereotypically feminine. While most male cross-dressers utilize clothing associated with modern women, some are involved in subcultures that involve dressing as little girls or in vintage clothing. Some such men have written that they enjoy dressing as femininely as possible, so they wear frilly dresses with lace and ribbons, bridal gowns complete with veils, as well as multiple petticoats, corsets, girdles and/or garter belts with nylon stockings. The term underdressing is used by male cross-dressers to describe wearing female undergarments such as panties under their male clothes. The famous low-budget film-maker Edward D. Wood, Jr. (who also went out in public dressed in drag as "Shirley", his female alter ego) said he often wore women's underwear under his military uniform as a Marine during World War II. Female masking is a form of cross-dressing in which men wear masks that present them as female.
Social issues
Cross-dressers may begin wearing clothing associated with the opposite sex in childhood, using the clothes of a sibling, parent, or friend. Some parents have said they allowed their children to cross-dress and, in many cases, the child stopped when they became older. The same pattern often continues into adulthood, where there may be confrontations with a spouse, partner, family member or friend. Married cross-dressers can experience considerable anxiety and guilt if their spouse objects to their behavior. Sometimes, because of guilt or other reasons, cross-dressers dispose of all their clothing, a practice called "purging", only to start collecting the other gender's clothing again.
Festivals
Celebrations of cross-dressing occur in cultures around the world. The Abissa festival in Côte d'Ivoire, Ofudamaki in Japan, and the Kottankulangara Festival in India are all examples of this.
Analysis
Advocacy for social change has done much to relax the constrictions of gender roles on men and women, but cross-dressers are still subject to prejudice from some people. It is noticeable that as being transgender becomes more socially accepted as a normal human condition, the prejudices against cross-dressing are changing quite quickly, just as similar prejudices against homosexuals have changed rapidly in recent decades. The reason it is so hard to obtain statistics for female cross-dressers is that the line where cross-dressing stops and ordinary dressing begins has become blurred for women, whereas the same line for men is as well defined as ever. This is one of the many issues being addressed by third-wave feminism as well as the modern-day masculist movement. The general culture has very mixed views about cross-dressing.
A woman who wears her husband's shirt to bed is considered attractive, while a man who wears his wife's nightgown to bed may be considered transgressive. Marlene Dietrich in a tuxedo was considered very erotic; Jack Lemmon in a dress was considered ridiculous. All this may result from an overall gender role rigidity for males; that is, because of the prevalent gender dynamic throughout the world, men frequently encounter discrimination when deviating from masculine gender norms, particularly violations of heteronormativity. A man's adoption of feminine clothing is often considered a going down in the gendered social order whereas a woman's adoption of what are traditionally men's clothing (at least in the English-speaking world) has less of an impact because women have been traditionally subordinate to men, unable to affect serious change through style of dress. Thus when a male cross-dresser puts on his clothes, he transforms into the quasi-female and thereby becomes an embodiment of the conflicted gender dynamic. Following the work of Judith Butler, gender proceeds along through ritualized performances, but in male cross-dressing it becomes a performative "breaking" of the masculine and a "subversive repetition" of the feminine. Psychoanalysts today do not regard cross-dressing by itself as a psychological problem, unless it interferes with a person's life. "For instance," said Joseph Merlino, senior editor of Freud at 150: 21st Century Essays on a Man of Genius, "[suppose that]...I'm a cross-dresser and I don't want to keep it confined to my circle of friends, or my party circle, and I want to take that to my wife and I don't understand why she doesn't accept it, or I take it to my office and I don't understand why they don't accept it, then it's become a problem because it's interfering with my relationships and environment." Cross-dressing in the 21st century Fashion trends Cross-dressing today is much more common and normalized thanks to trends such as camp fashion and androgynous fashion. These trends have long histories but have recently been popularized thanks to major designers, fashion media, and celebrities today. Camp is a style of fashion that has had a long history extending all the way back to the Victorian era to the modern era. During the Victorian era up until the mid-20th century, it was defined as an exaggerated and flamboyant style of dressing. This was typically associated with ideas of effeminacy, de-masculization, and homosexuality. As the trend entered the 20th century, it also developed an association with a lack of conduct, creating the connotation that those who engaged in Camp are unrefined, improper, distasteful, and, essentially, undignified. Though this was its former understanding, Camp has now developed a new role in the fashion industry. It is considered a fashion style that has "failed seriousness" and has instead become a fun way of self-expression. Thanks to its integration with high fashion and extravagance, Camp is now seen as a high art form of absurdity: including loud, vibrant, bold, fun, and empty frivolity. Camp is often used in drag culture as a method of exaggerating or inversing traditional conceptions of what it means to be feminine. In actuality, the QTPOC community has had a large impact on Camp. This is exhibited by ballroom culture, camp/glamour queens, Black '70s funk, Caribbean Carnival costumes, Blaxploitation movies, "pimp/player fashion", and more. 
This notion has also been materialized by camp icons such as Josephine Baker and RuPaul. Androgynous fashion is described as neither masculine nor feminine rather it is the embodiment of a gender inclusive and sexually neutral fashion of expression. The general understanding of androgynous fashion is mixing both masculine and feminine pieces with the goal of producing a look that has no visual differentiations between one gender or another. This look is achieved by masking the general body so that one cannot identify the biological sex of an individual given the silhouette of the clothing pieces: Therefore, many androgynous looks include looser, baggier clothing that can conceal curves in the female body or using more "feminine" fabrics and prints for men. Both of these style forms have been normalized and popularized by celebrities such as Harry Styles, Timothée Chalamet, Billie Eilish, Princess Diana, and more. Societal changes Beyond fashion, cross-dressing in non-Western countries have not fully outgrown the negative connotations that it has in the West. For instance, many Eastern and Southeastern Asian countries have a narrative of discrimination and stigma against LGBTQ and cross-dressing individuals. This is especially evident in the post-pandemic world. During this time, it was clear to see the failures of these governments to provide sufficient support to these individuals due to a lack of legal services, lack of job opportunity, and more. For instance, to be able to receive government aid, these individuals need to be able to quickly change their legal name, gender, and other information on official ID documents. This fault augmented the challenges of income loss, food insecurity, safe housing, healthcare, and more for many trans and cross-dressing individuals. This was especially pertinent as many of these individuals relied on entertainment and sex work for income. With the pandemic removing these job opportunities, the stigmatisation and discrimination against these individuals only increased, especially in Southeast Asian countries. On the other hand, some Asian countries have grown to be more accepting of cross-dressing as modernization has increased. For instance, among Japan's niche communities there exists the otokonoko. This is a group of male-assigned individuals who engage in female cross-dressing as a form of gender expression. This trend originated with manga and grew with an increase in maid cafes, cosplaying, and more in the 2010s. With the normalization of this through cosplay, cross-dressing has become a large part of otaku and anime culture. Across media Women dressed as men, and less often men dressed as women, is a common trope in fiction and folklore. For example, in Norse myth, Thor disguised himself as Freya. These disguises were also popular in Gothic fiction, such as in works by Charles Dickens, Alexandre Dumas, père, and Eugène Sue, and in a number of Shakespeare's plays, such as Twelfth Night. In The Wind in the Willows, Toad dresses as a washerwoman, and in The Lord of the Rings, Éowyn pretends to be a man. In science fiction, fantasy and women's literature, this literary motif is occasionally taken further, with literal transformation of a character from male to female or vice versa. Virginia Woolf's Orlando: A Biography focuses on a man who becomes a woman, as does a warrior in Peter S. Beagle's The Innkeeper's Song; while in Geoff Ryman's The Warrior Who Carried Life, Cara magically transforms herself into a man. 
Other popular examples of gender disguise include Madame Doubtfire (published as Alias Madame Doubtfire in the United States) and its movie adaptation Mrs. Doubtfire, featuring a man disguised as a woman. Similarly, the movie Tootsie features Dustin Hoffman disguised as a woman, while the movie The Associate features Whoopi Goldberg disguised as a man. Medical views The 10th edition of the International Statistical Classification of Diseases and Related Health Problems lists dual-role transvestism (non-sexual cross-dressing) and fetishistic transvestism (cross-dressing for sexual pleasure) as disorders. Both listings were removed for the 11th edition. Transvestic fetishism is a paraphilia and a psychiatric diagnosis in the DSM-5 version of the Diagnostic and Statistical Manual of Mental Disorders. See also Androgyny Breeches role Breeching (boys) Cross-dressing ball Cross-gender acting Drag (clothing) Effeminacy Femme Femminiello Gender bender Gender identity Gender variance List of transgender-related topics List of transgender-rights organizations List of wartime crossdressers Otokonoko, male crossdressing in Japan Queer heterosexuality Sex and gender distinction Social construction of gender Sexual orientation hypothesis Transvestism Travesti (theatre) Tri-Ess Womanless wedding Notes References Further reading Anders, Charles. The Lazy Crossdresser, Greenery Press, 2002. . Boyd, Helen. My Husband Betty, Thunder's Mouth Press, 2003 Clute, John & Grant, John. The Encyclopedia of Fantasy, Orbit Books, 1997. "Lynne". "A Cross-Dressing-Perspective" External links The Gender Centre (Australia) Crossdressing Support Group (Canada) The EnFemme Archives En Femme Magazine No. 1, Digital Transgender Archive Clothing controversies
5702
https://en.wikipedia.org/wiki/Channel%20Tunnel
Channel Tunnel
The Channel Tunnel, also known as the Chunnel, is an underwater railway tunnel, roughly 50 kilometres long, that connects Folkestone (Kent, England) with Coquelles (Pas-de-Calais, France) beneath the English Channel at the Strait of Dover. It is the only fixed link between the island of Great Britain and the European mainland. At its lowest point, it is about 75 metres below the sea bed and about 115 metres below sea level. Its underwater section, at about 37.9 kilometres, is the longest of any tunnel in the world, and it is the third longest railway tunnel in the world. The speed limit for trains through the tunnel is 160 km/h (about 100 mph). The tunnel is owned and operated by the company Getlink, formerly "Groupe Eurotunnel". The tunnel carries high-speed Eurostar passenger trains, LeShuttle services for road vehicles, and freight trains. It connects end-to-end with high-speed railway lines: the LGV Nord in France and High Speed 1 in England. In 2017, rail services carried 10.3 million passengers and 1.22 million tonnes of freight, and the Shuttle carried 10.4 million passengers, 2.6 million cars, 51,000 coaches, and 1.6 million lorries (equivalent to 21.3 million tonnes of freight), compared with 11.7 million passengers, 2.6 million lorries and 2.2 million cars by sea through the Port of Dover. Plans to build a cross-Channel fixed link appeared as early as 1802, but British political and media pressure, motivated by fears of compromising national security, disrupted attempts to build one. An early unsuccessful attempt was made in the late 19th century, on the English side, "in the hope of forcing the hand of the English Government". The eventual successful project, organised by Eurotunnel, began construction in 1988 and opened in 1994. Estimated to cost £5.5 billion in 1985, it was at the time the most expensive construction project ever proposed. The cost finally amounted to £9 billion, well over budget. Since its opening, the tunnel has experienced mechanical problems. Both fires and cold weather have temporarily disrupted its operation. Since at least 1997, aggregations of migrants around Calais seeking entry to the United Kingdom, such as through the tunnel, have prompted deterrence and countermeasures.
Origins
Earlier proposals
In 1802, Albert Mathieu-Favier, a French mining engineer, put forward a proposal to tunnel under the English Channel, with illumination from oil lamps, horse-drawn coaches, and an artificial island positioned mid-Channel for changing horses. His design envisaged a bored two-level tunnel with the top tunnel used for transport and the bottom one for groundwater flows. In 1839, Aimé Thomé de Gamond, a Frenchman, performed the first geological and hydrographical surveys on the Channel between Calais and Dover. He explored several schemes and, in 1856, presented a proposal to Napoleon III for a mined railway tunnel from Cap Gris-Nez to East Wear Point with a port/airshaft on the Varne sandbank at a cost of 170 million francs, or less than £7 million. In 1865, a deputation led by George Ward Hunt proposed the idea of a tunnel to the Chancellor of the Exchequer of the day, William Ewart Gladstone. In 1866, Henry Marc Brunel made a survey of the floor of the Strait of Dover. By his results, he proved that the floor was composed of chalk, like the adjoining cliffs, and thus that a tunnel was feasible. For this survey, he invented the gravity corer, which is still used in geology. Around 1866, William Low and Sir John Hawkshaw promoted tunnel ideas, but apart from preliminary geological studies, none were implemented.
An official Anglo-French protocol was established in 1876 for a cross-Channel railway tunnel. In 1881, British railway entrepreneur Sir Edward Watkin and Alexandre Lavalley, a French Suez Canal contractor, were in the Anglo-French Submarine Railway Company that conducted exploratory work on both sides of the Channel. From June 1882 to March 1883, the British tunnel boring machine tunneled, through chalk, a total of , while Lavalley used a similar machine to drill from Sangatte on the French side. However, the cross-Channel tunnel project was abandoned in 1883, despite this success, after fears raised by the British military that an underwater tunnel might be used as an invasion route. Nevertheless, in 1883, this TBM was used to bore a railway ventilation tunnel— in diameter and long—between Birkenhead and Liverpool, England, through sandstone under the Mersey River. These early works were encountered more than a century later during the TML project. A 1907 film, Tunnelling the English Channel by pioneer filmmaker Georges Méliès, depicts King Edward VII and President Armand Fallières dreaming of building a tunnel under the English Channel. In 1919, during the Paris Peace Conference, British prime minister David Lloyd George repeatedly brought up the idea of a Channel tunnel as a way of reassuring France about British willingness to defend against another German attack. The French did not take the idea seriously, and nothing came of the proposal. In the 1920s, Winston Churchill advocated for the Channel Tunnel, using that exact name in his essay "Should Strategists Veto The Tunnel?" It was published on 27 July 1924 in the Weekly Dispatch, and argued vehemently against the idea that the tunnel could be used by a Continental enemy in an invasion of Britain. Churchill expressed his enthusiasm for the project again in an article for the Daily Mail on 12 February 1936, "Why Not A Channel Tunnel?" There was another proposal in 1929, but nothing came of this discussion and the idea was shelved. Proponents estimated the construction cost at US$150 million. The engineers had addressed the concerns of both nations' military leaders by designing two sumps—one near the coast of each country—that could be flooded at will to block the tunnel but this did not appease military leaders, or dispel concerns about hordes of tourists who would disrupt English life. Military fears continued during the Second World War. After the fall of France, as Britain prepared for an expected German invasion, a Royal Navy officer in the Directorate of Miscellaneous Weapons Development calculated that Hitler could use slave labour to build two Channel tunnels in 18 months. The estimate caused rumours that Germany had already begun digging. A British film from Gaumont Studios, The Tunnel (also called TransAtlantic Tunnel), was released in 1935 as a science-fiction project concerning the creation of a transatlantic tunnel. It referred briefly to its protagonist, a Mr. McAllan, as having completed a British Channel tunnel successfully in 1940, five years into the future of the film's release. By 1955, defence arguments had become less relevant due to the dominance of air power, and both the British and French governments supported technical and geological surveys. In 1958 the 1881 workings were cleared in preparation for a £100,000 geological survey by the Channel Tunnel Study Group. 
30% of the funding came from Channel Tunnel Co Ltd, the largest shareholder of which was the British Transport Commission, as successor to the South Eastern Railway. A detailed geological survey was carried out in 1964 and 1965. Although the two countries agreed to build a tunnel in 1964, the phase 1 initial studies and signing of a second agreement to cover phase 2 took until 1973. The plan described a government-funded project to create two tunnels to accommodate car shuttle wagons on either side of a service tunnel. Construction started on both sides of the Channel in 1974. On 20 January 1975, to the dismay of their French partners, the then-governing Labour Party in Britain cancelled the project due to uncertainty about EEC membership, the doubling of cost estimates and the general economic crisis at the time. By this time the British tunnel boring machine was ready and the Ministry of Transport had conducted an experimental drive. (This short tunnel, called Adit A1, was eventually reused as the starting and access point for tunnelling operations from the British side, and remains an access point to the service tunnel.) The cancellation costs were estimated at £17 million. On the French side, a tunnel-boring machine had been installed underground in a stub tunnel. It lay there for 14 years until 1988, when it was sold, dismantled, refurbished and shipped to Turkey, where it was used to drive the Moda tunnel for the Istanbul Sewerage Scheme. Initiation of project In 1979, the "Mouse-hole Project" was suggested when the Conservatives came to power in Britain. The concept was a single-track rail tunnel with a service tunnel but without shuttle terminals. The British government took no interest in funding the project, but British Prime Minister Margaret Thatcher did not object to a privately funded project, although she said she assumed it would be for cars rather than trains. In 1981, Thatcher and French president François Mitterrand agreed to establish a working group to evaluate a privately funded project. In June 1982 the Franco-British study group favoured a twin tunnel to accommodate conventional trains and a vehicle shuttle service. In April 1985 promoters were invited to submit scheme proposals. Four submissions were shortlisted: Channel Tunnel, a rail proposal based on the 1975 scheme presented by Channel Tunnel Group/France–Manche (CTG/F–M); Eurobridge, a suspension bridge with a series of spans carrying a roadway in an enclosed tube; Euroroute, a tunnel between artificial islands approached by bridges; and Channel Expressway, a set of large-diameter road tunnels with mid-Channel ventilation towers. The cross-Channel ferry industry protested under the name "Flexilink". In 1975 there was no campaign protesting a fixed link, with one of the largest ferry operators (Sealink) being state-owned. Flexilink continued rousing opposition throughout 1986 and 1987. Public opinion strongly favoured a drive-through tunnel, but concerns about ventilation, accident management and driver mesmerisation led to the only shortlisted rail submission, CTG/F-M, being awarded the project in January 1986. Reasons given for the selection included that it caused least disruption to shipping in the Channel and least environmental disruption, was the best protected against terrorism, and was the most likely to attract sufficient private finance. 
Arrangement The British Channel Tunnel Group consisted of two banks and five construction companies, while their French counterparts, France–Manche, consisted of three banks and five construction companies. The banks' role was to advise on financing and secure loan commitments. On 2 July 1985, the groups formed Channel Tunnel Group/France–Manche (CTG/F–M). Their submission to the British and French governments was drawn from the 1975 project, including 11 volumes and a substantial environmental impact statement. The Anglo-French Treaty on the Channel Tunnel was signed by both governments in Canterbury Cathedral. The Treaty of Canterbury (1986) prepared the Concession for the construction and operation of the Fixed Link by privately owned companies and outlined arbitration methods to be used in the event of disputes. It set up the Intergovernmental Commission (IGC), responsible for monitoring all matters associated with the Tunnel's construction and operation on behalf of the British and French governments, and a Safety Authority to advise the IGC. It drew a land frontier between the two countries in the middle of the Channel tunnel—the first of its kind. Design and construction were done by the ten construction companies in the CTG/F-M group. The French terminal and boring from Sangatte were done by the five French construction companies in the joint venture group GIE Transmanche Construction. The English Terminal and boring from Shakespeare Cliff were done by the five British construction companies in the Translink Joint Venture. The two partnerships were linked by a bi-national project organisation, TransManche Link (TML). The Maître d'Oeuvre was a supervisory engineering body employed by Eurotunnel under the terms of the concession that monitored the project and reported to the governments and banks. In France, with its long tradition of infrastructure investment, the project had widespread approval. The French National Assembly approved it unanimously in April 1987, and after a public inquiry, the Senate approved it unanimously in June. In Britain, select committees examined the proposal, making history by holding hearings away from Westminster, in Kent. In February 1987, the third reading of the Channel Tunnel Bill took place in the House of Commons, and passed by 94 votes to 22. The Channel Tunnel Act gained Royal assent and passed into law in July. Parliamentary support for the project came partly from provincial members of Parliament on the basis of promises of regional Eurostar through train services that never materialised; the promises were repeated in 1996 when the contract for construction of the Channel Tunnel Rail Link was awarded. Cost The tunnel is a build-own-operate-transfer (BOOT) project with a concession. TML would design and build the tunnel, but financing was through a separate legal entity, Eurotunnel. Eurotunnel absorbed CTG/F-M and signed a construction contract with TML, but the British and French governments controlled final engineering and safety decisions, now in the hands of the Channel Tunnel Safety Authority. The British and French governments gave Eurotunnel a 55-year operating concession (from 1987; extended by 10 years to 65 years in 1993) to repay loans and pay dividends. A Railway Usage Agreement was signed between Eurotunnel, British Rail and SNCF guaranteeing future revenue in exchange for the railways obtaining half of the tunnel's capacity. Private funding for such a complex infrastructure project was of unprecedented scale. 
Initial equity of £45 million was raised by CTG/F-M and increased by a £206 million private institutional placement; £770 million was raised in a public share offer that included press and television advertisements; and a syndicated bank loan and letter of credit arranged £5 billion. The project was privately financed, with total investment costs at 1985 prices of £2.6 billion. At the 1994 completion actual costs were, in 1985 prices, £4.65 billion: an 80% cost overrun. The cost overrun was partly due to enhanced safety, security, and environmental demands. Financing costs were 140% higher than forecast. Construction Working from both the English and French sides of the Channel, eleven tunnel boring machines or TBMs cut through chalk marl to construct two rail tunnels and a service tunnel. The vehicle shuttle terminals are at Cheriton (part of Folkestone) and Coquelles, and are connected to the English M20 and French A16 motorways respectively. Tunnelling commenced in 1988, and the tunnel began operating in 1994. In 1985 prices, the total construction cost was £4.65 billion (equivalent to £ billion in 2015), an 80% cost overrun. At the peak of construction 15,000 people were employed with daily expenditure over £3 million. Ten workers, eight of them British, were killed during construction between 1987 and 1993, most in the first few months of boring. Completion A diameter pilot hole allowed the service tunnel to break through without ceremony on 30 October 1990. On 1 December 1990, Englishman Graham Fagg and Frenchman Philippe Cozette broke through the service tunnel with the media watching. Eurotunnel completed the tunnel on time. (A BBC television commentator called Graham Fagg "the first man to cross the Channel by land for 8000 years".) The two tunnelling efforts met each other with an offset of only . A Paddington Bear soft toy was chosen by British tunnellers as the first item to pass through to their French counterparts when the two sides met. The tunnel was officially opened, one year later than originally planned, by Queen Elizabeth II and the French president, François Mitterrand, in a ceremony held in Calais on 6 May 1994. The Queen travelled through the tunnel to Calais on a Eurostar train, which stopped nose to nose with the train that carried President Mitterrand from Paris. Following the ceremony President Mitterrand and the Queen travelled on Le Shuttle to a similar ceremony in Folkestone. A full public service did not start for several months. The first freight train, however, ran on 1 June 1994 and carried Rover and Mini cars being exported to Italy. The Channel Tunnel Rail Link (CTRL), now called High Speed 1, runs from St Pancras railway station in London to the tunnel portal at Folkestone in Kent. It cost £5.8 billion. On 16 September 2003 the prime minister, Tony Blair, opened the first section of High Speed 1, from Folkestone to north Kent. On 6 November 2007, the Queen officially opened High Speed 1 and St Pancras International station, replacing the original slower link to Waterloo International railway station. High Speed 1 trains travel at up to , the journey from London to Paris taking 2 hours 15 minutes, to Brussels 1 hour 51 minutes. In 1994, the American Society of Civil Engineers elected the tunnel as one of the seven modern Wonders of the World. In 1995, the American magazine Popular Mechanics published the results. 
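As a rough cross-check of the overrun figures quoted above, using only the rounded totals given in this article rather than Eurotunnel's own accounts, the 1985-price outturn against the 1985-price budget works out to

\[
\frac{\pounds 4.65\ \text{billion} - \pounds 2.6\ \text{billion}}{\pounds 2.6\ \text{billion}} \approx 0.79 \approx 80\%,
\]

which is consistent with the stated 80% cost overrun; the 140% figure applies to the financing element alone.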
Opening dates The opening was phased for the various services offered, as the Channel Tunnel Safety Authority and the IGC gave permission for services to begin at several dates over the period 1994/1995, with actual start-up dates a few days later. Engineering Site investigation undertaken in the 20 years before construction confirmed earlier speculations that a tunnel could be bored through a chalk marl stratum. The chalk marl is conducive to tunnelling, with impermeability, ease of excavation and strength. The chalk marl runs along the entire length of the English side of the tunnel, but on the French side a length of has variable and difficult geology. The tunnel consists of three bores: two diameter rail tunnels, apart, in length with a diameter service tunnel in between. The three bores are connected by cross-passages and piston relief ducts. The service tunnel was used as a pilot tunnel, boring ahead of the main tunnels to determine the conditions. English access was provided at Shakespeare Cliff and French access from a shaft at Sangatte. The French side used five tunnel boring machines (TBMs), and the English side six. The service tunnel uses the Service Tunnel Transport System (STTS) and Light Service Tunnel Vehicles (LADOGS). Fire safety was a critical design issue. Between the portals at Beussingue and Castle Hill the tunnel is long, with under land on the French side and on the UK side, and under sea. It is the third-longest rail tunnel in the world, behind the Gotthard Base Tunnel in Switzerland and the Seikan Tunnel in Japan, but with the longest under-sea section. The average depth is below the seabed. On the UK side, of the expected of spoil approximately was used for fill at the terminal site, and the remainder was deposited at Lower Shakespeare Cliff behind a seawall, reclaiming of land. This land was then made into the Samphire Hoe Country Park. Environmental impact assessment did not identify any major risks for the project, and further studies into safety, noise, and air pollution were overall positive. However, environmental objections were raised over a high-speed link to London. Geology Successful tunnelling required a sound understanding of topography and geology and the selection of the best rock strata through which to dig. The geology of this site generally consists of northeasterly dipping Cretaceous strata, part of the northern limb of the Wealden-Boulonnais dome. Characteristics include continuous chalk on the cliffs on either side of the Channel containing no major faulting, as observed by Verstegan in 1605, and four geological strata, marine sediments laid down 90–100 million years ago: pervious upper and middle chalk above slightly pervious lower chalk and, finally, impermeable Gault Clay. A sandy stratum, glauconitic marl (tortia), lies in between the chalk marl and gault clay. A layer of chalk marl (French: craie bleue) in the lower third of the lower chalk appeared to present the best tunnelling medium. The chalk has a clay content of 30–40%, providing impermeability to groundwater yet relatively easy excavation, with strength allowing minimal support. Ideally, the tunnel would be bored in the bottom of the chalk marl, allowing water inflow from fractures and joints to be minimised, but above the gault clay, which would increase stress on the tunnel lining and would swell and soften when wet. On the English side, the stratum dip is less than 5°; on the French side, this increases to 20°. Jointing and faulting are present on both sides. 
On the English side, only minor faults of displacement less than exist; on the French side, displacements of up to are present owing to the Quenocs anticlinal fold. The faults are of limited width, filled with calcite, pyrite and remolded clay. The increased dip and faulting restricted the selection of routes on the French side. To avoid confusion, microfossil assemblages were used to classify the chalk marl. On the French side, particularly near the coast, the chalk was harder, more brittle and more fractured than on the English side. This led to the adoption of different tunnelling techniques on the two sides. The Quaternary undersea valley Fosse Dangeard, and Castle Hill landslip at the English portal, caused concerns. Identified by the 1964–65 geophysical survey, the Fosse Dangeard is an infilled valley system extending below the seabed, south of the tunnel route in mid-channel. A 1986 survey showed that a tributary crossed the path of the tunnel, and so the tunnel route was made as far north and deep as possible. The English terminal had to be located in the Castle Hill landslip, which consists of displaced and tipping blocks of lower chalk, glauconitic marl and gault debris. Thus the area was stabilised by buttressing and inserting drainage adits. The service tunnel acted as a pilot preceding the main ones, so that the geology, areas of crushed rock, and zones of high water inflow could be predicted. Exploratory probing took place in the service tunnel, in the form of extensive forward probing, vertical downward probes and sideways probing. Site investigation Marine soundings and samplings by Thomé de Gamond were carried out during 1833–67, establishing the seabed depth at a maximum of and the continuity of geological strata (layers). Surveying continued over many years, with 166 marine and 70 land-deep boreholes being drilled and over 4,000 line kilometres of the marine geophysical survey completed. Surveys were undertaken in 1958–1959, 1964–1965, 1972–1974 and 1986–1988. The surveying in 1958–59 catered for immersed tube and bridge designs, as well as a bored tunnel, and thus a wide area was investigated. At this time, marine geophysics surveying for engineering projects was in its infancy, with poor positioning and resolution from seismic profiling. The 1964–65 surveys concentrated on a northerly route that left the English coast at Dover harbour; using 70 boreholes, an area of deeply weathered rock with high permeability was located just south of Dover harbour. Given the previous survey results and access constraints, a more southerly route was investigated in the 1972–73 survey, and the route was confirmed to be feasible. Information for the tunnelling project also came from work before the 1975 cancellation. On the French side at Sangatte, a deep shaft with adits was made. On the English side at Shakespeare Cliff, the government allowed of diameter tunnel to be driven. The actual tunnel alignment, method of excavation and support were essentially the same as the 1975 attempt. In the 1986–87 survey, previous findings were reinforced, and the characteristics of the gault clay and the tunnelling medium (chalk marl that made up 85% of the route) were investigated. Geophysical techniques from the oil industry were employed. Tunnelling Tunnelling was a major engineering challenge, with the only precedent being the undersea Seikan Tunnel in Japan, which opened in 1988. 
A serious health and safety risk with building tunnels underwater is major water inflow due to the high hydrostatic pressure from the sea above, under weak ground conditions. The tunnel also had the challenge of time: because the project was privately funded, early financial return was paramount. The objective was to construct two rail tunnels, apart, in length; a service tunnel between the two main ones; pairs of -diameter cross-passages linking the rail tunnels to the service one at spacing; piston relief ducts in diameter connecting the rail tunnels apart; two undersea crossover caverns to connect the rail tunnels, with the service tunnel always preceding the main ones by at least to ascertain the ground conditions. There was plenty of experience with excavating through chalk in the mining industry, while the undersea crossover caverns were a complex engineering problem. The French one was based on the Mount Baker Ridge freeway tunnel in Seattle; the UK cavern was dug from the service tunnel ahead of the main ones, to avoid delay. Precast segmental linings were used in the main TBM drives, but with two different solutions. On the French side, neoprene and grout sealed bolted linings made of cast iron or high-strength reinforced concrete were used; on the English side, the main requirement was for speed, so bolting of cast-iron lining segments was only carried out in areas of poor geology. In the UK rail tunnels, eight lining segments plus a key segment were used; on the French side, five segments plus a key. On the French side, a diameter deep grout-curtained shaft at Sangatte was used for access. On the English side, a marshalling area was below the top of Shakespeare Cliff; the New Austrian Tunnelling Method (NATM) was first applied in the chalk marl here. On the English side, the land tunnels were driven from Shakespeare Cliff—the same place as the marine tunnels—not from Folkestone. The platform at the base of the cliff was not large enough for all of the drives and, despite environmental objections, tunnel spoil was placed behind a reinforced concrete seawall, on condition of placing the chalk in an enclosed lagoon, to avoid wide dispersal of chalk fines. Owing to limited space, the precast lining factory was on the Isle of Grain in the Thames estuary, which used Scottish granite aggregate delivered by ship from the Foster Yeoman coastal super quarry at Glensanda in Loch Linnhe on the west coast of Scotland. On the French side, owing to the greater permeability to water, earth pressure balance TBMs with open and closed modes were used. The TBMs operated in closed mode during the initial , but then operated in open mode, boring through the chalk marl stratum. This minimised the impact on the ground, allowed high water pressures to be withstood, and alleviated the need to grout ahead of the tunnel. The French effort required five TBMs: two main marine machines, one mainland machine (the short land drives of allowed one TBM to complete the first drive then reverse direction and complete the other), and two service tunnel machines. On the English side, the simpler geology allowed faster open-faced TBMs. Six machines were used; all commenced digging from Shakespeare Cliff, three marine-bound and three for the land tunnels. Towards the completion of the undersea drives, the UK TBMs were driven steeply downwards and buried clear of the tunnel. These buried TBMs were then used to provide an electrical earth. The French TBMs then completed the tunnel and were dismantled. 
A gauge railway was used on the English side during construction. In contrast to the English machines, which were given technical names, the French tunnelling machines were all named after women: Brigitte, Europa, Catherine, Virginie, Pascaline, Séverine. At the end of the tunnelling, one machine was on display at the side of the M20 motorway in Folkestone until Eurotunnel sold it on eBay for £39,999 to a scrap metal merchant. Another machine (T4 "Virginie") still survives on the French side, adjacent to Junction 41 on the A16, in the middle of the D243E3/D243E4 roundabout. On it are the words "hommage aux bâtisseurs du tunnel", meaning "tribute to the builders of the tunnel". Tunnel boring machines The eleven tunnel boring machines were designed and manufactured through a joint venture between the Robbins Company of Kent, Washington, United States; Markham & Co. of Chesterfield, England; and Kawasaki Heavy Industries of Japan. The TBMs for the service tunnels and main tunnels on the UK side were designed and manufactured by James Howden & Company Ltd, Scotland. Railway design Loading gauge The loading gauge height is . Communications There are three communication systems: Concession radio (CR), for mobile vehicles and personnel within Eurotunnel's Concession (terminals, tunnels, coastal shafts); Track-to-train radio (TTR), for secure speech and data between trains and the railway control centre; and Shuttle internal radio (SIR), for communication between shuttle crew and to passengers over car radios. Power supply Power is delivered to the locomotives via an overhead line at , with a normal overhead clearance of . All tunnel services run on electricity, shared equally from English and French sources. There are two substations fed at 400 kV at each terminal, but in an emergency, the tunnel's lighting (about 20,000 light fittings) and the plant can be powered solely from either England or France. The traditional railway south of London uses a 750 V DC third rail to deliver electricity, but since the opening of High Speed 1 there is no longer any need for tunnel trains to use it. High Speed 1, the tunnel and the LGV Nord all have power provided via overhead catenary at 25 kV 50 Hz. The railways on "classic" lines in Belgium are also electrified by overhead wires, but at 3000 V DC. Signalling A cab signalling system gives information directly to train drivers on a display. There is a train protection system that stops the train if the speed exceeds that indicated on the in-cab display. TVM430, as used on LGV Nord and High Speed 1, is used in the tunnel. The TVM signalling is interconnected with the signalling on the high-speed lines on either side, allowing trains to enter and exit the tunnel system without stopping. The maximum speed is . Signalling in the tunnel is coordinated from two control centres: the main control centre at the Folkestone terminal and a backup at the Calais terminal, which is staffed at all times and can take over all operations in the event of a breakdown or emergency. Track system Conventional ballasted tunnel track was ruled out owing to the difficulty of maintenance and lack of stability and precision. The Sonneville International Corporation's track system was chosen for its reliability and cost-effectiveness, based on good performance in Swiss tunnels and worldwide. The type of track used is known as Low Vibration Track (LVT). Like a ballasted track, the LVT is of the free-floating type, held in place by gravity and friction. 
Reinforced concrete blocks of 100 kg support the rails every 60 cm and are held by 12 mm thick closed-cell polymer foam pads placed at the bottom of rubber boots. The latter separates the blocks' mass movements from the lean encasement concrete. The ballastless track provides extra overhead clearance necessary for the passage of larger trains. The corrugated rubber walls of the boots add a degree of isolation of horizontal wheel-rail vibrations and are insulators of the track signal circuit in the humid tunnel environment. UIC60 (60 kg/m) rails of 900A grade rest on rail pads, which fit the RN/Sonneville bolted dual leaf-springs. The rails, LVT-blocks and their boots with pads were assembled outside the tunnel, in a fully automated process developed by the LVT inventor, Roger Sonneville. About 334,000 Sonneville blocks were made on the Sangatte site. Maintenance activity has been less than projected. Initially, the rails were ground on a yearly basis or after approximately 100 MGT of traffic. Ride quality continues to be noticeably smooth, with low noise. Maintenance is facilitated by the existence of two tunnel junctions or crossover facilities, allowing for two-way operation in each of the six tunnel segments, and providing safe access for maintenance of one isolated tunnel segment at a time. The two crossovers are the largest artificial undersea caverns ever built, at 150 m long, 10 m high and 18 m wide. The English crossover is from Shakespeare Cliff, and the French crossover is from Sangatte. Ventilation, cooling and drainage The ventilation system maintains the air pressure in the service tunnel higher than in the rail tunnels, so that in the event of a fire, smoke does not enter the service tunnel from the rail tunnels. Two cooling water pipes in each rail tunnel circulate chilled water to remove heat generated by the rail traffic. Pumping stations remove water in the tunnels from rain, seepage, and so on. During the design stage of the tunnel, engineers found that its aerodynamic properties and the heat generated by high-speed trains as they passed through it would raise the temperature inside the tunnel to . As well as making the trains "unbearably warm" for passengers, this also presented a risk of equipment failure and track distortion. To cool the tunnel to below , engineers installed of diameter cooling pipes carrying of water. The network—Europe's largest cooling system—was supplied by eight York Titan chillers running on R22, a hydrochlorofluorocarbon (HCFC) refrigerant gas. Due to R22's ozone depletion potential (ODP) and high global warming potential (GWP), its use is being phased out in developed countries. Since 1 January 2015, it has been illegal in Europe to use HCFCs to service air-conditioning equipment; broken equipment that used HCFCs must be replaced with equipment that does not use them. In 2016, Trane was selected to provide replacement chillers for the tunnel's cooling network. The York chillers were decommissioned and four "next generation" Trane Series E CenTraVac large-capacity (2600 kW to 14,000 kW) chillers were installed—two in Sangatte, France, and two at Shakespeare Cliff, UK. The energy-efficient chillers, using Honeywell's non-flammable, ultra-low GWP R1233zd(E) refrigerant, maintain temperatures at , and in their first year of operation generated savings of 4.8 GWh—approximately 33%, equating to €500,000 ($585,000)—for tunnel operator Getlink. 
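Taken at face value, the rounded first-year figures quoted above for the replacement chillers imply the following back-of-envelope quantities; these are inferences from the article's own numbers, not published Getlink data:

\[
\frac{4.8\ \text{GWh}}{0.33} \approx 14.5\ \text{GWh per year (implied cooling consumption before the upgrade)}, \qquad
\frac{\text{€}500{,}000}{4.8\ \text{GWh}} \approx \text{€}0.10\ \text{per kWh (implied cost of the electricity saved)}.
\]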
Rolling stock Rolling stock used previously Operators LeShuttle Getlink operates the LeShuttle, a vehicle shuttle service, through the tunnel. Car shuttle sets have two separate halves: single and double deck. Each half has two loading/unloading wagons and 12 carrier wagons. Eurotunnel's original order was for nine car shuttle sets. Heavy goods vehicle (HGV) shuttle sets also have two halves, with each half containing one loading wagon, one unloading wagon and 14 carrier wagons. There is a club car behind the leading locomotive, where drivers must stay during the journey. Eurotunnel originally ordered six HGV shuttle sets. Initially 38 LeShuttle locomotives were commissioned, with one at each end of a shuttle train. Freight locomotives Forty-six Class 92 locomotives for hauling freight trains and overnight passenger trains (the Nightstar project, which was abandoned) were commissioned, running on both overhead AC and third-rail DC power. However, RFF does not let these run on French railways, so there are plans to certify Alstom Prima II locomotives for use in the tunnel. International passenger Thirty-one Eurostar trains, based on the French TGV, built to UK loading gauge with many modifications for safety within the tunnel, were commissioned, with ownership split between British Rail, French national railways (SNCF) and Belgian national railways (NMBS/SNCB). British Rail ordered seven more for services north of London. Around 2010, Eurostar ordered ten trains from Siemens based on its Velaro product. The Class 374 entered service in 2016 and has been operating through the Channel Tunnel ever since alongside the current Class 373. Germany (DB) has since around 2005 tried to get permission to run train services to London. At the end of 2009, extensive fire-proofing requirements were dropped and DB received permission to run German Intercity-Express (ICE) test trains through the tunnel. In June 2013 DB was granted access to the tunnel, but these plans were ultimately dropped. In October 2021, Renfe, the Spanish state railway company, expressed interest in operating a cross-Channel route between Paris and London using some of their existing trains with the intention of competing with Eurostar. No details have been revealed as to which trains would be used. Service locomotives Diesel locomotives for rescue and shunting work are Eurotunnel Class 0001 and Eurotunnel Class 0031. Operation The following chart presents the estimated number of passengers and tonnes of freight, respectively, annually transported through the Channel Tunnel since 1994 (M = million). Usage and services Transport services offered by the tunnel are as follows: Eurotunnel Le Shuttle roll-on roll-off shuttle service for road vehicles and their drivers and passengers, Eurostar passenger trains, through freight trains. Both the freight and passenger traffic forecasts that led to the construction of the tunnel were overestimated; in particular, Eurotunnel's commissioned forecasts were over-predictions. Although the captured share of Channel crossings was forecast correctly, high competition (especially from budget airlines which expanded rapidly in the 1990s and 2000s) and reduced tariffs led to low revenue. Overall cross-Channel traffic was overestimated. With the EU's liberalisation of international rail services, the tunnel and High Speed 1 have been open to competition since 2010. There have been a number of operators interested in running trains through the tunnel and along High Speed 1 to London. 
In June 2013, after several years of effort, DB obtained a licence to operate Frankfurt–London trains, which were not expected to run before 2016 because of delivery delays of the custom-made trains. Plans for the service to Frankfurt seem to have been shelved in 2018. Passenger traffic volumes Cross-tunnel passenger traffic volumes peaked at 18.4 million in 1998, dropped to 14.9 million in 2003 and have increased substantially since then. At the time of the decision about building the tunnel, 15.9 million passengers were predicted for Eurostar trains in the opening year. In 1995, the first full year, actual numbers were a little over 2.9 million, growing to 7.1 million in 2000, then dropping to 6.3 million in 2003. Eurostar was initially limited by the lack of a high-speed connection on the British side. After the completion of High Speed 1 in two stages in 2003 and 2007, traffic increased. In 2008, Eurostar carried 9,113,371 passengers, a 10% increase over the previous year, despite traffic limitations due to the 2008 Channel Tunnel fire. Eurostar passenger numbers continued to increase. Freight traffic volumes Freight volumes have been erratic, with a major decrease during 1997 due to a closure caused by a fire in a freight shuttle. Freight crossings increased over the period, indicating the substitutability of the tunnel by sea crossings. The tunnel has achieved a market share close to or above Eurotunnel's 1980s predictions, but Eurotunnel's 1990 and 1994 predictions were overestimates. For through freight trains, the first year prediction was 7.2 million tonnes; the actual 1995 figure was 1.3M tonnes. Through freight volumes peaked in 1998 at 3.1M tonnes. This fell back to 1.21M tonnes in 2007, increasing slightly to 1.24M tonnes in 2008. Together with that carried on freight shuttles, freight growth has occurred since opening, with 6.4M tonnes carried in 1995, 18.4M tonnes recorded in 2003 and 19.6M tonnes in 2007. Numbers fell back in the wake of the 2008 fire. Eurotunnel's freight subsidiary is Europorte 2. In September 2006 EWS, the UK's largest rail freight operator, announced that owing to the cessation of UK-French government subsidies of £52 million per annum to cover the tunnel "Minimum User Charge" (a subsidy of around £13,000 per train, at a traffic level of 4,000 trains per annum), freight trains would stop running after 30 November. Economic performance Shares in Eurotunnel were issued at £3.50 per share on 9 December 1987. By mid-1989 the price had risen to £11.00. Delays and cost overruns led to the price dropping; during demonstration runs in October 1994, it reached an all-time low. Eurotunnel suspended payment on its debt in September 1995 to avoid bankruptcy. In December 1997 the British and French governments extended Eurotunnel's operating concession by 34 years, to 2086. The financial restructuring of Eurotunnel occurred in mid-1998, reducing debt and financial charges. Despite the restructuring, The Economist reported in 1998 that, to break even, Eurotunnel would have to increase fares, traffic and market share. A cost-benefit analysis of the tunnel indicated that there were few impacts on the wider economy and few developments associated with the project, and that the British economy would have been better off if it had not been constructed. Under the terms of the Concession, Eurotunnel was obliged to investigate a cross-Channel road tunnel. 
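The per-train subsidy quoted above follows directly from the two figures given in the same sentence; this is only an arithmetic check on the article's own numbers, not an additional source:

\[
\frac{\pounds 52{,}000{,}000\ \text{per annum}}{4{,}000\ \text{trains per annum}} = \pounds 13{,}000\ \text{per train}.
\]

On the same basis, the 1.3 million tonnes of through rail freight actually carried in 1995 was roughly \(1.3 / 7.2 \approx 18\%\) of the 7.2 million tonne first-year forecast.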
In December 1999 road and rail tunnel proposals were presented to the British and French governments, but it was stressed that there was not enough demand for a second tunnel. A three-way treaty between the United Kingdom, France and Belgium governs border controls, with the establishment of control zones wherein the officers of the other nation may exercise limited customs and law enforcement powers. For most purposes, these are at either end of the tunnel, with the French border controls on the UK side of the tunnel and vice versa. For some city-to-city trains, the train is a control zone. A binational emergency plan coordinates UK and French emergency activities. In 1999 Eurostar posted its first net profit, having made a loss of £925m in 1995. In 2005 Eurotunnel was described as being in a serious situation. In 2013, operating profits rose 4 percent from 2012, to £54 million. Security There is a need for full passport controls, as the tunnel acts as a border between the Schengen Area and the Common Travel Area. There are juxtaposed controls, meaning that passports are checked before boarding by officials of the departing country, and on arrival by officials of the destination country. These control points are only at the main Eurostar stations: French officials operate at London St Pancras, Ebbsfleet International and Ashford International, while British officials operate at Calais-Fréthun, Lille-Europe, Marne-la-Vallée–Chessy, Brussels-South and Paris-Gare du Nord. There are security checks before boarding as well. For the shuttle road-vehicle trains, there are juxtaposed passport controls before boarding the trains. For Eurostar trains originating south of Paris, there is no passport and security check before departure, and those trains must stop in Lille at least 30 minutes to allow all passengers to be checked. No checks are performed on board. There have been plans for services from Amsterdam, Frankfurt and Cologne to London, but a major reason to cancel them was the need for a stop in Lille. Direct service from London to Amsterdam started on 4 April 2018; following the building of check-in terminals at Amsterdam and Rotterdam and the intergovernmental agreement, a direct service from the two Dutch cities to London started on 30 April 2020. Terminals The terminals' sites are at Cheriton (near Folkestone in the United Kingdom) and Coquelles (near Calais in France). The UK site uses the M20 motorway for access. The terminals are organised with the frontier controls juxtaposed with the entry to the system to allow travellers to go onto the motorway at the destination country immediately after leaving the shuttle. To achieve design output at the French terminal, the shuttles accept cars on double-deck wagons; for flexibility, ramps were placed inside the shuttles to provide access to the top decks. At Folkestone there are of the main-line track, 45 turnouts and eight platforms. At Calais there are of track and 44 turnouts. At the terminals, the shuttle trains traverse a figure eight to reduce uneven wear on the wheels. There is a freight marshalling yard west of Cheriton at Dollands Moor Freight Yard. Regional impact A 1996 report from the European Commission predicted that Kent and Nord-Pas de Calais had to face increased traffic volumes due to the general growth of cross-Channel traffic and traffic attracted by the tunnel. In Kent, a high-speed rail line to London would transfer traffic from road to rail. 
Kent's regional development would benefit from the tunnel, but being so close to London restricts the benefits. Gains are in the traditional industries and are largely dependent on the development of Ashford International railway station, without which Kent would be totally dependent on London's expansion. Nord-Pas-de-Calais enjoys a strong internal symbolic effect from the Tunnel, which results in significant gains in manufacturing. The removal of a bottleneck by means like the tunnel does not necessarily induce economic gains in all adjacent regions. The image of a region being connected to European high-speed transport, and an active political response, are more important for regional economic development. Some small-medium enterprises located in the immediate vicinity of the terminal have used the opportunity to re-brand the profile of their business with positive effects, such as The New Inn at Etchinghill, which was able to exploit commercially its unique selling point as 'the closest pub to the Channel Tunnel'. Tunnel-induced regional development is small compared to general economic growth. The South East of England is likely to benefit developmentally and socially from faster and cheaper transport to continental Europe, but the benefits are unlikely to be equally distributed throughout the region. The overall environmental impact is almost certainly negative. Since the opening of the tunnel, small positive impacts on the wider economy have been felt, but it is difficult to identify major economic successes directly attributed to the tunnel. Eurotunnel does operate profitably, offering an alternative transport mode unaffected by poor weather. High construction costs delayed profitability, however, and early in the tunnel's operation the companies involved in its construction and operation relied on government aid to deal with the accumulated debt. Illegal immigration Illegal immigrants and would-be asylum seekers have used the tunnel to attempt to enter Britain. By 1997, the problem had attracted international press attention, and in 1999 the French Red Cross opened the first migrant centre at Sangatte, using a warehouse once used for tunnel construction; by 2002, it housed up to 1,500 people at a time, most of them trying to get to the UK. In 2001, most came from Afghanistan, Iraq, and Iran, but African countries were also represented. Eurotunnel, the company that operates the crossing, said that more than 37,000 migrants were intercepted between January and July 2015. Approximately 3,000 migrants, mainly from Ethiopia, Eritrea, Sudan and Afghanistan, were living in the temporary camps erected in Calais at the time of an official count in July 2015. An estimated 3,000 to 5,000 migrants were waiting in Calais for a chance to get to England. Britain and France operate a system of juxtaposed controls on immigration and customs, where investigations happen before travel. France is part of the Schengen immigration zone, which removes border checks in normal times between most EU member states; Britain and the Republic of Ireland form their own separate Common Travel Area immigration zone. Most illegal immigrants and would-be asylum seekers who got into Britain found some way to ride a freight train, onto which trucks are loaded. In a few instances, spread across several attempts, migrants stowed away in a liquid chocolate tanker and managed to survive. 
Although the facilities were fenced, airtight security was deemed impossible; migrants would even jump from bridges onto moving trains. In several incidents people were injured during the crossing; others tampered with railway equipment, causing delays and requiring repairs. Eurotunnel said it was losing £5m per month because of the problem. In 2001 and 2002, several riots broke out at Sangatte, and groups of migrants (up to 550 in a December 2001 incident) stormed the fences and attempted to enter en masse. Other migrants seeking permanent UK settlement use the Eurostar passenger train. They may purport to be visitors (either to be issued with a required visit visa, or by denying and falsifying their true intentions to obtain a maximum six-months-in-a-year at-port stamp); purport to be someone else whose documents they hold; or use forged or counterfeit passports. Such breaches result in refusal of permission to enter the UK, effected by Border Force once the person's identity is fully established, assuming they persist in their application to enter the UK. Diplomatic efforts Local authorities in both France and the UK called for the closure of the Sangatte migrant camp, and Eurotunnel twice sought an injunction against the centre. As of 2006, the United Kingdom blamed France for allowing Sangatte to open, while France blamed both the UK, for its then lax asylum rules, and the EU, for not having a uniform immigration policy. The problem's cause célèbre nature even led to journalists being detained as they followed migrants onto railway property. In 2002, the European Commission told France that it was in breach of European Union rules on the free transfer of goods because of the delays and closures as a result of its poor security. The French government built a double fence, at a cost of £5 million, reducing the numbers of migrants detected each week reaching Britain on goods trains from 250 to almost none. Other measures included CCTV cameras and increased police patrols. At the end of 2002, the Sangatte centre was closed after the UK agreed to absorb some migrants. On 23 and 30 June 2015, striking workers associated with MyFerryLink damaged sections of track by burning car tyres, cancelling all trains and creating a backlog of vehicles. Hundreds seeking to reach Britain attempted to stow away inside and underneath transport trucks destined for the UK. Extra security measures included a £2 million upgrade of detection technology, £1 million extra for dog searches, and £12 million (over three years) towards a joint fund with France for security surrounding the Port of Calais. Illegal attempts to cross and deaths In 2002, a dozen migrants died in crossing attempts. In the two months from June to July 2015, ten migrants died near the French tunnel terminal, during a period when 1,500 attempts to evade security precautions were being made each day. On 6 July 2015, a migrant died while attempting to climb onto a freight train in an effort to reach Britain from the French side of the Channel. The previous month an Eritrean man was killed under similar circumstances. During the night of 28 July 2015, one person, aged 25–30, was found dead after a night in which 1,500–2,000 migrants had attempted to enter the Eurotunnel terminal. The body of a Sudanese migrant was subsequently found inside the tunnel. On 4 August 2015, another Sudanese migrant walked nearly the entire length of one of the tunnels. He was arrested close to the British side, after having walked about through the tunnel. 
Mechanical incidents Fires There have been three fires in the tunnel, all on the heavy goods vehicle (HGV) shuttles, that were significant enough to close the tunnel, as well as other minor incidents. On 9 December 1994, during an "invitation only" testing phase, a fire broke out in a Ford Escort car while its owner was loading it onto the upper deck of a tourist shuttle. The fire started at about 10:00, with the shuttle train stationary in the Folkestone terminal, and was put out about 40 minutes later with no passenger injuries. On 18 November 1996, a fire broke out on an HGV shuttle wagon in the tunnel, but nobody was seriously hurt. The exact cause is unknown, although it was neither a Eurotunnel equipment nor rolling stock problem; it may have been due to arson of a heavy goods vehicle. It is estimated that the heart of the fire reached , with the tunnel severely damaged over , with some affected to some extent. Full operation recommenced six months after the fire. On 21 August 2006, the tunnel was closed for several hours when a truck on an HGV shuttle train caught fire. On 11 September 2008, a fire occurred in the Channel Tunnel at 13:57 GMT. The incident started on an HGV shuttle train travelling towards France. The event occurred from the French entrance to the tunnel. No one was killed but several people were taken to hospitals suffering from smoke inhalation, and minor cuts and bruises. The tunnel was closed to all traffic, with the undamaged South Tunnel reopening for limited services two days later. Full service resumed on 9 February 2009 after repairs costing €60 million. On 29 November 2012, the tunnel was closed for several hours after a truck on an HGV shuttle caught fire. On 17 January 2015, both tunnels were closed following a lorry fire that filled the midsection of Running Tunnel North with smoke. Eurostar cancelled all services. The shuttle train had been heading from Folkestone to Coquelles and stopped adjacent to cross-passage CP 4418 just before 12:30 UTC. 38 passengers and four members of Eurotunnel staff were evacuated into the service tunnel and transported to France in special STTS road vehicles. They were taken to the Eurotunnel Fire/Emergency Management Centre close to the French portal. Train failures On the night of 19/20 February 1996, about 1,000 passengers became trapped in the Channel Tunnel when Eurostar trains from London broke down owing to failures of electronic circuits caused by snow and ice being deposited and then melting on the circuit boards. On 3 August 2007, an electrical failure lasting six hours caused passengers to be trapped in the tunnel on a shuttle. On the evening of 18 December 2009, during the December 2009 European snowfall, five London-bound Eurostar trains failed inside the tunnel, trapping 2,000 passengers for approximately 16 hours, during the coldest temperatures in eight years. A Eurotunnel spokesperson explained that snow had evaded the train's winterisation shields, and the transition from cold air outside to the tunnel's warm atmosphere had melted the snow, resulting in electrical failures. One train was turned back before reaching the tunnel; two trains were hauled out of the tunnel by Eurotunnel Class 0001 diesel locomotives. The blocking of the tunnel led to the implementation of Operation Stack, the transformation of the M20 motorway into a linear car park. The occasion was the first time that a Eurostar train was evacuated inside the tunnel; the failing of four at once was described as "unprecedented". 
The Channel Tunnel reopened the following morning. Nirj Deva, Member of the European Parliament for South East England, had called for Eurostar chief executive Richard Brown to resign over the incidents. An independent report by Christopher Garnett (former CEO of Great North Eastern Railway) and Claude Gressier (a French transport expert) on the 18/19 December 2009 incidents was issued in February 2010, making 21 recommendations. On 7 January 2010, a Brussels–London Eurostar broke down in the tunnel. The train had 236 passengers on board and was towed to Ashford; other trains that had not yet reached the tunnel were turned back. Safety The Channel Tunnel Safety Authority is responsible for some aspects of safety regulation in the tunnel; it reports to the Intergovernmental Commission (IGC). The service tunnel is used for access to technical equipment in cross-passages and equipment rooms, to provide fresh-air ventilation and for emergency evacuation. The Service Tunnel Transport System (STTS) allows fast access to all areas of the tunnel. The service vehicles are rubber-tired with a buried wire guidance system. The 24 STTS vehicles are used mainly for maintenance but also for firefighting and emergencies. "Pods" with different purposes, up to a payload of , are inserted into the side of the vehicles. The vehicles cannot turn around within the tunnel and are driven from either end. The maximum speed is when the steering is locked. A fleet of 15 Light Service Tunnel Vehicles (LADOGS) was introduced to supplement the STTSs. The LADOGS has a short wheelbase with a turning circle, allowing two-point turns within the service tunnel. Steering cannot be locked like the STTS vehicles, and maximum speed is . Pods up to can be loaded onto the rear of the vehicles. Drivers in the tunnel sit on the right, and the vehicles drive on the left. Owing to the risk of French personnel driving on their native right side of the road, sensors in the vehicles alert the driver if the vehicle strays to the right side. The three tunnels contain of air that needs to be conditioned for comfort and safety. Air is supplied from ventilation buildings at Shakespeare Cliff and Sangatte, with each building capable of providing 100% standby capacity. Supplementary ventilation also exists on either side of the tunnel. In the event of a fire, ventilation is used to keep smoke out of the service tunnel and move smoke in one direction in the main tunnel to give passengers clean air. The tunnel was the first main-line railway tunnel to have special cooling equipment. Heat is generated from traction equipment and drag. The design limit was set at , using a mechanical cooling system with refrigeration plants on both sides that run chilled water circulating in pipes within the tunnel. Trains travelling at high speed create piston effect pressure changes that can affect passenger comfort, ventilation systems, tunnel doors, fans and the structure of the trains, and which drag on the trains. Piston relief ducts of diameter were chosen to solve the problem, with 4 ducts per kilometre to give close to optimum results. However, this design led to extreme lateral forces on the trains, so a reduction in train speed was required and restrictors were installed in the ducts. 
The safety issue of a possible fire on a passenger-vehicle shuttle garnered much attention, with Eurotunnel noting that fire was the risk attracting the most attention in a 1994 safety case for three reasons: the opposition of ferry companies to passengers being allowed to remain with their cars; Home Office statistics indicating that car fires had doubled in ten years; and the long length of the tunnel. Eurotunnel commissioned the UK Fire Research Station—now part of the Building Research Establishment—to give reports of vehicle fires, and liaised with Kent Fire Brigade to gather vehicle fire statistics over one year. Fire tests took place at the French Mines Research Establishment with a mock wagon used to investigate how cars burned. The wagon door systems are designed to withstand fire inside the wagon for 30 minutes, longer than the transit time of 27 minutes. Wagon air conditioning units help to purge dangerous fumes from inside the wagon before travel. Each wagon has a fire detection and extinguishing system, with sensing of ions or ultraviolet radiation, smoke and gases that can trigger halon gas to quench a fire. Since the HGV wagons are not covered, fire sensors are located on the loading wagon and in the tunnel. A water main in the service tunnel provides water to the main tunnels at intervals. The ventilation system can control smoke movement. Special arrival sidings accept a train that is on fire, as the train is not allowed to stop whilst on fire in the tunnel unless continuing its journey would lead to a worse outcome. Eurotunnel has banned a wide range of hazardous goods from travelling in the tunnel. Two STTS (Service Tunnel Transportation System) vehicles with firefighting pods are on duty at all times, with a maximum delay of 10 minutes before they reach a burning train. Unusual traffic Trains In 1999, the Kosovo Train for Life passed through the tunnel en route to Pristina, in Kosovo. Other In 2009, former F1 racing champion John Surtees drove a Ginetta G50 EV electric sports car prototype from England to France, using the service tunnel, as part of a charity event. He was required to keep to the speed limit. To celebrate the 2014 Tour de France's transfer from its opening stages in Britain to France in July of that year, Chris Froome of Team Sky rode a bicycle through the service tunnel, becoming the first solo rider to do so. The crossing took under an hour, reaching speeds of —faster than most cross-channel ferries. Mobile network coverage Since 2012, French operators Bouygues Telecom, Orange and SFR have covered Running Tunnel South, the tunnel bore normally used for travel from France to Britain. In January 2014, UK operators EE and Vodafone signed ten-year contracts with Eurotunnel for Running Tunnel North. The agreements will enable both operators' subscribers to use 2G and 3G services. Both EE and Vodafone planned to offer LTE services on the route; EE said it expected to cover the route with LTE connectivity by the summer of 2014. EE and Vodafone will offer Channel Tunnel network coverage for travellers from the UK to France. Eurotunnel said it also held talks with Three UK but has yet to reach an agreement with the operator. In May 2014, Eurotunnel announced that they had installed equipment from Alcatel-Lucent to cover Running Tunnel North and simultaneously to provide mobile service (GSM 900/1800 MHz and UMTS 2100 MHz) by EE, O2 and Vodafone. The service of EE and Vodafone commenced on the same date as the announcement. 
O2 service was expected to be available soon afterwards. In November 2014, EE announced that it had switched on LTE in September 2014. O2 turned on 2G, 3G and 4G services in November 2014, whilst Vodafone's 4G was due to go live later. Other (non-transport) services The tunnel also houses the 1,000 MW ElecLink interconnector to transfer power between the British and French electricity networks. During the night of 31 August/1 September 2021, the 51 km-long 320 kV DC cable was switched into service for the first time. See also France–UK border; British Rail Class 373; Irish Sea tunnel; Japan–Korea Undersea Tunnel; List of transport megaprojects; Marmaray Tunnel; Samphire Hoe; Strait of Gibraltar crossing. External links UK website at eurotunnel.com; French website at eurotunnel.com/fr; Tribute website at chunnel.com; Channel Tunnel on OpenStreetMap wiki.
5703
https://en.wikipedia.org/wiki/Cyberpunk
Cyberpunk
Cyberpunk is a subgenre of science fiction in a dystopian futuristic setting that tends to focus on a "combination of lowlife and high tech", featuring futuristic technological and scientific achievements, such as artificial intelligence and cyberware, juxtaposed with societal collapse, dystopia or decay. Much of cyberpunk is rooted in the New Wave science fiction movement of the 1960s and 1970s, when writers like Philip K. Dick, Michael Moorcock, Roger Zelazny, John Brunner, J. G. Ballard, Philip José Farmer and Harlan Ellison examined the impact of drug culture, technology, and the sexual revolution while avoiding the utopian tendencies of earlier science fiction. Comics exploring cyberpunk themes began appearing as early as Judge Dredd, first published in 1977. Released in 1984, William Gibson's influential debut novel Neuromancer helped solidify cyberpunk as a genre, drawing influence from punk subculture and early hacker culture. Frank Miller's Ronin is an example of a cyberpunk graphic novel. Other influential cyberpunk writers included Bruce Sterling and Rudy Rucker. The Japanese cyberpunk subgenre began in 1982 with the debut of Katsuhiro Otomo's manga series Akira, with its 1988 anime film adaptation (also directed by Otomo) later popularizing the subgenre. Early films in the genre include Ridley Scott's 1982 film Blade Runner, one of several of Philip K. Dick's works that have been adapted into films (in this case, Do Androids Dream of Electric Sheep?). The "first cyberpunk television series" was Max Headroom (1987), set in a futuristic dystopia ruled by an oligarchy of television networks, in which computer hacking played a central role in many story lines. The films Johnny Mnemonic (1995) and New Rose Hotel (1998), both based upon short stories by William Gibson, flopped commercially and critically, while The Matrix trilogy (1999–2003) and Judge Dredd (1995) were some of the most successful cyberpunk films. Newer cyberpunk media includes Blade Runner 2049 (2017), a sequel to the original 1982 film; Dredd (2012), which was not a sequel to the original movie; Upgrade (2018); Alita: Battle Angel (2019), based on the 1990s Japanese manga Battle Angel Alita; the 2018 Netflix TV series Altered Carbon, based on Richard K. Morgan's 2002 novel of the same name; the 2020 remake of the 1997 role-playing video game Final Fantasy VII; and the video game Cyberpunk 2077 (2020), based on R. Talsorian Games's 1988 tabletop role-playing game Cyberpunk.

Background
Lawrence Person has attempted to define the content and ethos of the cyberpunk literary movement, stating: Cyberpunk plots often center on conflict among artificial intelligences, hackers, and megacorporations, and tend to be set in a near-future Earth, rather than in the far-future settings or galactic vistas found in novels such as Isaac Asimov's Foundation or Frank Herbert's Dune. The settings are usually post-industrial dystopias but tend to feature extraordinary cultural ferment and the use of technology in ways never anticipated by its original inventors ("the street finds its own uses for things"). Much of the genre's atmosphere echoes film noir, and written works in the genre often use techniques from detective fiction. Some sources hold that cyberpunk has shifted from a literary movement to a mode of science fiction, owing to its limited number of writers and its transition into a more generalized cultural formation.
History and origins
The origins of cyberpunk are rooted in the New Wave science fiction movement of the 1960s and 1970s, when New Worlds, under the editorship of Michael Moorcock, began inviting and encouraging stories that examined new writing styles, techniques, and archetypes. Reacting to conventional storytelling, New Wave authors attempted to present a world where society coped with a constant upheaval of new technology and culture, generally with dystopian outcomes. Writers like Roger Zelazny, J. G. Ballard, Philip José Farmer, Samuel R. Delany, and Harlan Ellison often examined the impact of drug culture, technology, and the sexual revolution with an avant-garde style influenced by the Beat Generation (especially William S. Burroughs's science fiction writing), Dadaism, and their own ideas. Ballard attacked the idea that stories should follow the "archetypes" popular since the time of Ancient Greece, and the assumption that these would somehow be the same ones that would speak to modern readers, as Joseph Campbell argued in The Hero with a Thousand Faces. Instead, Ballard wanted to write a new myth for the modern reader, a style with "more psycho-literary ideas, more meta-biological and meta-chemical concepts, private time systems, synthetic psychologies and space-times, more of the sombre half-worlds one glimpses in the paintings of schizophrenics." This had a profound influence on a new generation of writers, some of whom would come to call their movement "cyberpunk". One, Bruce Sterling, later said: Ballard, Zelazny, and the rest of the New Wave were seen by the subsequent generation as delivering more "realism" to science fiction, and they attempted to build on this. Samuel R. Delany's 1968 novel Nova is also considered one of the major forerunners of the cyberpunk movement. It prefigures, for instance, cyberpunk's staple trope of humans interfacing with computers via implants. Writer William Gibson claimed to be greatly influenced by Delany, and his novel Neuromancer includes allusions to Nova. Similarly influential, and generally cited as proto-cyberpunk, is the Philip K. Dick novel Do Androids Dream of Electric Sheep?, first published in 1968. Presenting precisely the kind of dystopian, post-economic-apocalyptic future that Gibson and Sterling would later deliver, it examines the ethical and moral problems of cybernetic artificial intelligence in a more "realist" way than the Isaac Asimov Robot series that laid its philosophical foundation. Dick's protégé and friend K. W. Jeter wrote a novel called Dr. Adder in 1972 that, Dick lamented, might have been more influential in the field had it been able to find a publisher at that time. It was not published until 1984, after which Jeter made it the first book in a trilogy, followed by The Glass Hammer (1985) and Death Arms (1987). Jeter wrote other standalone cyberpunk novels before going on to write three authorized sequels to Do Androids Dream of Electric Sheep?, named Blade Runner 2: The Edge of Human (1995), Blade Runner 3: Replicant Night (1996), and Blade Runner 4: Eye and Talon. Do Androids Dream of Electric Sheep? was made into the seminal movie Blade Runner, released in 1982. This was one year after William Gibson's story "Johnny Mnemonic" helped move proto-cyberpunk concepts into the mainstream. That story, which also became a film in 1995, involves another dystopian future, in which human couriers deliver computer data stored cybernetically in their own minds.
Etymology The term "cyberpunk" first appeared as the title of a short story by Bruce Bethke, written in 1980 and published in Amazing Stories in 1983. The name was picked up by Gardner Dozois, editor of Isaac Asimov's Science Fiction Magazine, and popularized in his editorials. Bethke says he made two lists of words, one for technology, one for troublemakers, and experimented with combining them variously into compound words, consciously attempting to coin a term that encompassed both punk attitudes and high technology. He described the idea thus: Afterward, Dozois began using this term in his own writing, most notably in a Washington Post article where he said "About the closest thing here to a self-willed esthetic 'school' would be the purveyors of bizarre hard-edged, high-tech stuff, who have on occasion been referred to as 'cyberpunks'—Sterling, Gibson, Shiner, Cadigan, Bear." About that time in 1984, William Gibson's novel Neuromancer was published, delivering a glimpse of a future encompassed by what became an archetype of cyberpunk "virtual reality", with the human mind being fed light-based worldscapes through a computer interface. Some, perhaps ironically including Bethke himself, argued at the time that the writers whose style Gibson's books epitomized should be called "Neuromantics", a pun on the name of the novel plus "New Romantics", a term used for a New Wave pop music movement that had just occurred in Britain, but this term did not catch on. Bethke later paraphrased Michael Swanwick's argument for the term: "the movement writers should properly be termed neuromantics, since so much of what they were doing was clearly imitating Neuromancer". Sterling was another writer who played a central role, often consciously, in the cyberpunk genre, variously seen as either keeping it on track, or distorting its natural path into a stagnant formula. In 1986, he edited a volume of cyberpunk stories called Mirrorshades: The Cyberpunk Anthology, an attempt to establish what cyberpunk was, from Sterling's perspective. In the subsequent decade, the motifs of Gibson's Neuromancer became formulaic, climaxing in the satirical extremes of Neal Stephenson's Snow Crash in 1992. Bookending the cyberpunk era, Bethke himself published a novel in 1995 called Headcrash, like Snow Crash a satirical attack on the genre's excesses. Fittingly, it won an honor named after cyberpunk's spiritual founder, the Philip K. Dick Award. It satirized the genre in this way: The impact of cyberpunk, though, has been long-lasting. Elements of both the setting and storytelling have become normal in science fiction in general, and a slew of sub-genres now have -punk tacked onto their names, most obviously steampunk, but also a host of other cyberpunk derivatives. Style and ethos Primary figures in the cyberpunk movement include William Gibson, Neal Stephenson, Bruce Sterling, Bruce Bethke, Pat Cadigan, Rudy Rucker, and John Shirley. Philip K. Dick (author of Do Androids Dream of Electric Sheep?, from which the film Blade Runner was adapted) is also seen by some as prefiguring the movement. Blade Runner can be seen as a quintessential example of the cyberpunk style and theme. Video games, board games, and tabletop role-playing games, such as Cyberpunk 2020 and Shadowrun, often feature storylines that are heavily influenced by cyberpunk writing and movies. Beginning in the early 1990s, some trends in fashion and music were also labeled as cyberpunk. 
Cyberpunk is also featured prominently in anime and manga (Japanese cyberpunk), with Akira, Ghost in the Shell and Cowboy Bebop being among the most notable.

Setting
Cyberpunk writers tend to use elements from crime fiction—particularly hardboiled detective fiction and film noir—and postmodernist prose to describe an often nihilistic underground side of an electronic society. The genre's vision of a troubled future is often called the antithesis of the generally utopian visions of the future popular in the 1940s and 1950s. Gibson defined cyberpunk's antipathy towards utopian science fiction in his 1981 short story "The Gernsback Continuum," which pokes fun at and, to a certain extent, condemns utopian science fiction. In some cyberpunk writing, much of the action takes place online, in cyberspace, blurring the line between actual and virtual reality. A typical trope in such work is a direct connection between the human brain and computer systems. Cyberpunk settings are dystopias with corruption, computers, and computer networks. Giant, multinational corporations have for the most part replaced governments as centers of political, economic, and even military power. The economic and technological state of Japan is a regular theme in the cyberpunk literature of the 1980s. Of Japan's influence on the genre, William Gibson said, "Modern Japan simply was cyberpunk." Cyberpunk is often set in urbanized, artificial landscapes, and "city lights, receding" was used by Gibson as one of the genre's first metaphors for cyberspace and virtual reality. The cityscape of Hong Kong has had a major influence on the urban backgrounds, ambiance and settings of many cyberpunk works such as Blade Runner and Shadowrun. Ridley Scott envisioned the landscape of cyberpunk Los Angeles in Blade Runner to be "Hong Kong on a very bad day". The streetscapes of the Ghost in the Shell film were based on Hong Kong. Its director, Mamoru Oshii, felt that Hong Kong's strange and chaotic streets, where "old and new exist in confusing relationships", fit the theme of the film well. Hong Kong's Kowloon Walled City, with its disorganized hyper-urbanization and breakdown of traditional urban planning, is particularly notable as an inspiration for cyberpunk landscapes. Portrayals of East Asia and Asians in Western cyberpunk have been criticized as Orientalist and promoting racist tropes playing on American and European fears of East Asian dominance; this has been referred to as "techno-Orientalism".

Society and government
Cyberpunk can be intended to disquiet readers and call them to action. It often expresses a sense of rebellion, suggesting that one could describe it as a type of cultural revolution in science fiction. In the words of author and critic David Brin:

...a closer look [at cyberpunk authors] reveals that they nearly always portray future societies in which governments have become wimpy and pathetic ...Popular science fiction tales by Gibson, Williams, Cadigan and others do depict Orwellian accumulations of power in the next century, but nearly always clutched in the secretive hands of a wealthy or corporate elite.

Cyberpunk stories have also been seen as fictional forecasts of the evolution of the Internet. The earliest descriptions of a global communications network came long before the World Wide Web entered popular awareness, though not before traditional science-fiction writers such as Arthur C. Clarke and some social commentators such as James Burke began predicting that such networks would eventually form.
Some observers argue that cyberpunk tends to marginalize sectors of society such as women and people of colour. It is claimed, for instance, that cyberpunk depicts fantasies that ultimately empower masculinity, using a fragmentary and decentered aesthetic that culminates in a masculine genre populated by male outlaws. Critics also note the absence of any reference to Africa or black characters in the quintessential cyberpunk film Blade Runner, while other films reinforce stereotypes.

Media

Literature
Minnesota writer Bruce Bethke coined the term for his short story "Cyberpunk", published in a 1983 issue of Amazing Science Fiction Stories. The term was quickly appropriated as a label to be applied to the works of William Gibson, Bruce Sterling, Pat Cadigan and others. Of these, Sterling became the movement's chief ideologue, thanks to his fanzine Cheap Truth. John Shirley wrote articles on Sterling and Rucker's significance. John Brunner's 1975 novel The Shockwave Rider is considered by many to be the first cyberpunk novel, featuring many of the tropes commonly associated with the genre some five years before the term was popularized by Dozois. William Gibson, with his novel Neuromancer (1984), is arguably the most famous writer connected with the term cyberpunk. He emphasized style, a fascination with surfaces, and atmosphere over traditional science-fiction tropes. Regarded as ground-breaking and sometimes as "the archetypal cyberpunk work", Neuromancer was awarded the Hugo, Nebula, and Philip K. Dick Awards. Count Zero (1986) and Mona Lisa Overdrive (1988) followed Gibson's popular debut novel. According to the Jargon File, "Gibson's near-total ignorance of computers and the present-day hacker culture enabled him to speculate about the role of computers and hackers in the future in ways hackers have since found both irritatingly naïve and tremendously stimulating." Early on, cyberpunk was hailed as a radical departure from science-fiction standards and a new manifestation of vitality. Shortly thereafter, however, some critics arose to challenge its status as a revolutionary movement. These critics said that the science fiction New Wave of the 1960s was much more innovative as far as narrative techniques and styles were concerned. Furthermore, while Neuromancer's narrator may have had an unusual "voice" for science fiction, much older examples can be found: Gibson's narrative voice, for example, resembles that of an updated Raymond Chandler, as in his novel The Big Sleep (1939). Others noted that almost all traits claimed to be uniquely cyberpunk could in fact be found in older writers' works—often citing J. G. Ballard, Philip K. Dick, Harlan Ellison, Stanisław Lem, Samuel R. Delany, and even William S. Burroughs. For example, Philip K. Dick's works contain recurring themes of social decay, artificial intelligence, paranoia, and blurred lines between objective and subjective realities. The influential cyberpunk movie Blade Runner (1982) is based on his book, Do Androids Dream of Electric Sheep?. Humans linked to machines are found in Pohl and Kornbluth's Wolfbane (1959) and Roger Zelazny's Creatures of Light and Darkness (1968). In 1994, scholar Brian Stonehill suggested that Thomas Pynchon's 1973 novel Gravity's Rainbow "not only curses but precurses what we now glibly dub cyberspace." Other important predecessors include Alfred Bester's two most celebrated novels, The Demolished Man and The Stars My Destination, as well as Vernor Vinge's novella True Names.
Reception and impact Science-fiction writer David Brin describes cyberpunk as "the finest free promotion campaign ever waged on behalf of science fiction". It may not have attracted the "real punks", but it did ensnare many new readers, and it provided the sort of movement that postmodern literary critics found alluring. Cyberpunk made science fiction more attractive to academics, argues Brin; in addition, it made science fiction more profitable to Hollywood and to the visual arts generally. Although the "self-important rhetoric and whines of persecution" on the part of cyberpunk fans were irritating at worst and humorous at best, Brin declares that the "rebels did shake things up. We owe them a debt." Fredric Jameson considers cyberpunk the "supreme literary expression if not of postmodernism, then of late capitalism itself". Cyberpunk further inspired many later writers to incorporate cyberpunk ideas into their own works, such as George Alec Effinger's When Gravity Fails. Wired magazine, created by Louis Rossetto and Jane Metcalfe, mixes new technology, art, literature, and current topics in order to interest today's cyberpunk fans, which Paula Yoo claims "proves that hardcore hackers, multimedia junkies, cyberpunks and cellular freaks are poised to take over the world". Film and television The film Blade Runner (1982) is set in 2019 in a dystopian future in which manufactured beings called replicants are slaves used on space colonies and are legal prey on Earth to various bounty hunters who "retire" (kill) them. Although Blade Runner was largely unsuccessful in its first theatrical release, it found a viewership in the home video market and became a cult film. Since the movie omits the religious and mythical elements of Dick's original novel (e.g. empathy boxes and Wilbur Mercer), it falls more strictly within the cyberpunk genre than the novel does. William Gibson would later reveal that upon first viewing the film, he was surprised at how the look of this film matched his vision for Neuromancer, a book he was then working on. The film's tone has since been the staple of many cyberpunk movies, such as The Matrix trilogy (1999–2003), which uses a wide variety of cyberpunk elements. A sequel to Blade Runner was released in 2017. The TV series Max Headroom (1987) is an iconic cyberpunk work, taking place in a futuristic dystopia ruled by an oligarchy of television networks. Computer hacking played a central role in many of the story lines. Max Headroom has been called "the first cyberpunk television series". The number of films in the genre has grown steadily since Blade Runner. Several of Philip K. Dick's works have been adapted to the silver screen. The films Johnny Mnemonic (1995) and New Rose Hotel (1998), both based on short stories by William Gibson, flopped commercially and critically. Other cyberpunk films include RoboCop (1987), Total Recall (1990), Hardware (1990), The Lawnmower Man (1992), 12 Monkeys (1995), Hackers (1995), and Strange Days (1995). Some cyberpunk films have been described as tech-noir, a hybrid genre combining neo-noir and science fiction or cyberpunk. Anime and manga The Japanese cyberpunk subgenre began in 1982 with the debut of Katsuhiro Otomo's manga series Akira, with its 1988 anime film adaptation, which Otomo directed, later popularizing the subgenre. Akira inspired a wave of Japanese cyberpunk works, including manga and anime series such as Ghost in the Shell, Battle Angel Alita, and Cowboy Bebop. 
Other early Japanese cyberpunk works include the 1982 film Burst City, the 1985 original video animation Megazone 23, and the 1989 film Tetsuo: The Iron Man. In contrast to Western cyberpunk, which has roots in New Wave science fiction literature, Japanese cyberpunk has roots in underground music culture, specifically the punk subculture that arose from the Japanese punk music scene of the 1970s. The filmmaker Sogo Ishii introduced this subculture to Japanese cinema with the punk film Panic High School (1978) and the punk biker film Crazy Thunder Road (1980), both portraying the rebellion and anarchy associated with punk, and the latter featuring a punk biker gang aesthetic. Ishii's punk films paved the way for Otomo's seminal cyberpunk work Akira. Cyberpunk themes are widely visible in anime and manga. In Japan, where cosplay is popular and not only teenagers display such fashion styles, cyberpunk has been accepted and its influence is widespread. William Gibson's Neuromancer, whose influence dominated the early cyberpunk movement, was also set in Chiba, one of Japan's largest industrial areas, although at the time he wrote the novel Gibson did not know where Chiba was and had no idea how well it fit his vision in some ways. Exposure to cyberpunk ideas and fiction in the 1980s allowed the genre to seep into Japanese culture. Cyberpunk anime and manga draw upon a futuristic vision which has elements in common with Western science fiction and therefore have received wide international acceptance outside Japan. "The conceptualization involved in cyberpunk is more of forging ahead, looking at the new global culture. It is a culture that does not exist right now, so the Japanese concept of a cyberpunk future, seems just as valid as a Western one, especially as Western cyberpunk often incorporates many Japanese elements." William Gibson is now a frequent visitor to Japan, and he came to see that many of his visions of Japan have become a reality:

Modern Japan simply was cyberpunk. The Japanese themselves knew it and delighted in it. I remember my first glimpse of Shibuya, when one of the young Tokyo journalists who had taken me there, his face drenched with the light of a thousand media-suns—all that towering, animated crawl of commercial information—said, "You see? You see? It is Blade Runner town." And it was. It so evidently was.

Influence
Akira (1982 manga) and its 1988 anime film adaptation have influenced numerous works in animation, comics, film, music, television and video games. Akira has been cited as a major influence on Hollywood films such as The Matrix, Chronicle, Looper, Midnight Special, and Inception, as well as cyberpunk-influenced video games such as Hideo Kojima's Snatcher and Metal Gear Solid, Valve's Half-Life series and Dontnod Entertainment's Remember Me. Akira has also influenced the work of musicians such as Kanye West, who paid homage to Akira in the "Stronger" music video, and Lupe Fiasco, whose album Tetsuo & Youth is named after Tetsuo Shima. The popular bike from the film, Kaneda's motorbike, appears in Steven Spielberg's film Ready Player One and CD Projekt's video game Cyberpunk 2077. Ghost in the Shell (1995) influenced a number of prominent filmmakers, most notably the Wachowskis in The Matrix (1999) and its sequels.
The Matrix series took several concepts from the film, including the Matrix digital rain, which was inspired by the opening credits of Ghost in the Shell and by a sushi magazine that the wife of Simon Whiteley, the senior designer of the animation, had in the kitchen at the time, as well as the way characters access the Matrix through holes in the back of their necks. Other parallels have been drawn to James Cameron's Avatar, Steven Spielberg's A.I. Artificial Intelligence, and Jonathan Mostow's Surrogates. James Cameron cited Ghost in the Shell as a source of inspiration and an influence on Avatar. The original video animation Megazone 23 (1985) has a number of similarities to The Matrix. Battle Angel Alita (1990) has had a notable influence on filmmaker James Cameron, who had been planning to adapt it into a film since 2000. It was an influence on his TV series Dark Angel, and he is the producer of the 2019 film adaptation Alita: Battle Angel.

Comics
In 1975, artist Moebius collaborated with writer Dan O'Bannon on a story called The Long Tomorrow, published in the French magazine Métal Hurlant. One of the first works featuring elements now seen as exemplifying cyberpunk, it combined influences from film noir and hardboiled crime fiction with a distant sci-fi environment. Author William Gibson stated that Moebius' artwork for the series, along with other visuals from Métal Hurlant, strongly influenced his 1984 novel Neuromancer. The series had a far-reaching impact on the cyberpunk genre, being cited as an influence on Ridley Scott's Alien (1979) and Blade Runner. Moebius later expanded upon The Long Tomorrow's aesthetic with The Incal, a graphic novel collaboration with Alejandro Jodorowsky published from 1980 to 1988. The story centers on the exploits of a detective named John Difool in various science fiction settings, and while not confined to the tropes of cyberpunk, it features many elements of the genre. Concurrently with many other foundational cyberpunk works, DC Comics published Frank Miller's six-issue miniseries Rōnin from 1983 to 1984. The series, incorporating aspects of samurai culture, martial arts films and manga, is set in a dystopian near-future New York. It explores the link between an ancient Japanese warrior and the apocalyptic, crumbling cityscape he finds himself in. The comic also bears several similarities to Akira, with highly powerful telepaths playing central roles, as well as sharing many key visuals. Rōnin would go on to influence many later works, including Samurai Jack and the Teenage Mutant Ninja Turtles, as well as video games such as Cyberpunk 2077. Two years later, Miller himself would incorporate several toned-down elements of Rōnin into his acclaimed 1986 miniseries The Dark Knight Returns, in which a retired Bruce Wayne once again takes up the mantle of Batman in a Gotham that is becoming increasingly dystopian. Paul Pope's Batman: Year 100, published in 2006, also exhibits several traits typical of cyberpunk fiction, such as a rebel protagonist opposing a future authoritarian state, and a distinct retrofuturist aesthetic that makes callbacks to both The Dark Knight Returns and Batman's original appearances in the 1940s.

Games
There are many cyberpunk video games. Popular series include Final Fantasy VII and its spin-offs and remake, the Megami Tensei series, Kojima's Snatcher and Metal Gear series, the Deus Ex series, the Syndicate series, and System Shock and its sequel.
Other games, like Blade Runner, Ghost in the Shell, and the Matrix series, are based upon genre movies, or role-playing games (for instance the various Shadowrun games). Several RPGs called Cyberpunk exist: Cyberpunk, Cyberpunk 2020, Cyberpunk v3.0 and Cyberpunk Red, written by Mike Pondsmith and published by R. Talsorian Games, and GURPS Cyberpunk, published by Steve Jackson Games as a module of the GURPS family of RPGs. Cyberpunk 2020 was designed with the settings of William Gibson's writings in mind, and to some extent with his approval, unlike the approach taken by FASA in producing the transgenre Shadowrun game and its various sequels, which mix cyberpunk with fantasy elements such as magic and races such as orcs and elves. Both are set in the near future, in a world where cybernetics are prominent. In addition, Iron Crown Enterprises released an RPG named Cyberspace, which was out of print for several years until it was recently re-released in online PDF form. On December 10, 2020, CD Projekt Red released Cyberpunk 2077, a cyberpunk open-world first-person shooter/role-playing video game (RPG) based on the tabletop RPG Cyberpunk 2020. In 1990, in a convergence of cyberpunk art and reality, the United States Secret Service raided Steve Jackson Games's headquarters and confiscated all their computers. Officials denied that the target had been the GURPS Cyberpunk sourcebook, but Jackson would later write that he and his colleagues "were never able to secure the return of the complete manuscript; [...] The Secret Service at first flatly refused to return anything – then agreed to let us copy files, but when we got to their office, restricted us to one set of out-of-date files – then agreed to make copies for us, but said "tomorrow" every day from March 4 to March 26. On March 26 we received a set of disks which purported to be our files, but the material was late, incomplete and well-nigh useless." Steve Jackson Games won a lawsuit against the Secret Service, aided by the new Electronic Frontier Foundation. This event has achieved a sort of notoriety, which has extended to the book itself as well. All published editions of GURPS Cyberpunk have a tagline on the front cover, which reads "The book that was seized by the U.S. Secret Service!" Inside, the book provides a summary of the raid and its aftermath. Cyberpunk has also inspired several tabletop, miniature and board games such as Necromunda by Games Workshop. Netrunner is a collectible card game introduced in 1996, based on the Cyberpunk 2020 role-playing game. Tokyo NOVA, debuting in 1993, is a cyberpunk role-playing game that uses playing cards instead of dice. Cyberpunk 2077 set a new record for the largest number of simultaneous players of a single-player game, with 1,003,262 players just after its December 10 launch, according to Steam Database. That topped the previous Steam record of 472,962 players, set by Fallout 4 in 2015.

Music
The origins of cyberpunk music lie in the synthesizer-heavy scores of cyberpunk films such as Escape from New York (1981) and Blade Runner (1982). Some musicians and acts have been classified as cyberpunk due to their aesthetic style and musical content. Often dealing with dystopian visions of the future or biomechanical themes, some fit more squarely in the category than others. Bands whose music has been classified as cyberpunk include Psydoll, Front Line Assembly, Clock DVA, Angelspit and Sigue Sigue Sputnik.
Some musicians not normally associated with cyberpunk have at times been inspired to create concept albums exploring such themes. Albums such as British musician and songwriter Gary Numan's Replicas, The Pleasure Principle and Telekon were heavily inspired by the works of Philip K. Dick. Kraftwerk's The Man-Machine and Computer World albums both explored the theme of humanity becoming dependent on technology. Nine Inch Nails' concept album Year Zero also fits into this category. Fear Factory's concept albums are heavily based upon future dystopias, cybernetics, the clash between man and machine, and virtual worlds. Billy Idol's Cyberpunk drew heavily from cyberpunk literature and the cyberdelic counterculture in its creation. 1. Outside, a concept album by David Bowie built around a cyberpunk narrative, was warmly received by critics upon its release in 1995. Many musicians have also taken inspiration from specific cyberpunk works or authors, including Sonic Youth, whose albums Sister and Daydream Nation take influence from the works of Philip K. Dick and William Gibson respectively. Madonna's 2001 Drowned World Tour opened with a cyberpunk section, where costumes, aesthetics and stage props were used to accentuate the dystopian nature of the theatrical concert. Lady Gaga used a cyberpunk persona and visual style for her sixth studio album Chromatica (2020). Vaporwave and synthwave are also influenced by cyberpunk. The former has been interpreted as a dystopian critique of capitalism in the vein of cyberpunk, while the latter is more surface-level, drawing only on cyberpunk's aesthetic as a nostalgic, retrofuturistic revival of aspects of the genre's origins.

Social impact

Art and architecture
Writers David Suzuki and Holly Dressel describe the cafes, brand-name stores and video arcades of the Sony Center in the Potsdamer Platz public square of Berlin, Germany, as "a vision of a cyberpunk, corporate urban future".

Society and counterculture
Several subcultures have been inspired by cyberpunk fiction. These include the cyberdelic counterculture of the late 1980s and early 1990s. Cyberdelic, whose adherents referred to themselves as "cyberpunks", attempted to blend the psychedelic art and drug movement with the technology of cyberculture. Early adherents included Timothy Leary, Mark Frauenfelder and R. U. Sirius. The movement largely faded following the dot-com bubble implosion of 2000. Cybergoth is a fashion and dance subculture which draws its inspiration from cyberpunk fiction, as well as rave and Gothic subcultures. In addition, a distinct cyberpunk fashion has emerged in recent years that rejects the raver and goth influences of cybergoth and draws inspiration from urban street fashion, "post-apocalypse" styles, functional clothing, high-tech sportswear, tactical uniforms and multifunctional clothing. This fashion goes by names like "tech wear", "goth ninja" or "tech ninja". The Kowloon Walled City in Hong Kong (demolished in 1994) is often referenced as the model cyberpunk/dystopian slum; its poor living conditions at the time, coupled with the city's political, physical, and economic isolation, have led many in academia to be fascinated by the ingenuity with which it arose.

Related genres
As a wider variety of writers began to work with cyberpunk concepts, new subgenres of science fiction emerged, some playing off the cyberpunk label, others exploring newer territory.
These focused on technology and its social effects in different ways. One prominent subgenre is "steampunk," which is set in an alternate history Victorian era that combines anachronistic technology with cyberpunk's bleak film noir world view. The term was originally coined around 1987 as a joke to describe some of the novels of Tim Powers, James P. Blaylock, and K.W. Jeter, but by the time Gibson and Sterling entered the subgenre with their collaborative novel The Difference Engine, the term was being used earnestly as well. Another subgenre is "biopunk", which emerged in the early 1990s as a derivative style building on biotechnology rather than information technology. In these stories, people are changed in some way not by mechanical means, but by genetic manipulation. Cyberpunk works have been described as well situated within postmodern literature.

Registered trademark status
In the United States, the term "Cyberpunk" is a registered trademark of R. Talsorian Games Inc. for its tabletop role-playing game. Within the European Union, the "Cyberpunk" trademark is owned by two parties: CD Projekt SA for "games and online gaming services" (particularly for its video game adaptation of the tabletop game) and Sony Music for use outside games.

See also
Corporate warfare
Cyborg
Digital dystopia
Postcyberpunk
Posthumanization
Steampunk
Solarpunk
Transhumanism
Type 1 civilization
Utopian and dystopian fiction

References

External links
Cyberpunk on The Encyclopedia of Science Fiction
The Cyberpunk Directory—Comprehensive directory of cyberpunk resources
Cyberpunk Media Archive Archive of cyberpunk media
The Cyberpunk Project—A project dedicated toward maintaining a cyberpunk database, library, and other information
cyberpunks.com A website dedicated to cyberpunk themed news and media

Dystopian fiction Subcultures Postmodernism Postmodern art Science fiction culture 1960s neologisms
5704
https://en.wikipedia.org/wiki/Comic%20strip
Comic strip
A comic strip is a sequence of cartoons, arranged in interrelated panels to display brief humor or form a narrative, often serialized, with text in balloons and captions. Traditionally, throughout the 20th and into the 21st century, these have been published in newspapers and magazines, with daily horizontal strips printed in black-and-white in newspapers, while Sunday papers offered longer sequences in special color comics sections. With the advent of the internet, online comic strips began to appear as webcomics. Most strips are written and drawn by a comics artist, known as a cartoonist. As the word "comic" implies, strips are frequently humorous. Examples of these gag-a-day strips are Blondie, Bringing Up Father, Marmaduke, and Pearls Before Swine. In the late 1920s, comic strips expanded from their mirthful origins to feature adventure stories, as seen in Popeye, Captain Easy, Buck Rogers, Tarzan, and Terry and the Pirates. In the 1940s, soap-opera-continuity strips such as Judge Parker and Mary Worth gained popularity. Because "comic" strips are not always funny, cartoonist Will Eisner has suggested that sequential art would be a better genre-neutral name. For most of the 20th century, at least 200 different comic strips and cartoon panels appeared in American newspapers every day, amounting to some 73,000 strip appearances a year. Comic strips have appeared inside American magazines such as Liberty and Boys' Life, but also on the front covers, such as the Flossy Frills series on The American Weekly Sunday newspaper supplement. In the UK and the rest of Europe, comic strips are also serialized in comic book magazines, with a strip's story sometimes continuing over three pages.

History
Storytelling using a sequence of pictures has existed throughout history. One medieval European example in textile form is the Bayeux Tapestry. Printed examples emerged in 19th-century Germany and in 18th-century England, where some of the first satirical or humorous sequential narrative drawings were produced. William Hogarth's 18th-century English cartoons include both narrative sequences, such as A Rake's Progress, and single panels. The Biblia pauperum ("Paupers' Bible"), a tradition of picture Bibles beginning in the Late Middle Ages, sometimes depicted Biblical events with words spoken by the figures in the miniatures written on scrolls coming out of their mouths—which makes them to some extent ancestors of the modern comic strip. In China, with its traditions of block printing and of the incorporation of text with image, experiments with what became lianhuanhua date back to 1884.

Newspapers
The first newspaper comic strips appeared in North America in the late 19th century. The Yellow Kid is usually credited as one of the first newspaper strips. However, the art form combining words and pictures developed gradually and there are many examples which led up to the comic strip. The Glasgow Looking Glass was the first mass-produced publication to tell stories using illustrations and is regarded as the world's first comic strip. It satirised the political and social life of Scotland in the 1820s. It was conceived and illustrated by William Heath. Swiss author and caricature artist Rodolphe Töpffer (Geneva, 1799–1846) is considered the father of the modern comic strip. His illustrated stories, such as Histoire de M. Vieux Bois (1827; first published in the US in 1842 as The Adventures of Obadiah Oldbuck) and Histoire de Monsieur Jabot (1831), inspired subsequent generations of German and American comic artists.
In 1865, German painter, author, and caricaturist Wilhelm Busch created the strip Max and Moritz, about two trouble-making boys, which had a direct influence on the American comic strip. Max and Moritz was a series of seven severely moralistic tales in the vein of German children's stories such as Struwwelpeter ("Shockheaded Peter"). In the story's final act, the boys, after perpetrating some mischief, are tossed into a sack of grain, run through a mill, and consumed by a flock of geese (without anybody mourning their demise). Max and Moritz provided an inspiration for German immigrant Rudolph Dirks, who created the Katzenjammer Kids in 1897—a strip starring two German-American boys visually modelled on Max and Moritz. Familiar comic-strip iconography such as stars for pain, sawing logs for snoring, speech balloons, and thought balloons originated in Dirks' strip. Hugely popular, Katzenjammer Kids occasioned one of the first comic-strip copyright ownership suits in the history of the medium. When Dirks left William Randolph Hearst for the promise of a better salary under Joseph Pulitzer, it was an unusual move, since cartoonists regularly deserted Pulitzer for Hearst. In a highly unusual court decision, Hearst retained the rights to the name "Katzenjammer Kids", while creator Dirks retained the rights to the characters. Hearst promptly hired Harold Knerr to draw his own version of the strip. Dirks renamed his version Hans and Fritz (later, The Captain and the Kids). Thus, two versions distributed by rival syndicates graced the comics pages for decades. Dirks' version, eventually distributed by United Feature Syndicate, ran until 1979. In the United States, the great popularity of comics sprang from the newspaper war (1887 onwards) between Pulitzer and Hearst. The Little Bears (1893–96) was the first American comic strip with recurring characters, while the first color comic supplement was published by the Chicago Inter-Ocean sometime in the latter half of 1892, followed by the New York Journal's first color Sunday comic pages in 1897. On January 31, 1912, Hearst introduced the nation's first full daily comic page in his New York Evening Journal. The history of this newspaper rivalry and the rapid appearance of comic strips in most major American newspapers is discussed by Ian Gordon. Numerous events in newspaper comic strips have reverberated throughout society at large, though few such events have occurred in recent years, owing mainly to the declining use of continuous storylines in newspaper comic strips, which have been waning as an entertainment form since the 1970s. From 1903 to 1905, Gustave Verbeek wrote his comic series "The Upside-Downs of Old Man Muffaroo and Little Lady Lovekins". These comics were made in such a way that one could read the six-panel comic, flip the book and keep reading. He made 64 such comics in total.

The longest-running American comic strips are:
The Katzenjammer Kids (1897–2006; 109 years)
Gasoline Alley (1918–present)
Ripley's Believe It or Not! (1918–present)
Barney Google and Snuffy Smith (1919–present)
Thimble Theater/Popeye (1919–present)
Blondie (1930–present)
Dick Tracy (1931–present)
Alley Oop (1932–present)
Bringing Up Father (1913–2000; 87 years)
Little Orphan Annie (1924–2010; 86 years)

Most newspaper comic strips are syndicated; a syndicate hires people to write and draw a strip and then distributes it to many newspapers for a fee. Some newspaper strips begin or remain exclusive to one newspaper.
For example, the Pogo comic strip by Walt Kelly originally appeared only in the New York Star in 1948 and was not picked up for syndication until the following year. Newspaper comic strips come in two different types: daily strips and Sunday strips. In the United States, a daily strip appears in newspapers on weekdays, Monday through Saturday, as contrasted with a Sunday strip, which typically only appears on Sundays. Daily strips usually are printed in black and white, and Sunday strips are usually in color. However, a few newspapers have published daily strips in color, and some newspapers have published Sunday strips in black and white.

Popularity
Ally Sloper, created by writer and fledgling artist Charles H. Ross and first appearing in the British magazine Judy in 1867, is one of the earliest comic strip characters and is regarded as the first recurring character in comics. The highly popular character was spun off into his own comic, Ally Sloper's Half Holiday, in 1884. While in the early 20th century comic strips were a frequent target for detractors of "yellow journalism", by the 1920s the medium had become wildly popular. While radio and, later, television surpassed newspapers as a means of entertainment, most comic strip characters remained widely recognizable until the 1980s, and the "funny pages" were often arranged so that they appeared at the front of Sunday editions. In 1931, George Gallup's first poll found the comic section to be the most important part of the newspaper, with additional surveys indicating that the comic strips were the second most popular feature after the picture page. During the 1930s, many comic sections had between 12 and 16 pages, although in some cases, these had up to 24 pages. The popularity and accessibility of strips meant they were often clipped and saved; authors including John Updike and Ray Bradbury have written about their childhood collections of clipped strips. Often posted on bulletin boards, clipped strips had an ancillary form of distribution when they were faxed, photocopied or mailed. The Baltimore Sun's Linda White recalled, "I followed the adventures of Winnie Winkle, Moon Mullins and Dondi, and waited each fall to see how Lucy would manage to trick Charlie Brown into trying to kick that football. (After I left for college, my father would clip out that strip each year and send it to me just to make sure I didn't miss it.)"

Production and format
The two conventional formats for newspaper comics are strips and single gag panels. The strips are usually displayed horizontally, wider than they are tall. Single panels are square, circular or taller than they are wide. Strips usually, but not always, are broken up into several smaller panels with continuity from panel to panel. A horizontal strip can also be used for a single panel with a single gag, as seen occasionally in Mike Peters' Mother Goose and Grimm. Early daily strips were large, often running the entire width of the newspaper, and were sometimes three or more inches high. Initially, a newspaper page included only a single daily strip, usually either at the top or the bottom of the page. By the 1920s, many newspapers had a comics page on which many strips were collected together. During the 1930s, the original art for a daily strip could be drawn as large as 25 inches wide by six inches high. Over decades, the size of daily strips became smaller and smaller, until by 2000, four standard daily strips could fit in an area once occupied by a single daily strip.
As strips have become smaller, the number of panels has been reduced. Proof sheets were the means by which syndicates provided newspapers with black-and-white line art for the reproduction of strips (which they arranged to have colored in the case of Sunday strips). Michigan State University Comic Art Collection librarian Randy Scott describes these as "large sheets of paper on which newspaper comics have traditionally been distributed to subscribing newspapers. Typically each sheet will have either six daily strips of a given title or one Sunday strip. Thus, a week of Beetle Bailey would arrive at the Lansing State Journal in two sheets, printed much larger than the final version and ready to be cut apart and fitted into the local comics page." Comic strip historian Allan Holtz described how strips were provided as mats (the plastic or cardboard trays into which molten metal is poured to make plates) or even plates ready to be put directly on the printing press. He also notes that, with electronic means of distribution becoming more prevalent, printed sheets "are definitely on their way out." NEA Syndicate experimented briefly with a two-tier daily strip, Star Hawks, but after a few years, Star Hawks dropped down to a single tier. In Flanders, the two-tier strip is the standard publication style of most daily strips like Spike and Suzy and Nero. They appear Monday through Saturday; until 2003 there were no Sunday papers in Flanders. In recent decades, they have switched from black and white to color.

Cartoon panels
Single panels usually, but not always, are not broken up into smaller panels and lack continuity. The daily Peanuts is a strip, and the daily Dennis the Menace is a single panel. J. R. Williams' long-running Out Our Way continued as a daily panel even after it expanded into a Sunday strip, Out Our Way with the Willets. Jimmy Hatlo's They'll Do It Every Time was often displayed in a two-panel format with the first panel showing some deceptive, pretentious, unwitting or scheming human behavior and the second panel revealing the truth of the situation.

Sunday comics
Sunday newspapers traditionally included a special color section. Early Sunday strips (known colloquially as "the funny papers", shortened to "the funnies"), such as Thimble Theatre and Little Orphan Annie, filled an entire newspaper page, a format known to collectors as full page. Sunday pages during the 1930s and into the 1940s often carried a secondary strip by the same artist as the main strip. Whether it appeared above or below the main strip, the extra strip was known as the topper; examples include The Squirrel Cage, which ran along with Room and Board, both drawn by Gene Ahern. During the 1930s, the original art for a Sunday strip was usually drawn quite large. For example, in 1930, Russ Westover drew his Tillie the Toiler Sunday page at a size of 17" × 37". In 1937, the cartoonist Dudley Fisher launched the innovative Right Around Home, drawn as a huge single panel filling an entire Sunday page. Full-page strips were eventually replaced by strips half that size. Strips such as The Phantom and Terry and the Pirates began appearing in a format of two strips to a page in full-size newspapers, such as the New Orleans Times-Picayune, or with one strip on a tabloid page, as in the Chicago Sun-Times. When Sunday strips began to appear in more than one format, it became necessary for the cartoonist to allow for rearranged, cropped or dropped panels. During World War II, because of paper shortages, the size of Sunday strips began to shrink.
After the war, strips continued to get smaller and smaller because of increased paper and printing costs. The last full-page comic strip was the Prince Valiant strip for 11 April 1971. Comic strips have also been published in Sunday newspaper magazines. Russell Patterson and Carolyn Wells' New Adventures of Flossy Frills was a continuing strip series seen on Sunday magazine covers. Beginning January 26, 1941, it ran on the front covers of Hearst's American Weekly newspaper magazine supplement, continuing until March 30 of that year. Between 1939 and 1943, four different stories featuring Flossy appeared on American Weekly covers. Sunday comics sections employed offset color printing, with multiple print runs imitating a wide range of colors. Printing plates were created with four or more colors—traditionally the CMYK color model: cyan, magenta, yellow and "K" for black. A screen of tiny dots on each printing plate allowed an image to be printed as a halftone, which appears to the eye in different gradations. The semi-opaque property of ink allows halftone dots of different colors to create an optical effect of full-color imagery.

Underground comic strips
The decade of the 1960s saw the rise of underground newspapers, which often carried comic strips, such as Fritz the Cat and The Fabulous Furry Freak Brothers. Zippy the Pinhead initially appeared in underground publications in the 1970s before being syndicated. Bloom County and Doonesbury began as strips in college newspapers under different titles, and later moved to national syndication. Underground comic strips covered subjects that are usually taboo in newspaper strips, such as sex and drugs. Many underground artists, notably Vaughn Bode, Dan O'Neill, Gilbert Shelton, and Art Spiegelman, went on to draw comic strips for magazines such as Playboy, National Lampoon, and Pete Millar's CARtoons. Jay Lynch graduated from undergrounds to alternative weekly newspapers to Mad and children's books.

Webcomics
Webcomics, also known as online comics and internet comics, are comics that are available to read on the Internet. Many are exclusively published online, but the majority of traditional newspaper comic strips have some Internet presence. King Features Syndicate and other syndicates often provide archives of recent strips on their websites. Some, such as Scott Adams, creator of Dilbert, include an email address in each strip.

Conventions and genres
Most comic strip characters do not age throughout the strip's life, but in some strips, like Lynn Johnston's award-winning For Better or For Worse, the characters age as the years pass. The first strip to feature aging characters was Gasoline Alley. The history of comic strips also includes series that are not humorous, but tell an ongoing dramatic story. Examples include The Phantom, Prince Valiant, Dick Tracy, Mary Worth, Modesty Blaise, Little Orphan Annie, Flash Gordon, and Tarzan. Sometimes these are spin-offs from comic books, for example Superman, Batman, and The Amazing Spider-Man. A number of strips have featured animals as main characters. Some are non-verbal (Marmaduke, The Angriest Dog in the World), some have verbal thoughts but are not understood by humans (Garfield, Snoopy in Peanuts), and some can converse with humans (Bloom County, Calvin and Hobbes, Mutts, Citizen Dog, Buckles, Get Fuzzy, Pearls Before Swine, and Pooch Cafe). Other strips are centered entirely on animals, as in Pogo and Donald Duck.
Gary Larson's The Far Side was unusual, as there were no central characters. Instead, The Far Side used a wide variety of characters including humans, monsters, aliens, chickens, cows, worms, amoebas, and more. John McPherson's Close to Home also uses this theme, though the characters are mostly restricted to humans and real-life situations. Wiley Miller not only mixes human, animal, and fantasy characters, but also does several different comic strip continuities under one umbrella title, Non Sequitur. Bob Thaves's Frank & Ernest began in 1972 and paved the way for some of these strips, as its human characters were manifest in diverse forms—as animals, vegetables, and minerals.

Social and political influence
The comics have long held a distorted mirror to contemporary society, and almost from the beginning have been used for political or social commentary. This ranged from the conservative slant of Harold Gray's Little Orphan Annie to the unabashed liberalism of Garry Trudeau's Doonesbury. Al Capp's Li'l Abner espoused liberal opinions for most of its run, but by the late 1960s, it became a mouthpiece for Capp's repudiation of the counterculture. Pogo used animals to particularly devastating effect, caricaturing many prominent politicians of the day as animal denizens of Pogo's Okeefenokee Swamp. In a fearless move, Pogo's creator Walt Kelly took on Joseph McCarthy in the 1950s, caricaturing him as a bobcat named Simple J. Malarkey, a megalomaniac who was bent on taking over the characters' birdwatching club and rooting out all undesirables. Kelly also defended the medium against possible government regulation in the McCarthy era. At a time when comic books were coming under fire for supposed sexual, violent, and subversive content, Kelly feared the same would happen to comic strips. Going before the Congressional subcommittee, he proceeded to charm the members with his drawings and the force of his personality. The comic strip was safe for satire. During the early 20th century, comic strips were widely associated with publisher William Randolph Hearst, whose papers had the largest circulation of strips in the United States. Hearst was notorious for his practice of yellow journalism, and he was frowned on by readers of The New York Times and other newspapers which featured few or no comic strips. Hearst's critics often assumed that all the strips in his papers were fronts for his own political and social views. Hearst did occasionally work with or pitch ideas to cartoonists, most notably his continued support of George Herriman's Krazy Kat. An inspiration for Bill Watterson and other cartoonists, Krazy Kat gained a considerable following among intellectuals during the 1920s and 1930s. Some comic strips, such as Doonesbury and Mallard Fillmore, may be printed on the editorial or op-ed page rather than the comics page because of their regular political commentary. For example, the August 12, 1974 Doonesbury strip was awarded a 1975 Pulitzer Prize for its depiction of the Watergate scandal. Dilbert is sometimes found in the business section of a newspaper instead of the comics page because of the strip's commentary about office politics, and Tank McNamara often appears on the sports page because of its subject matter. Lynn Johnston's For Better or For Worse created an uproar when Lawrence, one of the strip's supporting characters, came out of the closet.

Publicity and recognition
The world's longest comic strip is on display at Trafalgar Square as part of the London Comedy Festival.
The London Cartoon Strip was created by 15 of Britain's best known cartoonists and depicts the history of London. The Reuben, named for cartoonist Rube Goldberg, is the most prestigious award for U.S. comic strip artists. Reuben awards are presented annually by the National Cartoonists Society (NCS). In 1995, the United States Postal Service issued a series of commemorative stamps, Comic Strip Classics, marking the comic-strip centennial. Today's strip artists, with the help of the NCS, enthusiastically promote the medium, which since the 1970s (and particularly the 1990s) has been considered to be in decline due to numerous factors such as changing tastes in humor and entertainment, the waning relevance of newspapers in general and the loss of most foreign markets outside English-speaking countries. One particularly humorous example of such promotional efforts is the Great Comic Strip Switcheroonie, held in 1997 on April Fool's Day, an event in which dozens of prominent artists took over each other's strips. Garfields Jim Davis, for example, switched with Blondies Stan Drake, while Scott Adams (Dilbert) traded strips with Bil Keane (The Family Circus). While the 1997 Switcheroonie was a one-time publicity stunt, an artist taking over a feature from its originator is an old tradition in newspaper cartooning (as it is in the comic book industry). In fact, the practice has made possible the longevity of the genre's more popular strips. Examples include Little Orphan Annie (drawn and plotted by Harold Gray from 1924 to 1944 and thereafter by a succession of artists including Leonard Starr and Andrew Pepoy), and Terry and the Pirates, started by Milton Caniff in 1934 and picked up by George Wunder. A business-driven variation has sometimes led to the same feature continuing under a different name. In one case, in the early 1940s, Don Flowers' Modest Maidens was so admired by William Randolph Hearst that he lured Flowers away from the Associated Press and to King Features Syndicate by doubling the cartoonist's salary, and renamed the feature Glamor Girls to avoid legal action by the AP. The latter continued to publish Modest Maidens, drawn by Jay Allen in Flowers' style. Issues in U.S. newspaper comic strips As newspapers have declined, the changes have affected comic strips. Jeff Reece, lifestyle editor of The Florida Times-Union, wrote, "Comics are sort of the 'third rail' of the newspaper." Size In the early decades of the 20th century, all Sunday comics received a full page, and daily strips were generally the width of the page. The competition between papers for having more cartoons than the rest from the mid-1920s, the growth of large-scale newspaper advertising during most of the thirties, paper rationing during World War II, the decline on news readership (as television newscasts began to be more common) and inflation (which has caused higher printing costs) beginning during the fifties and sixties led to Sunday strips being published on smaller and more diverse formats. As newspapers have reduced the page count of Sunday comic sections since the late 1990s (by the 2010s, most sections have only four pages, with the back page not always being destined for comics) has also led to further downsizes. Daily strips have suffered as well. Before the mid-1910s, there was not a "standard" size", with strips running the entire width of a page or having more than one tier. By the 1920s, strips often covered six of the eight columns occupied by a traditional broadsheet paper. 
During the 1940s, strips were reduced to four columns wide (with a "transition" width of five columns). As newspapers became narrower beginning in the 1970s, strips have gotten even smaller, often being just three columns wide, a similar width to the one most daily panels occupied before the 1940s. In an issue related to size limitations, Sunday comics are often bound to rigid formats that allow their panels to be rearranged in several different ways while remaining readable. Such formats usually include throwaway panels at the beginning, which some newspapers will omit for space. As a result, cartoonists have less incentive to put great efforts into these panels. Garfield and Mutts were known during the mid-to-late 80s and 1990s respectively for their throwaways on their Sunday strips, however both strips now run "generic" title panels. Some cartoonists have complained about this, with Walt Kelly, creator of Pogo, openly voicing his discontent about being forced to draw his Sunday strips in such rigid formats from the beginning. Kelly's heirs opted to end the strip in 1975 as a form of protest against the practice. Since then, Calvin and Hobbes creator Bill Watterson has written extensively on the issue, arguing that size reduction and dropped panels reduce both the potential and freedom of a cartoonist. After a lengthy battle with his syndicate, Watterson won the privilege of making half page-sized Sunday strips where he could arrange the panels any way he liked. Many newspaper publishers and a few cartoonists objected to this, and some papers continued to print Calvin and Hobbes at small sizes. Opus won that same privilege years after Calvin and Hobbes ended, while Wiley Miller circumvented further downsizes by making his Non Sequitur Sunday strip available only in a vertical arrangement. Most strips created since 1990, however, are drawn in the unbroken "third-page" format. Few newspapers still run half-page strips, as with Prince Valiant and Hägar the Horrible in the front page of the Reading Eagle Sunday comics section until the mid-2010s. Format With the success of The Gumps during the 1920s, it became commonplace for strips (comedy- and adventure-laden alike) to have lengthy stories spanning weeks or months. The "Monarch of Medioka" story in Floyd Gottfredson's Mickey Mouse comic strip ran from September 8, 1937 to May 2, 1938. Between the 1960s and the late 1980s, as television news relegated newspaper reading to an occasional basis rather than daily, syndicators were abandoning long stories and urging cartoonists to switch to simple daily gags, or week-long "storylines" (with six consecutive (mostly unrelated) strips following a same subject), with longer storylines being used mainly on adventure-based and dramatic strips. Strips begun during the mid-1980s or after (such as Get Fuzzy, Over the Hedge, Monty, and others) are known for their heavy use of storylines, lasting between one and three weeks in most cases. The writing style of comic strips changed as well after World War II. With an increase in the number of college-educated readers, there was a shift away from slapstick comedy and towards more cerebral humor. Slapstick and visual gags became more confined to Sunday strips, because as Garfield creator Jim Davis put it, "Children are more likely to read Sunday strips than dailies." Second author Many older strips are no longer drawn by the original cartoonist, who has either died or retired. Such strips are known as "zombie strips". 
A cartoonist, paid by the syndicate or sometimes a relative of the original cartoonist, continues writing the strip, a tradition that became commonplace in the early half of the 20th century. Hägar the Horrible and Frank and Ernest are both drawn by the sons of the creators. Some strips which are still in affiliation with the original creator are produced by small teams or entire companies, such as Jim Davis' Garfield, however there is some debate if these strips fall in this category. This act is commonly criticized by modern cartoonists including Watterson and Pearls Before Swine'''s Stephan Pastis. The issue was addressed in six consecutive Pearls strips in 2005. Charles Schulz, of Peanuts fame, requested that his strip not be continued by another cartoonist after his death. He also rejected the idea of hiring an inker or letterer, comparing it to a golfer hiring a man to make his putts. Schulz's family has honored his wishes and refused numerous proposals by syndicators to continue Peanuts with a new author. Assistants Since the consolidation of newspaper comics by the first quarter of the 20th century, most cartoonists have used a group of assistants (with usually one of them credited). However, quite a few cartoonists (e.g.: George Herriman and Charles Schulz, among others) have done their strips almost completely by themselves; often criticizing the use of assistants for the same reasons most have about their editors hiring anyone else to continue their work after their retirement. Rights to the strips Historically, syndicates owned the creators' work, enabling them to continue publishing the strip after the original creator retired, left the strip, or died. This practice led to the term "legacy strips", or more pejoratively "zombie strips". Most syndicates signed creators to 10- or even 20-year contracts. (There have been exceptions, however, such as Bud Fisher's Mutt and Jeff being an early—if not the earliest—case in which the creator retained ownership of his work.) Both these practices began to change with the 1970 debut of Universal Press Syndicate, as the company gave cartoonists a 50-percent ownership share of their work. Creators Syndicate, founded in 1987, granted artists full rights to the strips, something that Universal Press did in 1990, followed by King Features in 1995. By 1999 both Tribune Media Services and United Feature had begun granting ownership rights to creators (limited to new and/or hugely popular strips). Censorship Starting in the late 1940s, the national syndicates which distributed newspaper comic strips subjected them to very strict censorship. Li'l Abner was censored in September 1947 and was pulled from the Pittsburgh Press by Scripps-Howard. The controversy, as reported in Time, centered on Capp's portrayal of the U.S. Senate. Said Edward Leech of Scripps, "We don't think it is good editing or sound citizenship to picture the Senate as an assemblage of freaks and crooks... boobs and undesirables." As comics are easier for children to access compared to other types of media, they have a significantly more rigid censorship code than other media. Stephan Pastis has lamented that the "unwritten" censorship code is still "stuck somewhere in the 1950s". 
Generally, comics are not allowed to include such words as "damn", "sucks", "screwed", and "hell", although there have been exceptions such as the September 22, 2010 Mother Goose and Grimm in which an elderly man says, "This nursing home food sucks," and a pair of Pearls Before Swine comics from January 11, 2011 with a character named Ned using the word "crappy". Naked backsides and shooting guns cannot be shown, according to Dilbert cartoonist Scott Adams. Such comic strip taboos were detailed in Dave Breger's book But That's Unprintable (Bantam, 1955). Many issues such as sex, narcotics, and terrorism cannot or can very rarely be openly discussed in strips, although there are exceptions, usually for satire, as in Bloom County. This led some cartoonists to resort to double entendre or dialogue children do not understand, as in Greg Evans' Luann. Another example of wordplay to get around censorship is a July 27, 2016 Pearls Before Swine strip that features Pig talking to his sister, saying the phrase "I SIS!" repeatedly after correcting his sister's grammar. The strip then cuts to a scene of an NSA wiretap agent, following a scene of Pig being arrested by the FBI saying "Never correct your sister's grammar", implying that the CIA mistook the phrase "I SIS" for "ISIS". Younger cartoonists have claimed that commonplace words, images, and issues should be allowed in the comics, considering that the pressure to keep humor "clean" has been a chief factor in the declining popularity of comic strips since the 1990s (Aaron McGruder, creator of The Boondocks, decided to end his strip partly because of censorship issues, while the Popeye daily comic strip ended in 1994 after newspapers objected to a storyline they considered to be a satire on abortion). Some of the taboo words and topics are mentioned daily on television and other forms of visual media. Webcomics and comics distributed primarily to college newspapers are much freer in this respect. See also Biblia pauperum Billy Ireland Cartoon Library & Museum Comic book Comics studies History of American comics List of British comic strips List of cartoonists List of newspaper comic strips Military humor comic strips References Bibliography Further reading Blackbeard, Bill, ed. The Smithsonian Collection of Newspaper Comics. (1977) Smithsonian Institution Press/Harry N. Abrams Castelli, Alfredo. Here We Are Again. Gordon, Ian. Comic Strips and Consumer Culture. (1998) Smithsonian Institution Press Goulart, Ron. Encyclopedia of American Comics. Goulart, Ron. The Funnies. Goulart, Ron. The Adventurous Decade. Holtz, Allan. American Newspaper Comics: An Encyclopedic Reference Guide. (2012) University of Michigan Press. Horn, Maurice. The World Encyclopedia of Comics. (1976) Chelsea House, (1982) Avon. Horn, Maurice. The World Encyclopedia of Cartoons (Chelsea House, 1979) – 6 volumes Horn, Maurice. 100 Years of American Newspaper Comics (Gramercy Books, 1996) Koenigsberg, Moses. King News, Moses Koenigsberg Mott, Frank Luther. American Journalism. Robbins, Trina. A Century of Women Cartoonists. Robbins, Trina and Yronwode, Cat. Women and the Comics. Robinson, Jerry. The Comics. Sheridan, Martin. Comics And Their Creators. Stein, Daniel and Jan-Noel Thon, eds. From Comic Strips to Graphic Novels. Contributions to the Theory and History of Graphic Narrative. Berlin/Boston 2015. Tebbell. The Compact History of the American Newspaper. Strickler, Dave. Syndicated Comic Strips and Artists. Watson, Elmo Scott. 
A History of Newspaper Syndicates in the United States, Elmo Scott Watson. Waugh, Coulton. The Comics. External links National Cartoonists Society Comic Art Collection at the University of Missouri Billy Ireland Cartoon Library and Museum at Ohio State University Comics formats Comics terminology
5705
https://en.wikipedia.org/wiki/Continuum%20hypothesis
Continuum hypothesis
In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states that there is no set whose cardinality is strictly between that of the integers and the real numbers, or equivalently, that any infinite subset of the real numbers either has the cardinality of the integers or the cardinality of the real numbers. In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: 2^ℵ₀ = ℵ₁, or even shorter with beth numbers: ℶ₁ = ℵ₁. The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940. The name of the hypothesis comes from the term the continuum for the real numbers. History Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated. Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen. Cardinality of infinite sets Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets S and T to have the same cardinality means that it is possible to "pair off" elements of S with elements of T in such a fashion that every element of S is paired off with exactly one element of T and vice versa. Hence, the set {banana, apple, pear} has the same cardinality as {yellow, red, green}. With infinite sets such as the set of integers or rational numbers, the existence of a bijection between two sets becomes more difficult to demonstrate. The rational numbers seemingly form a counterexample to the continuum hypothesis: the integers form a proper subset of the rationals, which themselves form a proper subset of the reals, so intuitively, there are more rational numbers than integers and more real numbers than rational numbers. However, this intuitive analysis is flawed; it does not take proper account of the fact that all three sets are infinite. It turns out the rational numbers can actually be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size (cardinality) as the set of integers: they are both countable sets. Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of the set of real numbers (see Cantor's first uncountability proof and Cantor's diagonal argument). His proofs, however, give no indication of the extent to which the cardinality of the integers is less than that of the real numbers. Cantor proposed the continuum hypothesis as a possible solution to this question. The continuum hypothesis states that the set of real numbers has minimal possible cardinality which is greater than the cardinality of the set of integers. 
That is, every set, S, of real numbers can either be mapped one-to-one into the integers or the real numbers can be mapped one-to-one into S. As the real numbers are equinumerous with the powerset of the integers, i.e. |ℝ| = 2^ℵ₀, the continuum hypothesis can be restated as follows: there is no set S with ℵ₀ < |S| < 2^ℵ₀. Assuming the axiom of choice, there is a unique smallest cardinal number greater than ℵ₀, namely ℵ₁, and the continuum hypothesis is in turn equivalent to the equality 2^ℵ₀ = ℵ₁. Independence from ZFC The independence of the continuum hypothesis (CH) from Zermelo–Fraenkel set theory (ZF) follows from combined work of Kurt Gödel and Paul Cohen. Gödel showed that CH cannot be disproved from ZF, even if the axiom of choice (AC) is adopted (making ZFC). Gödel's proof shows that CH and AC both hold in the constructible universe L, an inner model of ZF set theory, assuming only the axioms of ZF. The existence of an inner model of ZF in which additional axioms hold shows that the additional axioms are consistent with ZF, provided ZF itself is consistent. The latter condition cannot be proved in ZF itself, due to Gödel's incompleteness theorems, but is widely believed to be true and can be proved in stronger set theories. Cohen showed that CH cannot be proven from the ZFC axioms, completing the overall independence proof. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with a model of ZF in which CH holds, and constructs another model which contains more sets than the original, in a way that CH does not hold in the new model. Cohen was awarded the Fields Medal in 1966 for his proof. The independence proof just described shows that CH is independent of ZFC. Further research has shown that CH is independent of all known large cardinal axioms in the context of ZFC. Moreover, it has been shown that the cardinality of the continuum can be any cardinal consistent with König's theorem. A result of Solovay, proved shortly after Cohen's result on the independence of the continuum hypothesis, shows that in any model of ZFC, if κ is a cardinal of uncountable cofinality, then there is a forcing extension in which 2^ℵ₀ = κ. However, per König's theorem, it is not consistent to assume 2^ℵ₀ is ℵ_ω or any cardinal with cofinality ℵ₀. The continuum hypothesis is closely related to many statements in analysis, point set topology and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well. The independence from ZFC means that proving or disproving CH within ZFC is impossible. However, Gödel and Cohen's negative results are not universally accepted as disposing of all interest in the continuum hypothesis. The continuum hypothesis remains an active topic of research; see Woodin and Peter Koellner for an overview of the current research status. The continuum hypothesis and the axiom of choice were among the first genuinely mathematical statements shown to be independent of ZF set theory. The existence of some statements independent of ZFC had, however, already been known more than two decades prior: for example, assuming good soundness properties and the consistency of ZFC, Gödel's incompleteness theorems, which were published in 1931, establish that there is a formal statement (one for each appropriate Gödel numbering scheme) expressing the consistency of ZFC, that is also independent of it. The latter independence result indeed holds for many theories. 
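The two relative-consistency facts described in this section can be written compactly. The following display is only a reading aid: it restates Gödel's and Cohen's results from the text above in the standard Con(·) notation (Con(T) meaning "the theory T is consistent"), which is introduced here for convenience rather than taken from the article.

```latex
% Independence of CH from ZFC, stated as two relative consistency results.
% Con(T) abbreviates "the theory T is consistent".
\begin{align*}
  \operatorname{Con}(\mathrm{ZFC}) &\implies \operatorname{Con}(\mathrm{ZFC} + \mathrm{CH})
    && \text{(G\"odel, 1940: CH and AC hold in the constructible universe } L\text{)} \\
  \operatorname{Con}(\mathrm{ZFC}) &\implies \operatorname{Con}(\mathrm{ZFC} + \neg\mathrm{CH})
    && \text{(Cohen, 1963: forcing produces a model in which CH fails)}
\end{align*}
```

Taken together, these mean that neither CH nor its negation can be derived from the ZFC axioms, provided ZFC itself is consistent.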
Arguments for and against the continuum hypothesis Gödel believed that CH is false, and that his proof that CH is consistent with ZFC only shows that the Zermelo–Fraenkel axioms do not adequately characterize the universe of sets. Gödel was a platonist and therefore had no problems with asserting the truth and falsehood of statements independent of their provability. Cohen, though a formalist, also tended towards rejecting CH. Historically, mathematicians who favored a "rich" and "large" universe of sets were against CH, while those favoring a "neat" and "controllable" universe favored CH. Parallel arguments were made for and against the axiom of constructibility, which implies CH. More recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with "more" sets of reals have a better chance of satisfying CH. Another viewpoint is that the conception of set is not specific enough to determine whether CH is true or false. This viewpoint was advanced as early as 1923 by Skolem, even before Gödel's first incompleteness theorem. Skolem argued on the basis of what is now known as Skolem's paradox, and it was later supported by the independence of CH from the axioms of ZFC since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false. At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance in the mathematical community. In 1986, Chris Freiling presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement derived by arguing from particular intuitions about probabilities. Freiling believes this axiom is "intuitively true" but others have disagreed. A difficult argument against CH developed by W. Hugh Woodin has attracted considerable attention since the year 2000. Foreman does not reject Woodin's argument outright but urges caution. Woodin proposed a new hypothesis that he labeled the (*) axiom, or "Star axiom". The Star axiom would imply that 2^ℵ₀ is ℵ₂, thus falsifying CH. The Star axiom was bolstered by an independent May 2021 proof showing the Star axiom can be derived from a variation of Martin's maximum. However, Woodin stated in the 2010s that he now instead believes CH to be true, based on his belief in his new "ultimate L" conjecture. Solomon Feferman has argued that CH is not a definite mathematical problem. He proposes a theory of "definiteness" using a semi-intuitionistic subsystem of ZF that accepts classical logic for bounded quantifiers but uses intuitionistic logic for unbounded ones, and suggests that a proposition φ is mathematically "definite" if the semi-intuitionistic theory can prove (φ ∨ ¬φ). He conjectures that CH is not definite according to this notion, and proposes that CH should, therefore, be considered not to have a truth value. Peter Koellner wrote a critical commentary on Feferman's article. 
Joel David Hamkins proposes a multiverse approach to set theory and argues that "the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and, as a result, it can no longer be settled in the manner formerly hoped for". In a related vein, Saharon Shelah wrote that he does "not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC". The generalized continuum hypothesis The generalized continuum hypothesis (GCH) states that if an infinite set's cardinality lies between that of an infinite set S and that of the power set of S, then it has the same cardinality as either S or the power set of S. That is, for any infinite cardinal λ there is no cardinal κ such that λ < κ < 2^λ. GCH is equivalent to: ℵ_{α+1} = 2^ℵ_α for every ordinal α (occasionally called Cantor's aleph hypothesis). The beth numbers provide an alternate notation for this condition: ℵ_α = ℶ_α for every ordinal α. The continuum hypothesis is the special case for the ordinal α = 1. GCH was first suggested by Philip Jourdain. For the early history of GCH, see Moore. Like CH, GCH is also independent of ZFC, but Sierpiński proved that ZF + GCH implies the axiom of choice (AC) (and therefore the negation of the axiom of determinacy, AD), so choice and GCH are not independent in ZF; there are no models of ZF in which GCH holds and AC fails. To prove this, Sierpiński showed GCH implies that every cardinality n is smaller than some aleph number, and thus can be ordered. This is done by showing that n is smaller than 2^(ℵ₀ + n), which is smaller than its own Hartogs number—this uses the equality 2^(ℵ₀ + n) = 2 · 2^(ℵ₀ + n); for the full proof, see Gillman. Kurt Gödel showed that GCH is a consequence of ZF + V=L (the axiom that every set is constructible relative to the ordinals), and is therefore consistent with ZFC. As GCH implies CH, Cohen's model in which CH fails is a model in which GCH fails, and thus GCH is not provable from ZFC. W. B. Easton used the method of forcing developed by Cohen to prove Easton's theorem, which shows it is consistent with ZFC for arbitrarily large cardinals ℵ_α to fail to satisfy 2^ℵ_α = ℵ_{α+1}. Much later, Foreman and Woodin proved that (assuming the consistency of very large cardinals) it is consistent that 2^κ > κ⁺ holds for every infinite cardinal κ. Later Woodin extended this by showing the consistency of 2^κ = κ⁺⁺ for every κ. Carmi Merimovich showed that, for each n ≥ 1, it is consistent with ZFC that for each κ, 2^κ is the nth successor of κ. On the other hand, László Patai proved that if γ is an ordinal and for each infinite cardinal κ, 2^κ is the γth successor of κ, then γ is finite. For any infinite sets A and B, if there is an injection from A to B then there is an injection from subsets of A to subsets of B. Thus for any infinite cardinals A and B, A < B → 2^A ≤ 2^B. If A and B are finite, the stronger inequality A < B → 2^A < 2^B holds. GCH implies that this strict, stronger inequality holds for infinite cardinals as well as finite cardinals. Implications of GCH for cardinal exponentiation Although the generalized continuum hypothesis refers directly only to cardinal exponentiation with 2 as the base, one can deduce from it the values of cardinal exponentiation in all cases. GCH implies that: ℵ_α^(ℵ_β) = ℵ_{β+1} when α ≤ β+1; ℵ_α^(ℵ_β) = ℵ_α when β+1 < α and ℵ_β < cf(ℵ_α), where cf is the cofinality operation; and ℵ_α^(ℵ_β) = ℵ_{α+1} when β+1 < α and ℵ_β ≥ cf(ℵ_α). 
The first equality (when α ≤ β+1) follows from: ℵ_α^(ℵ_β) ≤ ℵ_{β+1}^(ℵ_β) = (2^ℵ_β)^(ℵ_β) = 2^(ℵ_β · ℵ_β) = 2^ℵ_β = ℵ_{β+1}, while: ℵ_{β+1} = 2^ℵ_β ≤ ℵ_α^(ℵ_β); The third equality (when β+1 < α and ℵ_β ≥ cf(ℵ_α)) follows from: ℵ_α^(ℵ_β) ≥ ℵ_α^(cf(ℵ_α)) > ℵ_α, by König's theorem, while: ℵ_α^(ℵ_β) ≤ (2^ℵ_α)^(ℵ_β) = 2^(ℵ_α · ℵ_β) = 2^ℵ_α = ℵ_{α+1}. Where, for every γ, GCH is used for equating 2^ℵ_γ and ℵ_{γ+1}; the equality κ · κ = κ (for every infinite cardinal κ) is used as it is equivalent to the axiom of choice. See also Absolute Infinite Beth number Cardinality Ω-logic Wetzel's problem References Sources Further reading Gödel, K.: What is Cantor's Continuum Problem?, reprinted in Benacerraf and Putnam's collection Philosophy of Mathematics, 2nd ed., Cambridge University Press, 1983. An outline of Gödel's arguments against CH. Martin, D. (1976). "Hilbert's first problem: the continuum hypothesis," in Mathematical Developments Arising from Hilbert's Problems, Proceedings of Symposia in Pure Mathematics XXVIII, F. Browder, editor. American Mathematical Society, 1976, pp. 81–92. External links Forcing (mathematics) Independence results Basic concepts in infinite set theory Hilbert's problems Infinity Hypotheses Cardinal numbers
5706
https://en.wikipedia.org/wiki/%C3%87evik%20Bir
Çevik Bir
Çevik Bir (born 1939) is a retired Turkish army general. He was a member of the Turkish General Staff in the 1990s. He took a major part in several important international missions in the Middle East and North Africa. He was born in Buca, Izmir Province, in 1939 and is married with one child. He graduated from the Turkish Military Academy as an engineer officer in 1958, from the Army Staff College in 1970 and from the Armed Forces College in 1971. He graduated from NATO Defense College, Rome, Italy in 1973. From 1983 to 1985, he served at SHAPE, NATO's headquarters in Belgium. He was promoted to brigadier general and commanded an armed brigade and division in Turkey. From 1987 to 1991, he served as major general, and then was promoted to lieutenant general. After the dictator Siad Barre’s ousting, conflicts between the General Mohammed Farah Aidid party and other clans in Somalia had led to famine and lawlessness throughout the country. An estimated 300,000 people had died from starvation. A combined military force of United States and United Nations (under the name "UNOSOM") were deployed to Mogadishu, to monitor the ceasefire and deliver food and supplies to the starving people of Somali. Çevik Bir, who was then a lieutenant-general of Turkey, became the force commander of UNOSOM II in April 1993. Despite the retreat of US and UN forces after several deaths due to local hostilities mainly led by Aidid, the introduction of a powerful military force opened the transportation routes, enabling the provision of supplies and ended the famine quickly. He was succeeded as Force Commander by a Malaysian general in January 1994. He became a four-star general and served three years as vice chairman of the Turkish Armed Forces, then appointed commander of the Turkish First Army, in Istanbul. While he was vice chairman of the TAF, he signed the Turkish-Israeli Military Coordination agreement in 1996. Çevik Bir became the Turkish army's deputy chief of general staff shortly after the Somali operation and played a vital role in establishing a Turkish-Israeli entente. He retired from the army on 30 August 1999. He is a former member of the Association for the Study of the Middle East and Africa (ASMEA). On 12 April 2012, Bir and 30 other officers were taken in custody for their role in the 1997 military memorandum that forced the then Turkish government, led by the Refah Partisi (Welfare Party), to step down. On 11 September 2021, the General Staff Personnel Presidency reported to the Ankara 5th High Criminal Court, where the case was heard, that the administrative action was taken to demolish the 13 retired generals convicted in the February 28 trial. Thus, Çevik Bir was demoted. Çevik Bir, one of the generals who planned the process, said "In Turkey we have a marriage of Islam and democracy. (…) The child of this marriage is secularism. Now this child gets sick from time to time. The Turkish Armed Forces is the doctor which saves the child. Depending on how sick the kid is, we administer the necessary medicine to make sure the child recuperates". 
Distinctions United Nations Medal (1994) US Medal of Merit (1994) Turkish Armed Forces Medal of Distinguished Service (1995) German Medal of Honor (1996) Turkish Armed Forces Medal of Merit (1996) United Kingdom Distinguished Achievement Medal (1997) United Kingdom Distinguished Service Medal (1997) Jordanian Medal of Istihkak (1998) French Medal of Merit (1999) References 1939 births Military personnel from İzmir Living people Turkish Military Academy alumni Army War College (Turkey) alumni Turkish Army generals
5708
https://en.wikipedia.org/wiki/Collectivism%20%28disambiguation%29
Collectivism (disambiguation)
Collectivism is the type of social organization. Collectivism may also refer to: Bureaucratic collectivism, a theory of class society which is used to describe the Soviet Union under Joseph Stalin Collectivist anarchism, a socialist doctrine in which the workers own and manage the production Collectivism (art), art which is created by a group of people rather than an individual Communitarianism, a political position that emphasizes the importance of the community over the individual or attempts to integrate the two Corporatism, a political ideology in which groups, rather than individuals, are the building blocks of society See also Collective Collective farming, aka collectivization Collective security Collective ownership Collective agreement
5711
https://en.wikipedia.org/wiki/Nepeta
Nepeta
Nepeta is a genus of flowering plants in the family Lamiaceae. The genus name, from Latin (“catnip”), is reportedly in reference to Nepete, an ancient Etruscan city. There are about 250 species. The genus is native to Europe, Asia, and Africa, and has also naturalized in North America. Some members of this group are known as catnip or catmint because of their effect on house cats – the nepetalactone contained in some Nepeta species binds to the olfactory receptors of cats, typically resulting in temporary euphoria. Description Most of the species are herbaceous perennial plants, but some are annuals. They have sturdy stems with opposite heart-shaped, green to gray-green leaves. Nepeta plants are usually aromatic in foliage and flowers. The tubular flowers can be lavender, blue, white, pink, or lilac, and spotted with tiny lavender-purple dots. The flowers are located in verticillasters grouped on spikes; or the verticillasters are arranged in opposite cymes, racemes, or panicles – toward the tip of the stems. The calyx is tubular or campanulate, they are slightly curved or straight, and the limbs are often 2-lipped with five teeth. The lower lip is larger, with 3-lobes, and the middle lobe is the largest. The flowers have 4 hairless stamens that are nearly parallel, and they ascend under the upper lip of the corolla. Two stamen are longer and stamens of pistillate flowers are rudimentary. The style protrudes outside of the mouth of the flowers. The fruits are nutlets, which are oblong-ovoid, ellipsoid, ovoid, or obovoid in shape. The surfaces of the nutlets can be slightly ribbed, smooth or warty. Selected species Some species formerly classified as Nepeta are now in the genera Dracocephalum, Glechoma, and Calamintha . Species include: Nepeta adenophyta Hedge Nepeta agrestis Loisel. Nepeta alaghezi Pojark. Nepeta alatavica Lipsky Nepeta algeriensis Noë Nepeta amicorum Rech.f. Nepeta amoena Stapf Nepeta anamurensis Gemici & Leblebici Nepeta annua Pall. Nepeta apuleji Ucria Nepeta argolica Bory & Chaub. Nepeta assadii Jamzad Nepeta assurgens Hausskn. & Bornm. Nepeta astorensis Shinwari & Chaudhri Nepeta atlantica Ball Nepeta autraniana Bornm. Nepeta azurea R.Br. ex Benth. Nepeta badachschanica Kudrjasch. Nepeta bakhtiarica Rech.f. Nepeta ballotifolia Hochst. ex A.Rich. Nepeta balouchestanica Jamzad & Ingr. Nepeta barfakensis Rech.f. Nepeta baytopii Hedge & Lamond Nepeta bazoftica Jamzad Nepeta bellevii Prain Nepeta betonicifolia C.A.Mey. Nepeta binaloudensis Jamzad Nepeta bodeana Bunge Nepeta × boissieri Willk. Nepeta bokhonica Jamzad Nepeta bombaiensis Dalzell Nepeta bornmuelleri Hausskn. ex Bornm. Nepeta botschantzevii Czern. Nepeta brachyantha Rech.f. & Edelb. Nepeta bracteata Benth. Nepeta brevifolia C.A.Mey. Nepeta bucharica Lipsky Nepeta caerulea Aiton Nepeta caesarea Boiss. Nepeta campestris Benth. Nepeta camphorata Boiss. & Heldr. Nepeta × campylantha Rech.f. Nepeta cataria L. Nepeta cephalotes Boiss. Nepeta chionophila Boiss. & Hausskn. Nepeta ciliaris Benth. Nepeta cilicica Boiss. ex Benth. Nepeta clarkei Hook.f. Nepeta coerulescens Maxim. Nepeta concolor Boiss. & Heldr. ex Benth. Nepeta conferta Hedge & Lamond Nepeta congesta Fisch. & C.A.Mey. Nepeta connata Royle ex Benth. Nepeta consanguinea Pojark. Nepeta crinita Montbret & Aucher ex Benth. Nepeta crispa Willd. Nepeta curviflora Boiss. Nepeta cyanea Steven Nepeta cyrenaica Quézel & Zaffran Nepeta czegemensis Pojark. Nepeta daenensis Boiss. Nepeta deflersiana Schweinf. ex Hedge Nepeta densiflora Kar. & Kir. 
Nepeta dentata C.Y.Wu & S.J.Hsuan Nepeta denudata Benth. Nepeta dirmencii Yild. & Dinç Nepeta discolor Royle ex Benth. Nepeta distans Royle Nepeta duthiei Prain & Mukerjee Nepeta elliptica Royle ex Benth. Nepeta elymaitica Bornm. Nepeta erecta (Royle ex Benth.) Benth. Nepeta eremokosmos Rech.f. Nepeta eremophila Hausskn. & Bornm. Nepeta eriosphaera Rech.f. & Köie Nepeta eriostachya Benth. Nepeta ernesti-mayeri Diklic & V.Nikolic Nepeta everardii S.Moore Nepeta × faassenii Bergmans ex Stearn Nepeta flavida Hub.-Mor. Nepeta floccosa Benth. Nepeta foliosa Moris Nepeta fordii Hemsl. Nepeta formosa Kudrjasch. Nepeta freitagii Rech.f. Nepeta glechomifolia (Dunn) Hedge Nepeta gloeocephala Rech.f. Nepeta glomerata Montbret & Aucher ex Benth. Nepeta glomerulosa Boiss. Nepeta glutinosa Benth. Nepeta gontscharovii Kudrjasch. Nepeta govaniana (Wall. ex Benth.) Benth. Nepeta graciliflora Benth. Nepeta granatensis Boiss. Nepeta grandiflora M.Bieb. Nepeta grata Benth. Nepeta griffithii Hedge Nepeta heliotropifolia Lam. Nepeta hemsleyana Oliv. ex Prain Nepeta henanensis C.S.Zhu Nepeta hindostana (B.Heyne ex Roth) Haines Nepeta hispanica Boiss. & Reut. Nepeta hormozganica Jamzad Nepeta humilis Benth. Nepeta hymenodonta Boiss. Nepeta isaurica Boiss. & Heldr. ex Benth. Nepeta ispahanica Boiss. Nepeta italica L. Nepeta jakupicensis Micevski Nepeta jomdaensis H.W.Li Nepeta juncea Benth. Nepeta knorringiana Pojark. Nepeta koeieana Rech.f. Nepeta kokamirica Regel Nepeta kokanica Regel Nepeta komarovii E.A.Busch Nepeta kotschyi Boiss. Nepeta kurdica Hausskn. & Bornm. Nepeta kurramensis Rech.f. Nepeta ladanolens Lipsky Nepeta laevigata (D.Don) Hand.-Mazz. Nepeta lagopsis Benth. Nepeta lamiifolia Willd. Nepeta lamiopsis Benth. ex Hook.f. Nepeta lasiocephala Benth. Nepeta latifolia DC. Nepeta leucolaena Benth. ex Hook.f. Nepeta linearis Royle ex Benth. Nepeta lipskyi Kudrjasch. Nepeta longibracteata Benth. Nepeta longiflora Vent. Nepeta longituba Pojark. Nepeta ludlow-hewittii Blakelock Nepeta macrosiphon Boiss. Nepeta mahanensis Jamzad & M.Simmonds Nepeta manchuriensis S.Moore Nepeta mariae Regel Nepeta maussarifii Lipsky Nepeta melissifolia Lam. Nepeta membranifolia C.Y.Wu Nepeta menthoides Boiss. & Buhse Nepeta meyeri Benth. Nepeta micrantha Bunge Nepeta minuticephala Jamzad Nepeta mirzayanii Rech.f. & Esfand. Nepeta mollis Benth. Nepeta monocephala Rech.f. Nepeta monticola Kudr. Nepeta multibracteata Desf. Nepeta multicaulis Mukerjee Nepeta multifida L. Nepeta natanzensis Jamzad Nepeta nawarica Rech.f. Nepeta nepalensis Spreng. Nepeta nepetella L. Nepeta nepetellae Forssk. Nepeta nepetoides (Batt. ex Pit.) Harley Nepeta nervosa Royle ex Benth. Nepeta nuda L. Nepeta obtusicrena Boiss. & Kotschy ex Hedge Nepeta odorifera Lipsky Nepeta olgae Regel Nepeta orphanidea Boiss. Nepeta pabotii Mouterde Nepeta paktiana Rech.f. Nepeta pamirensis Franch. Nepeta parnassica Heldr. & Sart. Nepeta paucifolia Mukerjee Nepeta persica Boiss. Nepeta petraea Benth. Nepeta phyllochlamys P.H.Davis Nepeta pilinux P.H.Davis Nepeta podlechii Rech.f. Nepeta podostachys Benth. Nepeta pogonosperma Jamzad & Assadi Nepeta polyodonta Rech.f. Nepeta praetervisa Rech.f. Nepeta prattii H.Lév. Nepeta prostrata Benth. Nepeta pseudokokanica Pojark. Nepeta pubescens Benth. Nepeta pungens (Bunge) Benth. Nepeta racemosa Lam. Nepeta raphanorhiza Benth. Nepeta rechingeri Hedge Nepeta rivularis Bornm. Nepeta roopiana Bordz. Nepeta rtanjensis Diklic & Milojevic Nepeta rubella A.L.Budantzev Nepeta rugosa Benth. 
Nepeta saccharata Bunge Nepeta santoana Popov Nepeta saturejoides Boiss. Nepeta schiraziana Boiss. Nepeta schmidii Rech.f. Nepeta schugnanica Lipsky Nepeta scordotis L. Nepeta septemcrenata Ehrenb. ex Benth. Nepeta sessilis C.Y.Wu & S.J.Hsuan Nepeta shahmirzadensis Assadi & Jamzad Nepeta sheilae Hedge & R.A.King Nepeta sibirica L. Nepeta sorgerae Hedge & Lamond Nepeta sosnovskyi Askerova Nepeta souliei H.Lév. Nepeta spathulifera Benth. Nepeta sphaciotica P.H.Davis Nepeta spruneri Boiss. Nepeta stachyoides Coss. ex Batt. Nepeta staintonii Hedge Nepeta stenantha Kotschy & Boiss. Nepeta stewartiana Diels Nepeta straussii Hausskn. & Bornm. Nepeta stricta (Banks & Sol.) Hedge & Lamond Nepeta suavis Stapf Nepeta subcaespitosa Jehan Nepeta subhastata Regel Nepeta subincisa Benth. Nepeta subintegra Maxim. Nepeta subsessilis Maxim. Nepeta sudanica F.W.Andrews Nepeta sulfuriflora P.H.Davis Nepeta sulphurea C. Koch Nepeta sungpanensis C.Y.Wu Nepeta supina Steven Nepeta taxkorganica Y.F.Chang Nepeta tenuiflora Diels Nepeta tenuifolia Benth. Nepeta teucriifolia Willd. Nepeta teydea Webb & Berthel. Nepeta tibestica Maire Nepeta × tmolea Boiss. Nepeta trachonitica Post Nepeta transiliensis Pojark. Nepeta trautvetteri Boiss. & Buhse Nepeta trichocalyx Greuter & Burdet Nepeta tuberosa L. Nepeta tytthantha Pojark. Nepeta uberrima Rech.f. Nepeta ucranica L. Nepeta veitchii Duthie Nepeta velutina Pojark. Nepeta viscida Boiss. Nepeta vivianii (Coss.) Bég. & Vacc. Nepeta wettsteinii Heinr.Braun Nepeta wilsonii Duthie Nepeta woodiana Hedge Nepeta yanthina Franch. Nepeta yesoensis (Franch. & Sav.) B.D.Jacks. Nepeta zandaensis H.W.Li Nepeta zangezura Grossh. Gallery Uses Cultivation Some Nepeta species are cultivated as ornamental plants. They can be drought tolerant – water conserving, often deer repellent, with long bloom periods from late spring to autumn. Some species also have repellent properties to insect pests, including aphids and squash bugs, when planted in a garden. Nepeta species are used as food plants by the larvae of some Lepidoptera (butterfly and moth) species including Coleophora albitarsella, and as nectar sources for pollinators, such as honey bees and hummingbirds. Selected ornamental species Nepeta cataria (catnip, catswort) – the "true catnip", cultivated as an ornamental plant, has become an invasive species in some habitats. Nepeta grandiflora (giant catmint, Caucasus catmint) – lusher than true catnip and has dark green leaves and dark blue flowers. Nepeta × faassenii (garden catmint) – a hybrid of garden source with gray-green foliage and lavender flowers. It is drought-tolerant and deer-resistant. The cultivar 'Walker's Low' was named Perennial of the Year for 2007 by the Perennial Plant Association. Nepeta racemosa (raceme catnip) – commonly used in landscaping. It is hardy, rated for USDA hardiness zone 5b. References Further reading External links GRIN Species Records of Nepeta [http://www.efloras.org/browse.aspx?flora_id=110&start_taxon_id=122138 Flora of Nepal: Nepeta'] Drugs.com: Catnip "Nepetalactone: What is in catnip anyway?" HowStuffWorks, Inc.: How does catnip work? Sciencedaily.com: "Catnip Repels Mosquitoes More Effectively Than DEET" – reported at the 2001 American Chemical Society meeting''. Lamiaceae genera Perennial plants Cat attractants Drought-tolerant plants Herbs Medicinal plants Garden plants of Africa Garden plants of Asia Garden plants of Europe Taxa named by Carl Linnaeus
5714
https://en.wikipedia.org/wiki/Cornish%20Nationalist%20Party
Cornish Nationalist Party
The Cornish Nationalist Party (CNP; ) is a political party, founded by Dr James Whetter, who campaigned for independence for Cornwall. History It was formed by people who left Cornwall's main nationalist party Mebyon Kernow on 28 May 1975, but it is no longer for independence. A separate party with a similar name (Cornish National Party) existed from 1969. The split with Mebyon Kernow was based on the same debate that was occurring in most of the other political parties campaigning for autonomy from the United Kingdom at the time (such as the Scottish National Party and Plaid Cymru): whether to be a centre-left party, appealing to the electorate on a social democratic line, or whether to appeal emotionally on a centre-right cultural line. Originally, another subject of the split was whether to embrace devolution as a first step to full independence (or as the sole step if this was what the electorate wished) or for it to be "all or nothing". The CNP essentially represented a more right-wing outlook from those who disagree that economic arguments were more likely to win votes than cultural. The CNP worked to preserve the Celtic identity of Cornwall and improve its economy, and encouraged links with Cornish people overseas and with other regions with distinct identities. It also gave support to the Cornish language and commemorated Thomas Flamank, a leader of the Cornish Rebellion in 1497, at an annual ceremony at Bodmin on 27 June each year. The CNP was for some time seen as more of a pressure group, as it did not put up candidates for any elections, although its visibility and influence within Cornwall is negligible. In April 2009, a news story reported that the CNP had re-formed following a conference in Bodmin; however, it did not contest any elections that year. Dr Whetter was the founder and editor of the CNP quarterly journal, The Cornish Banner (An Baner Kernewek), within the actions of the Roseland Institute. Since his death in 2018 the CNP has been led by Androw Hawke. A newspaper article and a revamp of the party website in October 2014 state that the party is now to contest elections once more. John Le Bretton, vice-chairman of the party, said: "The CNP supports the retention of Cornwall Council as a Cornwall-wide authority running Cornish affairs and we call for the British government in Westminster to devolve powers to the council so that decisions affecting Cornwall can be made in Cornwall". The CNP polled 227 (0.4) votes in Truro during the 1979 UK General Election, 364 (0.67) in North Cornwall in the 1983 UK General Election, and 1,892 (1.0) at the European Parliament elections in the Cornwall and Plymouth constituency in 1984. The candidate on all three occasions was the founder and first leader of the CNP, Dr James Whetter. The CNP had one parish councillor, CNP leader Androw Hawke who was elected to Polperro Community Council for the second time on 4 May 2017. The reformed party was registered with the Electoral Commission in 2014, but ceased to be registered in 2017. Policy The Policy Statement and Programme of the CNP were published in 1975 and included the following points: To look after the interests of Cornish people. To preserve and enhance the identity of Kernow, an essentially Celtic identity. To achieve self-government for Kernow. Total sovereignty will be exercised by the Cornish state over the land within its traditional border. Kernow's official language will be Cornish. Better job prospects for Cornish people. 
Reduction of unemployment to an acceptable level (2.5%). The protection of the self-employed and small businesses in Cornwall. Cheaper housing and priority for Cornish people. Discouragement of second homes. Controls over tourism. The Cornish state will have control over the number and nature of immigrants. The establishment of a Cornish economic department to aid the basic industries of farming, fishing, china clay and mining and secondary industries developing from these. Improved transport facilities in Cornwall with greater scope for private enterprise to operate. Existing medical and welfare services for Cornish people will be developed and improved. Protection of Cornish natural resources, including offshore resources. Conservation of the Cornish landscape and the unique Cornish environment, culture and identity. Courses on Cornish language and history should be made available in schools for those who want them. Recognition of the Cornish flag of St Piran and the retention of the Tamar border with England. The rule of law will be upheld by the Cornish state and the judiciary will be separate from the legislative and executive functions of the state. The Cornish state will create a home defence force, linked to local communities and civil units of administration. Young Cornish people will be given instruction as to world religions and secular philosophies but the greatest attention will be given to Christianity and early Celtic beliefs. A far greater say in government for Cornish people (by referendums if necessary) and the decentralisation of considerable powers to a Cornish nation within a united Europe - special links being established with our Celtic brothers and sisters in Scotland, Ireland, Isle of Man, Wales and Brittany. The party's policies include the following: Calling for more legislative powers to be given to Cornwall Council. The authority should effectively become the Cornish government, with town and parish councils acting as local government. Cornwall council should have a reduction in councillors, with standardisation of electoral areas and constituencies in throughout Cornwall. The Westminster government should appoint a Minister for Cornwall and confirm there will be no further plans to have any parliamentary constituency covering part of Cornwall and Devon. Image There have been perceived image problems as the CNP has been seen as similarly styled to the BNP and NF (the nativist British National Party and National Front), and during the 1970s letters were published in the party magazine The Cornish Banner (An Baner Kernewek) sympathetic to the NF and critical of "Zionist" politicians. The CNP also formed a controversial uniformed wing known as the Greenshirts led by the CNP Youth Movement leader and Public Relations Officer, Wallace Simmons who also founded the pro-NF Cornish Front. (Although the CNP and CF were sympathetic to Irish republicanism while the NF was supportive of Ulster loyalism, with the exception of leading NF figures like Patrick Harrington, who refused to condemn the IRA during an interview for the Channel 4 TV documentary Disciples of Chaos). 
See also List of topics related to Cornwall Cornish self-government movement Constitutional status of Cornwall References External links The CNP at the Roseland Institute The Cornish Banner website 2017 Spectator magazine article about Cornish Nationalism, Mebyon Kernow and the CNP Political parties established in 1975 1975 establishments in the United Kingdom Home rule in the United Kingdom Conservative parties in the United Kingdom Politics of Cornwall Cornish nationalist parties Right-wing parties in the United Kingdom
5715
https://en.wikipedia.org/wiki/Cryptanalysis
Cryptanalysis
Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to analyze") refers to the process of analyzing information systems in order to understand hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown. In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation. Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization. Overview In encryption, confidential information (called the "plaintext") is sent securely to a recipient by the sender first converting it into an unreadable form ("ciphertext") using an encryption algorithm. The ciphertext is sent through an insecure channel to the recipient. The recipient decrypts the ciphertext by applying an inverse decryption algorithm, recovering the plaintext. To decrypt the ciphertext, the recipient requires a secret knowledge from the sender, usually a string of letters, numbers, or bits, called a cryptographic key. The concept is that even if an unauthorized person gets access to the ciphertext during transmission, without the secret key they cannot convert it back to plaintext. Encryption has been used throughout history to send important military, diplomatic and commercial messages, and today is very widely used in computer networking to protect email and internet communication. The goal of cryptanalysis is for a third party, a cryptanalyst, to gain as much information as possible about the original ("plaintext"), attempting to "break" the encryption to read the ciphertext and learning the secret key so future messages can be decrypted and read. A mathematical technique to do this is called a cryptographic attack. Cryptographic attacks can be characterized in a number of ways: Amount of information available to the attacker Attacks can be classified based on what type of information the attacker has available. As a basic starting point it is normally assumed that, for the purposes of analysis, the general algorithm is known; this is Shannon's Maxim "the enemy knows the system" – in its turn, equivalent to Kerckhoffs' principle. This is a reasonable assumption in practice – throughout history, there are countless examples of secret algorithms falling into wider knowledge, variously through espionage, betrayal and reverse engineering. (And on occasion, ciphers have been broken through pure deduction; for example, the German Lorenz cipher and the Japanese Purple code, and a variety of classical schemes): Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or codetexts. Known-plaintext: the attacker has a set of ciphertexts to which they know the corresponding plaintext. 
Chosen-plaintext (chosen-ciphertext): the attacker can obtain the ciphertexts (plaintexts) corresponding to an arbitrary set of plaintexts (ciphertexts) of their own choosing. Adaptive chosen-plaintext: like a chosen-plaintext attack, except the attacker can choose subsequent plaintexts based on information learned from previous encryptions, similarly to the Adaptive chosen ciphertext attack. Related-key attack: Like a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys. The keys are unknown, but the relationship between them is known; for example, two keys that differ in one bit. Computational resources required Attacks can also be characterised by the resources they require. Those resources include: Time – the number of computation steps (e.g., test encryptions) which must be performed. Memory – the amount of storage required to perform the attack. Data – the quantity and type of plaintexts and ciphertexts required for a particular approach. It is sometimes difficult to predict these quantities precisely, especially when the attack is not practical to actually implement for testing. But academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks' difficulty, saying, for example, "SHA-1 collisions now 2^52." Bruce Schneier notes that even computationally impractical attacks can be considered breaks: "Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute force. Never mind that brute-force might require 2^128 encryptions; an attack requiring 2^110 encryptions would be considered a break...simply put, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised." Partial breaks The results of cryptanalysis can also vary in usefulness. Cryptographer Lars Knudsen (1998) classified various types of attack on block ciphers according to the amount and quality of secret information that was discovered: Total break – the attacker deduces the secret key. Global deduction – the attacker discovers a functionally equivalent algorithm for encryption and decryption, but without learning the key. Instance (local) deduction – the attacker discovers additional plaintexts (or ciphertexts) not previously known. Information deduction – the attacker gains some Shannon information about plaintexts (or ciphertexts) not previously known. Distinguishing algorithm – the attacker can distinguish the cipher from a random permutation. Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem, so it's possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions. In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts. 
It also might require the attacker be able to do things many real-world attackers can't: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the cryptosystem imperfect but too little to be useful to real-world attackers. Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system. History Cryptanalysis has coevolved together with cryptography, and the contest can be traced through the history of cryptography—new ciphers being designed to replace old broken designs, and new cryptanalytic techniques invented to crack the improved schemes. In practice, they are viewed as two sides of the same coin: secure cryptography requires design against possible cryptanalysis. Classical ciphers Although the actual word "cryptanalysis" is relatively recent (it was coined by William Friedman in 1920), methods for breaking codes and ciphers are much older. David Kahn notes in The Codebreakers that Arab scholars were the first people to systematically document cryptanalytic methods. The first known recorded explanation of cryptanalysis was given by Al-Kindi (c. 801–873, also known as "Alkindus" in Europe), a 9th-century Arab polymath, in Risalah fi Istikhraj al-Mu'amma (A Manuscript on Deciphering Cryptographic Messages). This treatise contains the first description of the method of frequency analysis. Al-Kindi is thus regarded as the first codebreaker in history. His breakthrough work was influenced by Al-Khalil (717–786), who wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. Frequency analysis is the basic tool for breaking most classical ciphers. In natural languages, certain letters of the alphabet appear more often than others; in English, "E" is likely to be the most common letter in any sample of plaintext. Similarly, the digraph "TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide these statistics. For example, in a simple substitution cipher (where each letter is simply replaced with another), the most frequent letter in the ciphertext would be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains. Al-Kindi's invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers was the most significant cryptanalytic advance until World War II. Al-Kindi's Risalah fi Istikhraj al-Mu'amma described the first cryptanalytic techniques, including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and most importantly, gave the first descriptions of frequency analysis. He also covered methods of encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. An important contribution of Ibn Adlan (1187–1268) concerned the sample size needed for frequency analysis to be effective. In Europe, Italian scholar Giambattista della Porta (1535–1615) was the author of a seminal work on cryptanalysis, De Furtivis Literarum Notis. 
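The frequency-counting step described above is mechanical enough to show in a few lines of code. The following minimal Python sketch is illustrative only and is not drawn from the historical material: the ciphertext string is an arbitrary made-up example, and the letter ranking is an approximation of English frequencies. It tallies letter counts in a ciphertext and pairs the most common ciphertext letters with the most common English letters to form a rough first guess at a monoalphabetic substitution key.

from collections import Counter
import string

# Approximate ordering of English letters from most to least frequent.
ENGLISH_BY_FREQUENCY = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def frequency_analysis(ciphertext):
    """Rank ciphertext letters by frequency, the first step in attacking
    a simple (monoalphabetic) substitution cipher."""
    letters = [c for c in ciphertext.upper() if c in string.ascii_uppercase]
    counts = Counter(letters)
    ranked = [letter for letter, _ in counts.most_common()]
    # Pair the most frequent ciphertext letters with the most frequent English
    # letters to produce an initial, rough guess at the substitution key.
    guess = {c: p for c, p in zip(ranked, ENGLISH_BY_FREQUENCY)}
    return counts, guess

counts, guess = frequency_analysis("LIVITCSWPIYVEWHEVSRIQMXLEYVEOIEWHRXEXIP")
print(counts.most_common(3))  # the top ciphertext letter is a likely candidate for "E"
print(guess)

In practice such an initial guess is only a starting point; the analyst refines it using digraph statistics (such as the frequent pair "TH") and trial decipherments, exactly as described above.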
Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. For example, in England in 1587, Mary, Queen of Scots was tried and executed for treason as a result of her involvement in three plots to assassinate Elizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered by Thomas Phelippes. In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96). For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable—"the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher. During World War I, inventors in several countries developed rotor cipher machines such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system. Ciphers from World War I and World War II In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefitted enormously from their joint success cryptanalysis of the German ciphers – including the Enigma machine and the Lorenz cipher – and Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything between shortening the end of the European war by up to two years, to determining the eventual result. The war in the Pacific was similarly helped by 'Magic' intelligence. Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham, quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at the war's end as describing Ultra intelligence as having been "decisive" to Allied victory. Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended. In practice, frequency analysis relies as much on linguistic knowledge as it does on statistics, but as ciphers became more complex, mathematics became more important in cryptanalysis. This change was particularly evident before and during World War II, where efforts to crack Axis ciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era with the Polish Bomba device, the British Bombe, the use of punched card equipment, and in the Colossus computers – the first electronic digital computers to be controlled by a program. Indicator With reciprocal machine ciphers such as the Lorenz cipher and the Enigma machine used by Nazi Germany during World War II, each message had its own key. Usually, the transmitting operator informed the receiving operator of this message key by transmitting some plaintext and/or ciphertext before the enciphered message. This is termed the indicator, as it indicates to the receiving operator how to set his machine to decipher the message. 
Poorly designed and implemented indicator systems allowed first Polish cryptographers and then the British cryptographers at Bletchley Park to break the Enigma cipher system. Similar poor indicator systems allowed the British to identify depths that led to the diagnosis of the Lorenz SZ40/42 cipher system, and the comprehensive breaking of its messages without the cryptanalysts seeing the cipher machine. Depth Sending two or more messages with the same key is an insecure process. To a cryptanalyst the messages are then said to be "in depth." This may be detected by the messages having the same indicator by which the sending operator informs the receiving operator about the key generator initial settings for the message. Generally, the cryptanalyst may benefit from lining up identical enciphering operations among a set of messages. For example, the Vernam cipher enciphers by bit-for-bit combining plaintext with a long key using the "exclusive or" operator, which is also known as "modulo-2 addition" (symbolized by ⊕ ): Plaintext ⊕ Key = Ciphertext Deciphering combines the same key bits with the ciphertext to reconstruct the plaintext: Ciphertext ⊕ Key = Plaintext (In modulo-2 arithmetic, addition is the same as subtraction.) When two such ciphertexts are aligned in depth, combining them eliminates the common key, leaving just a combination of the two plaintexts: Ciphertext1 ⊕ Ciphertext2 = Plaintext1 ⊕ Plaintext2 The individual plaintexts can then be worked out linguistically by trying probable words (or phrases), also known as "cribs," at various locations; a correct guess, when combined with the merged plaintext stream, produces intelligible text from the other plaintext component: (Plaintext1 ⊕ Plaintext2) ⊕ Plaintext1 = Plaintext2 The recovered fragment of the second plaintext can often be extended in one or both directions, and the extra characters can be combined with the merged plaintext stream to extend the first plaintext. Working back and forth between the two plaintexts, using the intelligibility criterion to check guesses, the analyst may recover much or all of the original plaintexts. (With only two plaintexts in depth, the analyst may not know which one corresponds to which ciphertext, but in practice this is not a large problem.) When a recovered plaintext is then combined with its ciphertext, the key is revealed: Plaintext1 ⊕ Ciphertext1 = Key Knowledge of a key then allows the analyst to read other messages encrypted with the same key, and knowledge of a set of related keys may allow cryptanalysts to diagnose the system used for constructing them. Development of modern cryptography Governments have long recognized the potential benefits of cryptanalysis for intelligence, both military and diplomatic, and established dedicated organizations devoted to breaking the codes and ciphers of other nations, for example, GCHQ and the NSA, organizations which are still very active today. Even though computation was used to great effect in the cryptanalysis of the Lorenz cipher and other systems during World War II, it also made possible new methods of cryptography orders of magnitude more complex than ever before. Taken as a whole, modern cryptography has become much more impervious to cryptanalysis than the pen-and-paper systems of the past, and now seems to have the upper hand against pure cryptanalysis. 
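Before moving on, the depth technique described earlier in this section can be illustrated with a short worked sketch. The Python example below is illustrative only: the key stream, the two messages, and the crib are all invented for the purpose of the demonstration. It shows that XORing two ciphertexts enciphered with the same key stream cancels the key, and that dragging a probable word ("crib") along the merged stream exposes fragments of the other plaintext.

def xor_bytes(a, b):
    """Bitwise XOR of two byte strings, truncated to the shorter length (Vernam-style combination)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two plaintexts enciphered "in depth" with the SAME key stream (all values invented).
key = bytes.fromhex("6b7f3a91c2d4e5f60718293a4b5c6d7e8f90a1b2")
p1 = b"ATTACK AT DAWN ON MO"
p2 = b"RETREAT TO THE NORTH"
c1, c2 = xor_bytes(p1, key), xor_bytes(p2, key)

# The common key cancels out: C1 xor C2 equals P1 xor P2.
merged = xor_bytes(c1, c2)
assert merged == xor_bytes(p1, p2)

# Crib dragging: slide a probable word along the merged stream; wherever the
# guess is correct, readable text from the other plaintext appears.
crib = b"ATTACK"
for offset in range(len(merged) - len(crib) + 1):
    fragment = xor_bytes(merged[offset:offset + len(crib)], crib)
    if all(32 <= byte < 127 for byte in fragment):  # keep printable candidates only
        print(offset, fragment)

# Once a full plaintext is recovered, XORing it with its ciphertext reveals the key stream.
assert xor_bytes(p1, c1) == key[:len(p1)]

Recovering the key stream in this way is exactly the step "Plaintext1 ⊕ Ciphertext1 = Key" given above.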
The historian David Kahn points to increased opportunities for interception, bugging, side channel attacks, and quantum computers as replacements for the traditional means of cryptanalysis. In 2010, former NSA technical director Brian Snow said that both academic and government cryptographers are "moving very slowly forward in a mature field." However, any postmortems for cryptanalysis may be premature. While the effectiveness of cryptanalytic methods employed by intelligence agencies remains unknown, many serious attacks against both academic and practical cryptographic primitives have been published in the modern era of computer cryptography: The block cipher Madryga, proposed in 1984 but not widely used, was found to be susceptible to ciphertext-only attacks in 1998. FEAL-4, proposed as a replacement for the DES standard encryption algorithm but not widely used, was demolished by a spate of attacks from the academic community, many of which are entirely practical. The A5/1, A5/2, CMEA, and DECT systems used in mobile and wireless phone technology can all be broken in hours, minutes, or even in real time using widely available computing equipment. Brute-force keyspace search has broken some real-world ciphers and applications, including single-DES (see EFF DES cracker), 40-bit "export-strength" cryptography, and the DVD Content Scrambling System. In 2001, Wired Equivalent Privacy (WEP), a protocol used to secure Wi-Fi wireless networks, was shown to be breakable in practice because of a weakness in the RC4 cipher and aspects of the WEP design that made related-key attacks practical. WEP was later replaced by Wi-Fi Protected Access. In 2008, researchers conducted a proof-of-concept break of SSL using weaknesses in the MD5 hash function and certificate issuer practices that made it possible to exploit collision attacks on hash functions. The certificate issuers involved changed their practices to prevent the attack from being repeated. Thus, while the best modern ciphers may be far more resistant to cryptanalysis than the Enigma, cryptanalysis and the broader field of information security remain quite active. Symmetric ciphers Boomerang attack Brute-force attack Davies' attack Differential cryptanalysis Impossible differential cryptanalysis Improbable differential cryptanalysis Integral cryptanalysis Linear cryptanalysis Meet-in-the-middle attack Mod-n cryptanalysis Related-key attack Sandwich attack Slide attack XSL attack Asymmetric ciphers Asymmetric cryptography (or public-key cryptography) is cryptography that relies on using two (mathematically related) keys: one private, and one public. Such ciphers invariably rely on "hard" mathematical problems as the basis of their security, so an obvious point of attack is to develop methods for solving the problem. The security of two-key cryptography depends on mathematical questions in a way that single-key cryptography generally does not, and conversely links cryptanalysis to wider mathematical research in a new way. Asymmetric schemes are designed around the (conjectured) difficulty of solving various mathematical problems. If an improved algorithm can be found to solve the problem, then the system is weakened. For example, the security of the Diffie–Hellman key exchange scheme depends on the difficulty of calculating the discrete logarithm. In 1983, Don Coppersmith found a faster way to find discrete logarithms (in certain groups), thereby requiring cryptographers to use larger groups (or different types of groups). 
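To make concrete why the size of the underlying group matters for schemes such as Diffie–Hellman, the sketch below implements baby-step giant-step, a generic square-root-time discrete-logarithm algorithm (it is not Coppersmith's specialised method, and the parameters are deliberately toy-sized values chosen for illustration). Because the work grows roughly as the square root of the group size, doubling the bit length of the group squares the attacker's effort, which is one reason practical deployments use very large groups.

from math import isqrt

def discrete_log_bsgs(g, h, p):
    """Baby-step giant-step: find x with pow(g, x, p) == h in about sqrt(p) operations.
    Feasible only for small moduli, which is why real systems use very large groups."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j mod p
    step = pow(g, -m, p)                         # g^(-m) mod p (modular inverse, Python 3.8+)
    gamma = h
    for i in range(m):                           # giant steps: h * g^(-i*m) mod p
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * step) % p
    return None

# Toy parameters, far too small for real use: prime modulus p, base g, secret exponent x.
p, g, x = 104729, 5, 38422
h = pow(g, x, p)                                  # the "public" value
x_found = discrete_log_bsgs(g, h, p)
print(x_found, pow(g, x_found, p) == h)           # a valid exponent is recovered in roughly 324 steps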
RSA's security depends (in part) upon the difficulty of integer factorization – a breakthrough in factoring would impact the security of RSA. In 1980, one could factor a difficult 50-digit number at an expense of 10^12 elementary computer operations. By 1984 the state of the art in factoring algorithms had advanced to a point where a 75-digit number could be factored in 10^12 operations. Advances in computing technology also meant that the operations could be performed much faster, too. Moore's law predicts that computer speeds will continue to increase. Factoring techniques may continue to do so as well, but will most likely depend on mathematical insight and creativity, neither of which has ever been successfully predicted. 150-digit numbers of the kind once used in RSA have been factored. The effort was greater than that described above, but was not unreasonable on fast modern computers. By the start of the 21st century, 150-digit numbers were no longer considered a large enough key size for RSA. Numbers with several hundred digits were still considered too hard to factor in 2005, though methods will probably continue to improve over time, requiring key size to keep pace or other methods such as elliptic curve cryptography to be used. Another distinguishing feature of asymmetric schemes is that, unlike attacks on symmetric cryptosystems, any cryptanalysis has the opportunity to make use of knowledge gained from the public key. Attacking cryptographic hash systems Birthday attack Hash function security summary Rainbow table Side-channel attacks Black-bag cryptanalysis Man-in-the-middle attack Power analysis Replay attack Rubber-hose cryptanalysis Timing analysis Quantum computing applications for cryptanalysis Quantum computers, which are still in the early phases of research, have potential use in cryptanalysis. For example, Shor's algorithm could factor large numbers in polynomial time, in effect breaking some commonly used forms of public-key encryption. By using Grover's algorithm on a quantum computer, brute-force key search can be made quadratically faster. However, this could be countered by doubling the key length. See also Economics of security Global surveillance Information assurance, a term for information security often used in government Information security, the overarching goal of most cryptography National Cipher Challenge Security engineering, the design of applications and protocols Security vulnerability; vulnerabilities can include cryptographic or other flaws Topics in cryptography Zendian Problem Historic cryptanalysts Conel Hugh O'Donel Alexander Charles Babbage Lambros D. Callimahos Joan Clarke Alastair Denniston Agnes Meyer Driscoll Elizebeth Friedman William F. Friedman Meredith Gardner Friedrich Kasiski Al-Kindi Dilly Knox Solomon Kullback Marian Rejewski Joseph Rochefort, whose contributions affected the outcome of the Battle of Midway Frank Rowlett Abraham Sinkov Giovanni Soro, the Renaissance's first outstanding cryptanalyst John Tiltman Alan Turing William T. Tutte John Wallis – 17th-century English mathematician William Stone Weedon – worked with Fredson Bowers in World War II Herbert Yardley References Citations Sources Ibrahim A. Al-Kadi, "The origins of cryptology: The Arab contributions", Cryptologia, 16(2) (April 1992) pp. 97–126. Friedrich L. Bauer: "Decrypted Secrets". Springer 2002. Helen Fouché Gaines, "Cryptanalysis", 1939, Dover. David Kahn, "The Codebreakers – The Story of Secret Writing", 1967. Lars R. Knudsen: Contemporary Block Ciphers. 
Lectures on Data Security 1998: 105–126. Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, 1966. Christopher Swenson, Modern Cryptanalysis: Techniques for Advanced Code Breaking. Friedman, William F., Military Cryptanalysis, Part I. Friedman, William F., Military Cryptanalysis, Part II. Friedman, William F., Military Cryptanalysis, Part III, Simpler Varieties of Aperiodic Substitution Systems. Friedman, William F., Military Cryptanalysis, Part IV, Transposition and Fractionating Systems. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I, Volume 1. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I, Volume 2. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II, Volume 1. Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II, Volume 2. Transcript of a lecture given by Prof. Tutte at the University of Waterloo. Further reading External links Basic Cryptanalysis (files contain a 5-line header that has to be removed first) Distributed Computing Projects List of tools for cryptanalysis on modern cryptography Simon Singh's crypto corner The National Museum of Computing UltraAnvil tool for attacking simple substitution ciphers How Alan Turing Cracked The Enigma Code (Imperial War Museums)
5716
https://en.wikipedia.org/wiki/Chicano
Chicano
Chicano (masculine form) or Chicana (feminine form) is an ethnic identity for Mexican Americans who have a non-Anglo self-image, embracing their Mexican Native ancestry. Chicano was originally a classist and racist slur used toward low-income Mexicans that was reclaimed in the 1940s among youth who belonged to the Pachuco and Pachuca subculture. In the 1960s, Chicano was widely reclaimed in the building of a movement toward political empowerment, ethnic solidarity, and pride in being of indigenous descent (with many using the Nahuatl language or names). Chicano developed its own meaning separate from Mexican American identity. Youth in barrios rejected cultural assimilation into whiteness and embraced their own identity and worldview as a form of empowerment and resistance. The community forged an independent political and cultural movement, sometimes working alongside the Black power movement. The Chicano Movement faltered by the mid-1970s as a result of external and internal pressures. It was under state surveillance, infiltration, and repression by U.S. government agencies, informants, and agent provocateurs, such as through COINTELPRO. The Chicano Movement also had a fixation on masculine pride and machismo that fractured the community through sexism toward Chicanas and homophobia toward queer Chicana/os. In the 1980s, assimilation and economic mobility motivated many to embrace Hispanic identity in an era of conservatism. The term Hispanic emerged from a collaboration between the U.S. government and Mexican-American political elites in the Hispanic Caucus of Congress. They used the term to identify themselves and the community with mainstream American culture, depart from Chicanismo, and distance themselves from what they perceived as the "militant" Black Caucus. At the grassroots level, Chicana/os continued to build the feminist, gay and lesbian, and anti-apartheid movements, which kept the identity politically relevant. After a decade of Hispanic dominance, Chicana/o student activism in the early 1990s recession and the anti-Gulf War movement revived the identity with a demand to expand Chicana/o studies programs. Chicanas were active at the forefront, despite facing critiques from "movement loyalists", as they did in the Chicano Movement. Chicana feminists addressed employment discrimination, environmental racism, healthcare, sexual violence, and exploitation in their communities and in solidarity with the Third World. Chicanas worked to "liberate her entire people"; not to oppress men, but to be equal partners in the movement. Xicanisma, coined by Ana Castillo in 1994, called for Chicana/os to "reinsert the forsaken feminine into our consciousness", to embrace one's Indigenous roots, and support Indigenous sovereignty. In the 2000s, earlier traditions of anti-imperialism in the Chicano Movement were expanded. Building solidarity with undocumented immigrants became more important, despite issues of legal status and economic competitiveness sometimes maintaining distance between groups. U.S. foreign interventions abroad were connected with domestic issues concerning the rights of undocumented immigrants in the United States. Chicano/a consciousness increasingly became transnational and transcultural, thinking beyond and bridging with communities over political borders. The identity was renewed based on Indigenous and decolonial consciousness, cultural expression, resisting gentrification, defense of immigrants, and the rights of women and queer people. 
Xicanx identity also emerged in the 2010s, based on the Chicana feminist intervention of Xicanisma. Etymology The etymology of the term Chicano is the subject of some debate by historians. Some believe Chicano is a Spanish language derivative of an older Nahuatl word Mexitli ("Meh-shee-tlee"). Mexitli formed part of the expression Huitzilopochtlil Mexitli—a reference to the historic migration of the Mexica people from their homeland of Aztlán to the Valley of Mexico. Mexitli is the root of the word Mexica, which refers to the Mexica people, and its singular form Mexihcatl. The x in Mexihcatl represents an /ʃ/ or sh sound in both Nahuatl and early modern Spanish, while the glottal stop in the middle of the Nahuatl word disappeared. The word Chicano may derive from the loss of the initial syllable of Mexicano (Mexican). According to Villanueva, "given that the velar (x) is a palatal phoneme (S) with the spelling (sh)," in accordance with the Indigenous phonological system of the Mexicas ("Meshicas"), it would become "Meshicano" or "Mechicano." In this explanation, Chicano comes from the "xicano" in "Mexicano." Some Chicanos replace the Ch with the letter X, or Xicano, to reclaim the Nahuatl sh sound that the Spanish did not have a letter for and marked with the letter "x". The first two syllables of Xicano are therefore in Nahuatl while the last syllable is Castilian. In Mexico's Indigenous regions, Indigenous people refer to members of the non-indigenous majority as mexicanos, referring to the modern nation of Mexico. Among themselves, the speaker identifies by their (village or tribal) identity, such as Mayan, Zapotec, Mixtec, Huastec, or any of the other hundreds of indigenous groups. A newly emigrated Nahuatl speaker in an urban center might have referred to his cultural relatives in this country, different from himself, as mexicanos, shortened to Chicanos or Xicanos. Usage of terms Early recorded use The town of Chicana was shown on the Gutiérrez 1562 New World map near the mouth of the Colorado River, and is probably pre-Columbian in origin. The town was again included on Desegno del Discoperto Della Nova Franza, a 1566 French map by Paolo Forlani. Roberto Cintli Rodríguez places the location of Chicana at the mouth of the Colorado River, near present-day Yuma, Arizona. An 18th-century map of the Nayarit Missions used the name Xicana for a town near the same location of Chicana, which is considered to be the oldest recorded usage of that term. A gunboat, the Chicana, was sold in 1857 to Jose Maria Carvajal to ship arms on the Rio Grande. The King and Kenedy firm submitted a voucher to the Joint Claims Commission of the United States in 1870 to cover the costs of this gunboat's conversion from a passenger steamer. No explanation for the boat's name is known. The Chicano poet and writer Tino Villanueva traced the first documented use of the term as an ethnonym to 1911, as referenced in a then-unpublished essay by University of Texas anthropologist José Limón. Linguists Edward R. Simmen and Richard F. Bauerle report the use of the term in an essay by Mexican-American writer Mario Suárez, published in the Arizona Quarterly in 1947. There is ample literary evidence to substantiate that Chicano is a long-standing endonym, as a large body of Chicano literature pre-dates the 1950s. Reclaiming the term In the 1940s, "Chicano" was reclaimed by Pachuco youth as an expression of defiance to Anglo-American society. 
At the time, Chicano was used among English and Spanish speakers as a classist and racist slur to refer to working class Mexican Americans in Spanish-speaking neighborhoods. In Mexico, the term was used with Pocho "to deride Mexicans living in the United States, and especially their U.S.-born children, for losing their culture, customs, and language." Mexican anthropologist Manuel Gamio reported in 1930 that Chicamo (with an m) was used as a derogatory term by Hispanic Texans for recently arrived Mexican immigrants displaced during the Mexican Revolution at the beginning of the 20th century. By the 1950s, Chicano referred to those who resisted total assimilation, while Pocho referred (often pejoratively) to those who strongly advocated for assimilation. In his essay "Chicanismo" in The Oxford Encyclopedia of Mesoamerican Cultures (2002), José Cuéllar dates the transition from derisive to positive to the late 1950s, with increasing use by young Mexican-American high school students. These younger, politically aware Mexican Americans adopted the term "as an act of political defiance and ethnic pride", similar to the reclaiming of Black by African Americans. The Chicano Movement during the 1960s and early 1970s played a significant role in reclaiming "Chicano," challenging those who used it as a term of derision on both sides of the Mexico-U.S. border. At first, adoption of Chicano differed along demographic lines. It was more likely to be used by males than females, and less likely to be used among those of higher socioeconomic status. Usage was also generational, with third-generation men more likely to use the word. This group was also younger, more political, and more distanced from traditional Mexican cultural heritage. Chicana was a similar classist term to refer to "[a] marginalized, brown woman who is treated as a foreigner and is expected to do menial labor and ask nothing of the society in which she lives." Among Mexican Americans, Chicano and Chicana began to be viewed as a positive identity of self-determination and political solidarity. In Mexico, Chicano may still be associated with a Mexican American person of low importance, class, and poor morals (similar to the terms Cholo, Chulo and Majo), indicating a difference in cultural views. Chicano Movement Chicano was widely reclaimed in the 1960s and 1970s during the Chicano Movement to assert a distinct ethnic, political, and cultural identity that resisted assimilation into whiteness, systematic racism and stereotypes, colonialism, and the American nation-state. Chicano identity formed around seven themes: unity, economy, education, institutions, self-defense, culture, and political liberation, in an effort to bridge regional and class divisions. The notion of Aztlán, a mythical homeland claimed to be located in the southwestern United States, mobilized Mexican Americans to take social and political action. Chicano became a unifying term for mestizos. Xicano was also used in the 1970s. In the 1970s, Chicanos developed a reverence for machismo while also maintaining the values of their original platform. For instance, Oscar Zeta Acosta defined machismo as the source of Chicano identity, claiming that this "instinctual and mystical source of manhood, honor and pride... alone justifies all behavior." Armando Rendón wrote in Chicano Manifesto (1971) that machismo was "in fact an underlying drive of the gathering identification of Mexican Americans... 
the essence of machismo, of being macho, is as much a symbolic principle for the Chicano revolt as it is a guideline for family life." From the beginning of the Chicano Movement, some Chicanas criticized the idea that machismo must guide the people and questioned if machismo was "indeed a genuinely Mexican cultural value or a kind of distorted view of masculinity generated by the psychological need to compensate for the indignities suffered by Chicanos in a white supremacist society." Angie Chabram-Dernersesian found that most of the literature on the Chicano Movement focused on men and boys, while almost none focused on Chicanas. The omission of Chicanas and the machismo of the Chicano Movement led to a shift by the 1990s. Xicanisma Xicanisma was coined by Ana Castillo in Massacre of the Dreamers (1994) as a recognition of a shift in consciousness since the Chicano Movement and to reinvigorate Chicana feminism. The aim of Xicanisma is not to replace patriarchy with matriarchy, but to create "a nonmaterialistic and nonexploitive society in which feminine principles of nurturing and community prevail"; where the feminine is reinserted into our consciousness rather than subordinated by colonization. The X reflects the Sh sound in Mesoamerican languages, which the Spanish could not pronounce and so marked with the letter X (as in Tlaxcala, pronounced Tlash-KAH-lah). More than a letter, the X in Xicanisma is also a symbol to represent being at a literal crossroads or otherwise embodying hybridity. Xicanisma acknowledges Indigenous survival after hundreds of years of colonization and the need to reclaim one's Indigenous roots while also being "committed to the struggle for liberation of all oppressed people", wrote Francesca A. López. Activists like Guillermo Gómez-Peña issued "a call for a return to the Amerindian roots of most Latinos as well as a call for a strategic alliance to give agency to Native American groups." This can include one's Indigenous roots from Mexico "as well as those with roots centered in Central and South America," wrote Francisco Rios. Castillo argued that this shift in language was important because "language is the vehicle by which we perceive ourselves in relation to the world". Among a minority of Mexican Americans, the term Xicanx may be used to refer to gender non-conformity. Luis J. Rodriguez states that "even though most US Mexicans may not use this term," it can be important for gender non-conforming Mexican Americans. Xicanx may destabilize aspects of the coloniality of gender in Mexican American communities. Artist Roy Martinez states that it is not "bound to the feminine or masculine aspects" and that it may be "inclusive to anyone who identifies with it". Some prefer the -e suffix Xicane in order to be more in line with Spanish language constructs. Distinction from other terms Mexican American In the 1930s, "community leaders promoted the term Mexican American to convey an assimilationist ideology stressing white identity," as noted by legal scholar Ian Haney López. Lisa Y. Ramos argues that "this phenomenon demonstrates why no Black-Brown civil rights effort emerged prior to the 1960s." Chicano youth rejected the previous generation's racial aspirations to assimilate into Anglo-American society and developed a "Pachuco culture that fashioned itself neither as Mexican nor American." 
In the Chicano Movement, possibilities for Black–brown unity arose: "Chicanos defined themselves as proud members of a brown race, thereby rejecting, not only the previous generation's assimilationist orientation, but their racial pretensions as well." Chicano leaders collaborated with Black Power movement leaders and activists. Mexican Americans insisted that Mexicans were white, while Chicanos embraced being non-white and the development of brown pride. Mexican American continued to be used by a more assimilationist faction who wanted to define Mexican Americans "as a white ethnic group that had little in common with African Americans." Carlos Muñoz argues that the desire to separate themselves from Blackness and political struggle was rooted in an attempt to minimize "the existence of racism toward their own people, [believing] they could "deflect" anti-Mexican sentiment in society" through affiliating with whiteness. Hispanic Following the decline of the Chicano Movement, Hispanic was first defined by the U.S. Federal Office of Management and Budget's (OMB) Directive No. 15 in 1977 as "a person of Mexican, Dominican, Puerto Rican, Cuban, Central or South America or other Spanish culture or origin, regardless of race." The term was promoted by Mexican American political elites to encourage cultural assimilation into whiteness and move away from Chicanismo. The rise of Hispanic identity paralleled the emerging era of political and cultural conservatism in the United States during the 1980s. Key members of the Mexican American political elite, all of whom were middle-aged men, helped popularize the term Hispanic among Mexican Americans. The term was picked up by electronic and print media. Laura E. Gómez conducted a series of interviews with these elites and found that one of the main reasons Hispanic was promoted was to move away from Chicano: "The Chicano label reflected the more radical political agenda of Mexican-Americans in the 1960s and 1970s, and the politicians who call themselves Hispanic today are the harbingers of a more conservative, more accommodationist politics." Gómez found that some of these elites promoted Hispanic to appeal to white American sensibilities, particularly in regard to separating themselves from Black political consciousness. Gómez records: Another respondent agreed with this position, contrasting his white colleagues' perceptions of the Congressional Hispanic Caucus with their perception of the Congressional Black Caucus. 'We certainly haven't been militant like the Black Caucus. We're seen as a power bloc—an ethnic power bloc striving to deal with mainstream issues.' In 1980, Hispanic was first made available as a self-identification on U.S. census forms. While Chicano also appeared on the 1980 U.S. census, it was only permitted to be selected as a subcategory underneath Spanish/Hispanic descent, which erased the possibility of Afro-Chicanos and of Chicanos being of Indigenous descent. Chicano did not appear on any subsequent census forms and Hispanic has remained. Since then, Hispanic has widely been used by politicians and the media. For this reason, many Chicanos reject the term Hispanic. Other terms Instead of or in addition to identifying as Chicano or any of its variations, some may prefer: Latino/a, also anglicized as "Latin," which some US Latinos use as a gender-neutral alternative; Latin American (especially if an immigrant); Mexican; "Brown"; Mestizo; a specific racial identity; part/member of La Raza 
(an internal identifier, Spanish for "the Race"); or American, solely. Identity Chicano and Chicana identity reflects elements of ethnic, political, cultural and Indigenous hybridity. These qualities of what constitutes Chicano identity may be expressed by Chicanos differently. Armando Rendón wrote in the Chicano Manifesto (1971), "I am Chicano. What it means to me may be different than what it means to you." Benjamin Alire Sáenz wrote "There is no such thing as the Chicano voice: there are only Chicano and Chicana voices." The identity can be somewhat ambiguous (e.g. in the 1991 Culture Clash play A Bowl of Beings, in response to Che Guevara's demand for a definition of "Chicano", an "armchair activist" cries out, "I still don't know!"). Many Chicanos understand themselves as being "neither from here, nor from there", as neither from the United States nor Mexico. Juan Bruce-Novoa wrote in 1990: "A Chicano lives in the space between the hyphen in Mexican-American." Being Chicano/a may represent the struggle of being institutionally acculturated to assimilate into the Anglo-dominated society of the United States, yet maintaining the cultural sense developed as a Latin-American cultured U.S.-born Mexican child. Rafael Pérez-Torres wrote, "one can no longer assert the wholeness of a Chicano subject ... It is illusory to deny the nomadic quality of the Chicano community, a community in flux that yet survives and, through survival, affirms itself." Ethnic identity Chicano is a way for Mexican Americans to assert ethnic solidarity and Brown Pride. Boxer Rodolfo Gonzales was one of the first to reclaim the term in this way. This Brown Pride movement established itself alongside the Black is Beautiful movement. Chicano identity emerged as a symbol of pride in having a non-white and non-European image of oneself. It challenged the U.S. census designation "Whites with Spanish Surnames" that was used in the 1950s. Chicanos asserted ethnic pride at a time when Mexican assimilation into whiteness was being promoted by the U.S. government. Ian Haney López argues that this was to "serve Anglo self-interest", with Anglos claiming Mexicans were white in order to deny racism against them. Alfred Arteaga argues that Chicano as an ethnic identity is born out of the European colonization of the Americas. He states that Chicano arose as a hybrid ethnicity or race amidst colonial violence. This hybridity extends beyond a previously generalized "Aztec" ancestry, since the Indigenous peoples of Mexico are a diverse group of nations and peoples. A 2011 study found that 85 to 90% of maternal mtDNA lineages in Mexican Americans are Indigenous. Chicano ethnic identity may involve more than just Indigenous and Spanish ancestry. It may also include African ancestry (as a result of Spanish slavery or of runaway slaves escaping from Anglo-Americans). Arteaga concluded that "the physical manifestation of the Chicano is itself a product of hybridity." Robert Quintana Hopkins argues that Afro-Chicanos are sometimes erased from the ethnic identity "because so many people uncritically apply the 'one drop rule' in the U.S. [which] ignores the complexity of racial hybridity." Black and Chicano communities have engaged in close political movements and struggles for liberation, yet there have also been tensions between Black and Chicano communities. This has been attributed to racial capitalism and anti-Blackness in Chicano communities. 
Afro-Chicano rapper Choosey stated "there's a stigma that Black and Mexican cultures don't get along, but I wanted to show the beauty in being a product of both." Political identity Chicano political identity developed from a reverence for Pachuco resistance in the 1940s. Luis Valdez wrote that "Pachuco determination and pride grew through the 1950s and gave impetus to the Chicano Movement of the 1960s ... By then the political consciousness stirred by the 1943 Zoot Suit Riots had developed into a movement that would soon issue the Chicano Manifesto—a detailed platform of political activism." By the 1960s, the Pachuco figure "emerged as an icon of resistance in Chicano cultural production." The Pachuca was not accorded the same status. Catherine Ramírez credits this to the Pachuca being interpreted as a symbol of "dissident femininity, female masculinity, and, in some instances, lesbian sexuality". The political identity was founded on the principle that the U.S. nation-state had impoverished and exploited the Chicano people and communities. Alberto Varon argued that this brand of Chicano nationalism focused on the machismo subject in its calls for political resistance. Chicano machismo was both a unifying and fracturing force. Cherríe Moraga argued that it fostered homophobia and sexism, which became obstacles to the Movement. As the Chicano political consciousness developed, Chicanas, including Chicana lesbians of color, brought attention to "reproductive rights, especially sterilization abuse [sterilization of Latinas], battered women's shelters, rape crisis centers, [and] welfare advocacy." Chicana texts like Essays on La Mujer (1977), Mexican Women in the United States (1980), and This Bridge Called My Back (1981) have been relatively ignored even in Chicano Studies. Sonia Saldívar-Hull argued that even when Chicanas have challenged sexism, their identities have been invalidated. Chicano political activist groups like the Brown Berets (1967–1972; 1992–Present) gained support in their protests against educational inequalities and their demands for an end to police brutality. They collaborated with the Black Panthers and Young Lords, which were founded in 1966 and 1968 respectively. Membership in the Brown Berets was estimated to have reached five thousand in over 80 chapters (mostly centered in California and Texas). The Brown Berets helped organize the Chicano Blowouts of 1968 and the national Chicano Moratorium, which protested the high rate of Chicano casualties in the Vietnam War. Police harassment, infiltration by federal agents provocateurs via COINTELPRO, and internal disputes led to the decline and disbandment of the Berets in 1972. David Sánchez, then a professor at East Los Angeles College, revived the Brown Berets in 1992, prompted by the high number of Chicano homicides in Los Angeles County, hoping to replace the gang life with the Brown Berets. Reies Tijerina, who was a vocal claimant to the rights of Latin Americans and Mexican Americans and a major figure of the early Chicano Movement, wrote: "The Anglo press degradized the word 'Chicano.' They use it to divide us. We use it to unify ourselves with our people and with Latin America." Cultural identity Chicano represents a cultural identity that is neither fully "American" nor "Mexican." Chicano culture embodies the "in-between" nature of cultural hybridity. Central aspects of Chicano culture include lowriding, hip hop, rock, graffiti art, theater, muralism, visual art, literature, poetry, and more. 
Notable subcultures include the Cholo, Pachuca, Pachuco, and Pinto subcultures. Chicano culture has had international influence in the form of lowrider car clubs in Brazil and England, music and youth culture in Japan, Māori youth enhancing lowrider bicycles and taking on cholo style, and intellectuals in France "embracing the deterritorializing qualities of Chicano subjectivity." As early as the 1930s, the precursors to Chicano cultural identity were developing in Los Angeles, California and the Southwestern United States. Former zoot suiter Salvador "El Chava" reflects on how racism and poverty forged a hostile social environment for Chicanos, which led to the development of gangs: "we had to protect ourselves". Barrios and colonias (rural barrios) emerged throughout southern California and elsewhere in neglected districts of cities and outlying areas with little infrastructure. Alienation from public institutions made some Chicano youth susceptible to gangs, and they became drawn to the gangs' rigid hierarchical structure and assigned social roles in a world of government-sanctioned disorder. Pachuco culture, which probably originated in the El Paso-Juarez area, spread to the borderland areas of California and Texas as Pachuquismo, which would eventually evolve into Chicanismo. Chicano zoot suiters on the west coast were influenced by Black zoot suiters in the jazz and swing music scene on the East Coast. Chicano zoot suiters developed a unique cultural identity, as noted by Charles "Chaz" Bojórquez, "with their hair done in big pompadours, and "draped" in tailor-made suits, they were swinging to their own styles. They spoke Caló, their own language, a cool jive of half-English, half-Spanish rhythms. [...] Out of the zootsuiter experience came lowrider cars and culture, clothes, music, tag names, and, again, its own graffiti language." San Antonio-based Chicano artist Adan Hernandez regarded pachucos as "the coolest thing to behold in fashion, manner, and speech." As described by artist Carlos Jackson, "Pachuco culture remains a prominent theme in Chicano art because the contemporary urban cholo culture" is seen as its heir. Many aspects of Chicano culture like lowriding cars and bicycles have been stigmatized and policed by Anglo Americans who perceive Chicanos as "juvenile delinquents or gang members" for their embrace of nonwhite style and cultures, much as they did Pachucos. These negative societal perceptions of Chicanos were amplified by media outlets such as the Los Angeles Times. Luis Alvarez remarks how negative portrayals in the media served as a tool to advocate for increased policing of Black and Brown male bodies in particular: "Popular discourse characterizing nonwhite youth as animal-like, hypersexual, and criminal marked their bodies as "other" and, when coming from city officials and the press, served to help construct for the public a social meaning of African Americans and Mexican American youth [as, in their minds, justifiably criminalized]." Chicano rave culture in southern California provided a space for Chicanos to partially escape criminalization in the 1990s. Artist and archivist Guadalupe Rosales states that "a lot of teenagers were being criminalized or profiled as criminals or gangsters, so the party scene gave access for people to escape that". Numerous party crews, such as Aztek Nation, organized events, and parties would frequently take place in neighborhood backyards, particularly in East and South Los Angeles, the surrounding valleys, and Orange County. 
By 1995, it was estimated that over 500 party crews were in existence. They laid the foundations for "an influential but oft-overlooked Latin dance subculture that offered community for Chicano ravers, queer folk, and other marginalized youth." Ravers used map points to derail police raids. Rosales states that a shift occurred around the late 1990s and increasing violence affected the Chicano party scene. Indigenous identity Chicano identity functions as a way to reclaim one's Indigenous American, and often Indigenous Mexican, ancestry—to form an identity distinct from European identity, despite some Chicanos being of partial European descent—as a way to resist and subvert colonial domination. Rather than as part of European American culture, Alicia Gaspar de Alba referred to Chicanismo as an "alter-Native culture, an Other American culture Indigenous to the land base now known as the West and Southwest of the United States." While influenced by settler-imposed systems and structures, Gaspar de Alba refers to Chicano culture as "not immigrant but native, not foreign but colonized, not alien but different from the overarching hegemony of white America." The Plan Espiritual de Aztlán (1969) drew from Frantz Fanon's The Wretched of the Earth (1961). In Wretched, Fanon stated: "the past existence of an Aztec civilization does not change anything very much in the diet of the Mexican peasant today", elaborating that "this passionate search for a national culture which existed before the colonial era finds its legitimate reason in the anxiety shared by native intellectuals to shrink away from that of Western culture in which they all risk being swamped ... the native intellectuals, since they could not stand wonderstruck before the history of today's barbarity, decided to go back further and to delve deeper down; and, let us make no mistake, it was with the greatest delight that they discovered that there was nothing to be ashamed of in the past, but rather dignity, glory, and solemnity." The Chicano Movement adopted this perspective through the notion of Aztlán—a mythic Aztec homeland which Chicanos used as a way to connect themselves to a precolonial past, before "the time of the 'gringo' invasion of our lands." Chicano scholars have described how this functioned as a way for Chicanos to reclaim a diverse or imprecise Indigenous past, while recognizing how Aztlán promoted divisive forms of Chicano nationalism that "did little to shake the walls and bring down the structures of power as its rhetoric so firmly proclaimed". As stated by Chicano historian Juan Gómez-Quiñones, the Plan Espiritual de Aztlán was "stripped of what radical element it possessed by stressing its alleged romantic idealism, reducing the concept of Aztlán to a psychological ploy ... all of which became possible because of the Plan's incomplete analysis which, in turn, allowed it ... to degenerate into reformism." While acknowledging its romanticized and exclusionary foundations, Chicano scholars like Rafael Pérez-Torres state that Aztlán opened a subjectivity which stressed a connection to Indigenous peoples and cultures at a critical historical moment in which Mexican-Americans and Mexicans were "under pressure to assimilate particular standards—of beauty, of identity, of aspiration. In a Mexican context, the pressure was to urbanize and Europeanize ... "Mexican-Americans" were expected to accept anti-indigenous discourses as their own." 
As Pérez-Torres concludes, Aztlán allowed "for another way of aligning one's interests and concerns with community and with history ... though hazy as to the precise means in which agency would emerge, Aztlán valorized a Chicanismo that rewove into the present previously devalued lines of descent." Romanticized notions of Aztlán have declined among some Chicanos, who argue for a need to reconstruct the place of Indigeneity in relation to Chicano identity. Danza Azteca grew popular in the U.S. with the rise of the Chicano Movement, which inspired some "Latinos to embrace their ethnic heritage and question the Eurocentric norms forced upon them." The use of pre-contact Aztec cultural elements has been critiqued by some Chicanos who stress a need to represent the diversity of Indigenous ancestry among Chicanos. Patrisia Gonzales portrays Chicanos as descendants of the Indigenous peoples of Mexico who have been displaced by colonial violence, positioning them as "detribalized Indigenous peoples and communities." Roberto Cintli Rodríguez describes Chicanos as "de-Indigenized," which he remarks occurred "in part due to religious indoctrination and a violent uprooting from the land", detaching millions of people from maíz-based cultures throughout the greater Mesoamerican region. Rodríguez asks how and why "peoples who are clearly red or brown and undeniably Indigenous to this continent have allowed ourselves, historically, to be framed by bureaucrats and the courts, by politicians, scholars, and the media as alien, illegal, and less than human." Gloria E. Anzaldúa has addressed Chicanos' detribalization: "In the case of Chicanos, being 'Mexican' is not a tribe. So in a sense Chicanos and Mexicans are 'detribalized'. We don't have tribal affiliations but neither do we have to carry ID cards establishing tribal affiliation." Anzaldúa recognized that "Chicanos, people of color, and whites have often chosen to ignore the struggles of Native people even when it's right in our caras (faces)," expressing disdain for this "willful ignorance". She concluded that "though both 'detribalized urban mixed bloods' and Chicanos are recovering and reclaiming, this society is killing off urban mixed bloods through cultural genocide, by not allowing them equal opportunities for better jobs, schooling, and health care." Inés Hernández-Ávila argued that Chicanos should recognize and reconnect with their roots "respectfully and humbly" while also validating "those peoples who still maintain their identity as original peoples of this continent" in order to create radical change capable of "transforming our world, our universe, and our lives". Political aspects Anti-imperialism and international solidarity During World War II, Chicano youth were targeted by white servicemen, who despised their "cool, measured indifference to the war, as well as an increasingly defiant posture toward whites in general". Historian Robin Kelley states that this "annoyed white servicemen to no end". During the Zoot Suit Riots (1943), white rage erupted in Los Angeles, which "became the site of racist attacks on Black and Chicano youth, during which white soldiers engaged in what amounted to a ritualized stripping of the zoot." Zoot suits were a symbol of collective resistance among Chicano and Black youth against city segregation and fighting in the war. 
Many Chicano and Black zoot-suiters engaged in draft evasion because they felt it was hypocritical for them to be expected to "fight for democracy" abroad yet face racism and oppression daily in the U.S. This galvanized Chicano youth to focus on anti-war activism, "especially influenced by the Third World movements of liberation in Asia, Africa, and Latin America." Historian Mario T. García reflects that "these anti-colonial and anti-Western movements for national liberation and self-awareness touched a historical nerve among Chicanos as they began to learn that they shared some similarities with these Third World struggles." Chicano poet Alurista argued that "Chicanos cannot be truly free until they recognize that the struggle in the United States is intricately bound with the anti-imperialist struggle in other countries." The Cuban Revolution (1953–1959) led by Fidel Castro and Che Guevara was particularly influential to Chicanos; as García notes, they viewed the revolution as "a nationalist revolt against 'Yankee imperialism' and neo-colonialism." In the 1960s, the Chicano Movement brought "attention and commitment to local struggles with an analysis and understanding of international struggles". Chicano youth organized with Black, Latin American, and Filipino activists to form the Third World Liberation Front (TWLF), which fought for the creation of a Third World college. During the Third World Liberation Front strikes of 1968, Chicano artists created posters to express solidarity. Chicano poster artist Rupert García referred to the place of artists in the movement: "I was critical of the police, of capitalist exploitation. I did posters of Che, of Zapata, of other Third World leaders. As artists, we climbed down from the ivory tower." Learning from Cuban poster makers of the post-revolutionary period, Chicano artists "incorporated international struggles for freedom and self-determination, such as those of Angola, Chile, and South Africa", while also promoting the struggles of Indigenous people and other civil rights movements through Black-brown unity. Chicanas organized with women of color activists to create the Third World Women's Alliance (1968–1980), representing "visions of liberation in third world solidarity that inspired political projects among racially and economically marginalized communities" against U.S. capitalism and imperialism. The Chicano Moratorium (1969–1971) against the Vietnam War was one of the largest demonstrations of Mexican-Americans in history, drawing over 30,000 supporters in East L.A. Draft evasion was a form of resistance for Chicano anti-war activists such as Rosalio Muñoz, Ernesto Vigil, and Salomon Baldengro. They faced a felony charge—a minimum of five years in prison, a $10,000 fine, or both. In response, Muñoz wrote "I declare my independence of the Selective Service System. I accuse the government of the United States of America of genocide against the Mexican people. Specifically, I accuse the draft, the entire social, political, and economic system of the United States of America, of creating a funnel which shoots Mexican youth into Vietnam to be killed and to kill innocent men, women, and children...." 
Rodolfo "Corky" Gonzales expressed a similar stance: "My feelings and emotions are aroused by the complete disregard of our present society for the rights, dignity, and lives of not only people of other nations but of our own unfortunate young men who die for an abstract cause in a war that cannot be honestly justified by any of our present leaders." Anthologies such as This Bridge Called My Back: Writings by Radical Women of Color (1981) were produced in the late 1970s and early 80s by writers who identified as lesbians of color, including Cherríe Moraga, Pat Parker, Toni Cade Bambara, Chrystos (self-identified claim of Menominee ancestry), Audre Lorde, Gloria E. Anzaldúa, Cheryl Clarke, Jewelle Gomez, Kitty Tsui, and Hattie Gossett, who developed a poetics of liberation. Kitchen Table: Women of Color Press and Third Woman Press, founded in 1979 by Chicana feminist Norma Alarcón, provided sites for the production of women of color and Chicana literatures and critical essays. While first world feminists focused "on the liberal agenda of political rights", Third World feminists "linked their agenda for women's rights with economic and cultural rights" and unified together "under the banner of Third World solidarity". Maylei Blackwell observes that this internationalist critique of capitalism and imperialism forged by women of color has yet to be fully historicized and is "usually dropped out of the false historical narrative". In the 1980s and 90s, Central American activists influenced Chicano leaders. The Mexican American Legislative Caucus (MALC) supported the Esquipulas Peace Agreement in 1987, standing in opposition to Contra aid. Al Luna criticized Reagan and American involvement while defending Nicaragua's Sandinista-led government: "President Reagan cannot credibly make public speeches for peace in Central America while at the same time advocating for a three-fold increase in funding to the Contras." The Southwest Voter Research Initiative (SVRI), launched by Chicano leader Willie Velásquez, aimed to educate Chicano youth about Central and Latin American political issues. In 1988, "there was no significant urban center in the Southwest where Chicano leaders and activists had not become involved in lobbying or organizing to change U.S. policy in Nicaragua." In the early 1990s, Cherríe Moraga urged Chicano activists to recognize that "the Anglo invasion of Latin America [had] extended well beyond the Mexican/American border" while Gloria E. Anzaldúa positioned Central America as the primary target of a U.S. interventionism that had murdered and displaced thousands. However, Chicano solidarity narratives of Central Americans in the 1990s tended to center themselves, stereotype Central Americans, and filter their struggles "through Chicana/o struggles, histories, and imaginaries." Chicano activists organized against the Gulf War (1990–91). Raul Ruiz of the Chicano Mexican Committee against the Gulf War stated that U.S. intervention was "to support U.S. oil interests in the region." Ruiz recalled, "we were the only Chicano group against the war. We did a lot of protesting in L.A. even though it was difficult because of the strong support for the war and the anti-Arab reaction that followed ... we experienced racist attacks [but] we held our ground." The end of the Gulf War, along with the Rodney King Riots, was crucial in inspiring a new wave of Chicano political activism. 
In 1994, one of the largest demonstrations of Mexican Americans in the history of the United States occurred when 70,000 people, largely Chicanos and Latinos, marched in Los Angeles and other cities to protest Proposition 187, which aimed to cut educational and welfare benefits for undocumented immigrants.

In 2004, Mujeres against Militarism and the Raza Unida Coalition sponsored a Day of the Dead vigil against militarism within the Latino community, addressing the War in Afghanistan (2001–) and the Iraq War (2003–2011). They held photos of the dead and chanted "no blood for oil." The procession ended with a five-hour vigil at Tia Chucha's Centro Cultural. They condemned "the Junior Reserve Officers Training Corps (JROTC) and other military recruitment programs that concentrate heavily in Latino and African American communities, noting that JROTC is rarely found in upper-income Anglo communities." Rubén Funkahuatl Guevara organized a benefit concert against the Iraq War for Latin@s Against the War in Iraq and Mexamérica por la Paz at Self-Help Graphics. Although the events were well-attended, Guevara stated that "the Feds know how to manipulate fear to reach their ends: world military dominance and maintaining a foothold in an oil-rich region were their real goals."

Labor organizing against capitalist exploitation

Chicano and Mexican labor organizers have played an active role in notable labor strikes since the early 20th century, including the Oxnard strike of 1903, the Pacific Electric Railway strike of 1903, the 1919 Streetcar Strike of Los Angeles, the Cantaloupe strike of 1928, the California agricultural strikes (1931–1941), and the Ventura County agricultural strike of 1941; endured mass deportations as a form of strikebreaking during the Bisbee Deportation of 1917 and the Mexican Repatriation (1929–1936); and experienced tensions with one another during the Bracero program (1942–1964). Although organizing laborers were harassed, sabotaged, and repressed, sometimes through warlike tactics from capitalist owners who engaged in coercive labor relations and collaborated with and received support from local police and community organizations, Chicano and Mexican workers, particularly in agriculture, have been engaged in widespread unionization activities since the 1930s.

Prior to unionization, agricultural workers, many of whom were undocumented, worked in dismal conditions. Historian F. Arturo Rosales recorded a Federal Project Writer of the period, who stated: "It is sad, yet true, commentary that to the average landowner and grower in California the Mexican was to be placed in much the same category with ranch cattle, with this exception–the cattle were for the most part provided with comparatively better food and water and immeasurably better living accommodations." Growers used cheap Mexican labor to reap bigger profits and, until the 1930s, perceived Mexicans as docile and compliant with their subjugated status because they "did not organize troublesome labor unions, and it was held that he was not educated to the level of unionism". As one grower described, "We want the Mexican because we can treat them as we cannot treat any other living man ... We can control them by keeping them at night behind bolted gates, within a stockade eight feet high, surrounded by barbed wire ... We can make them work under armed guards in the fields."
Unionization efforts were initiated by the Confederación de Uniones Obreras (Federation of Labor Unions) in Los Angeles, with twenty-one chapters quickly extending throughout southern California, and La Unión de Trabajadores del Valle Imperial (Imperial Valley Workers' Union). The latter organized the Cantaloupe strike of 1928, in which workers demanded better working conditions and higher wages, but "the growers refused to budge and, as became a pattern, local authorities sided with the farmers and through harassment broke the strike". Communist-led organizations such as the Cannery and Agricultural Workers' Industrial Union (CAWIU) supported Mexican workers, renting spaces for cotton pickers during the cotton strikes of 1933 after they were thrown out of company housing by growers. Capitalist owners used "red-baiting" techniques to discredit the strikes by associating them with communists. Chicana and Mexican working women showed the greatest tendency to organize, particularly in the Los Angeles garment industry with the International Ladies' Garment Workers' Union, led by anarchist Rose Pesotta.

During World War II, the government-funded Bracero program (1942–1964) hindered unionization efforts. In response to the California agricultural strikes and the 1941 Ventura County strike of Chicano, Mexican, and Filipino lemon pickers and packers, growers organized the Ventura County Citrus Growers Committee (VCCGC) and launched a lobbying campaign to pressure the U.S. government to pass laws to prohibit labor organizing. VCCGC joined with other grower associations, forming a powerful lobbying bloc in Congress, and worked to legislate for (1) a Mexican guest workers program, which would become the Bracero program, (2) laws prohibiting strike activity, and (3) military deferments for pickers. Their lobbying efforts were successful: unionization among farmworkers was made illegal, farmworkers were excluded from minimum wage laws, and the usage of child labor by growers was ignored. In formerly active areas, such as Santa Paula, union activity stopped for over thirty years as a result. When World War II ended, the Bracero program continued. Legal anthropologist Martha Menchaca states that this was "regardless of the fact that massive quantities of crops were no longer needed for the war effort ... after the war, the braceros were used for the benefit of the large-scale growers and not for the nation's interest." The program was extended for an indefinite period in 1951.

In the mid-1940s, labor organizer Ernesto Galarza founded the National Farm Workers Union (NFWU) in opposition to the Bracero program, organizing a large-scale 1947 strike against the Di Giorgio Fruit Company in Arvin, California. Hundreds of Mexican, Filipino, and white workers walked out and demanded higher wages. The strike was broken by the usual tactics, with law enforcement on the side of the owners, evicting strikers and bringing in undocumented workers as strikebreakers. The NFWU folded, but served as a precursor to the United Farm Workers Union led by César Chávez. By the 1950s, opposition to the Bracero program had grown considerably, as unions, churches, and Mexican-American political activists raised awareness about the effects it had on American labor standards. On December 31, 1964, the U.S. government conceded and terminated the program.
Following the closure of the Bracero program, domestic farmworkers began to organize again because "growers could no longer maintain the peonage system" with the end of imported laborers from Mexico. Labor organizing formed part of the Chicano Movement via the struggle of farmworkers against depressed wages and working conditions. César Chávez began organizing Chicano farmworkers in the early 1960s, first through the National Farm Workers Association (NFWA) and then merging the association with the Agricultural Workers Organizing Committee (AWOC), an organization of mainly Filipino workers, to form the United Farm Workers. The labor organizing of Chávez was central to the expansion of unionization throughout the United States and inspired the Farm Labor Organizing Committee (FLOC), under the leadership of Baldemar Velásquez, which continues today. Farmworkers collaborated with local Chicano organizations, such as in Santa Paula, California, where farmworkers attended Brown Berets meetings in the 1970s and Chicano youth organized to improve working conditions and initiate an urban renewal project on the eastside of the city.

Although Mexican and Chicano workers, organizers, and activists organized for decades to improve working conditions and increase wages, some scholars characterize these gains as minimal. As described by Ronald Mize and Alicia Swords, "piecemeal gains in the interests of workers have had very little impact on the capitalist agricultural labor process, so picking grapes, strawberries, and oranges in 1948 is not so different from picking those same crops in 2008." U.S. agriculture today remains heavily reliant on Mexican labor, with Mexican-born individuals now constituting about 90% of the agricultural labor force.

Struggles in the education system

Chicanos often endure struggles in the U.S. education system, such as being erased in curriculums and devalued as students. Some Chicanos identify schools as colonial institutions that exercise control over colonized students by teaching Chicanos to idolize whiteness and develop a negative image of themselves and their worldviews. School segregation between Mexican and white students was not legally ended until the late 1940s. In Orange County, California, 80% of Mexican students could only attend schools that taught Mexican children manual education: gardening, bootmaking, blacksmithing, and carpentry for boys and sewing and homemaking for girls. White schools taught academic preparation. When Sylvia Mendez was told to attend a Mexican school, her parents brought suit in Mendez v. Westminster (1947) and won. Although legal segregation had been successfully challenged, de facto segregation, or segregation in practice, continued in many areas. Schools with primarily Mexican American enrollment were still treated as "Mexican schools" much as before the legal overturning of segregation, and Mexican American students were still treated poorly in schools.

Continued bias in the education system motivated Chicanos to protest and use direct action, such as walkouts, in the 1960s. On March 5, 1968, the Chicano Blowouts occurred at high schools in East Los Angeles as a response to the racist treatment of Chicano students, an unresponsive school board, and a high dropout rate. They became known as "the first major mass protest against racism undertaken by Mexican-Americans in the history of the United States." Sal Castro, a Chicano social science teacher at one of the schools, was arrested and fired for inspiring the walkouts.
The walkouts were led by Harry Gamboa Jr., who was named "one of the hundred most dangerous and violent subversives in the United States" for organizing them. The day prior, FBI director J. Edgar Hoover had sent out a memo to law enforcement to place top priority on "political intelligence work to prevent the development of nationalist movements in minority communities". Chicana activist Alicia Escalante protested Castro's dismissal: "We in the Movement will at least be able to hold our heads up and say that we haven't submitted to the gringo or to the pressures of the system. We are brown and we are proud. I am at least raising my children to be proud of their heritage, to demand their rights, and as they become parents they too will pass this on until justice is done."

In 1969, the Plan de Santa Bárbara, a 155-page document, was drafted to outline the foundation of Chicano Studies programs in higher education. It called for students, faculty, employees and the community to come together as "central and decisive designers and administrators of these programs". Chicano students and activists asserted that universities should exist to serve the community. However, by the mid-1970s, much of the radicalism of earlier Chicano Studies had been deflated by the education system, which aimed to alter Chicano Studies programs from within. Mario García argued that one "encountered a deradicalization of the radicals". Some opportunistic faculty avoided their political responsibilities to the community. University administrators co-opted oppositional forces within Chicano Studies programs and encouraged tendencies that led "to the loss of autonomy of Chicano Studies programs." At the same time, "a domesticated Chicano Studies provided the university with the facade of being tolerant, liberal, and progressive."

Some Chicanos argued that the solution was to create "publishing outlets that would challenge Anglo control of academic print culture with its rules on peer review and thereby publish alternative research," arguing that a Chicano space in the colonial academy could "avoid colonization in higher education". In an attempt to establish educational autonomy, they worked with institutions like the Ford Foundation, but found that "these organizations presented a paradox". Rodolfo Acuña argued that such institutions "quickly became content to only acquire funding for research and thereby determine the success or failure of faculty". Chicano Studies became "much closer [to] the mainstream than its practitioners wanted to acknowledge." Others argued that Chicano Studies at UCLA shifted from its earlier interests in serving the Chicano community to gaining status within the colonial institution through a focus on academic publishing, which alienated it from the community.

In 2012, the Mexican American Studies Department Programs (MAS) in Tucson Unified School District were banned after a campaign led by Anglo-American politician Tom Horne, who accused the programs of working to "promote the overthrow of the U.S. government, promote resentment toward a race or class of people, are designed primarily for pupils of a particular ethnic group or advocate ethnic solidarity instead of the treatment of pupils as individuals." Classes on Latino literature, American history/Mexican-American perspectives, Chicano art, and an American government/social justice education project course were banned. Readings of In Lak'ech from Luis Valdez's poem Pensamiento Serpentino were also banned.
Seven books, including Paulo Freire's Pedagogy of the Oppressed and works covering Chicano history and critical race theory, were banned, taken from students, and stored away. The ban was overturned in 2017 by Judge A. Wallace Tashima, who ruled that it was unconstitutional and motivated by racism, depriving Chicano students of knowledge and thereby violating their Fourteenth Amendment rights. The Xicanx Institute for Teaching & Organizing (XITO) emerged to carry on the legacy of the MAS programs. Chicanos continue to support the institution of Chicano studies programs. In 2021, students at Southwestern College, the closest college to the Mexico–United States border, urged the creation of a Chicanx Studies program to serve the predominantly Latino student body.

Rejection of borders

The Chicano concept of sin fronteras rejects the idea of borders. Some argued that the 1848 Treaty of Guadalupe Hidalgo transformed the Rio Grande region from a rich cultural center to a rigid border poorly enforced by the United States government. At the end of the Mexican-American War, 80,000 Spanish-Mexican-Indian people abruptly became inhabitants of the United States. Some Chicanos identified with the idea of Aztlán as a result, which celebrated a time preceding land division and rejected the "immigrant/foreigner" categorization by Anglo society. Chicano activists have called for unionism between Mexicans and Chicanos on both sides of the border.

In the early 20th century, the border crossing had become a site of dehumanization for Mexicans. Protests arose along the Santa Fe Bridge in 1910 over abuses committed against Mexican workers while crossing the border. The 1917 Bath riots erupted after Mexicans crossing the border were required to strip naked and be disinfected with chemical agents like gasoline, kerosene, sulfuric acid, and Zyklon B, the latter of which was the fumigant of choice and would later notoriously be used in the gas chambers of Nazi Germany. Chemical dousing continued into the 1950s. During the early 20th century, Chicanos used corridos "to counter Anglocentric hegemony." Ramón Saldívar stated that "corridos served the symbolic function of empirical events and for creating counterfactual worlds of lived experience (functioning as a substitute for fiction writing)."

The newspaper Sin Fronteras (1976–1979) openly rejected the Mexico–United States border. The newspaper considered it "to be only an artificial creation that in time would be destroyed by the struggles of Mexicans on both sides of the border" and recognized that "Yankee political, economic, and cultural colonialism victimized all Mexicans, whether in the U.S. or in Mexico." Similarly, the General Brotherhood of Workers (CASA), important to the development of young Chicano intellectuals and activists, identified that, as "victims of oppression, Mexicanos could achieve liberation and self-determination only by engaging in a borderless struggle to defeat American international capitalism." Chicana theorist Gloria E. Anzaldúa notably emphasized the border as a "1,950 mile-long wound that does not heal". In referring to the border as a wound, writer Catherine Leen suggests that Anzaldúa recognizes "the trauma and indeed physical violence very often associated with crossing the border from Mexico to the US, but also underlies the fact that the cyclical nature of this immigration means that this process will continue and find little resolution."
Anzaldúa writes that la frontera signals "the coming together of two self-consistent but habitually incompatible frames of reference [which] cause un choque, a cultural collision" because "the U.S.-Mexican border es una herida abierta where the Third World grates against the first and bleeds." Chicano and Mexican artists and filmmakers continue to address "the contentious issues of exploitation, exclusion, and conflict at the border and attempt to overturn border stereotypes" through their work. Luis Alberto Urrea writes "the border runs down the middle of me. I have a barbed wire fence neatly bisecting my heart."

Sociological aspects

Criminalization

The 19th-century and early-20th-century image of the Mexican in the U.S. was "that of the greasy Mexican bandit or bandito," who was perceived as criminal because of Mestizo ancestry and "Indian blood." This rhetoric fueled anti-Mexican sentiment among whites, which led to many lynchings of Mexicans in the period as acts of racist violence. One of the largest massacres of Mexicans, known as La Matanza, occurred in Texas, where hundreds of Mexicans were lynched by white mobs. Many whites viewed Mexicans as inherently criminal, which they connected to their Indigenous ancestry. White historian Walter P. Webb wrote in 1935, "there is a cruel streak in the Mexican nature ... this cruelty may be a heritage from the Spanish and of the Inquisition; it may, and doubtless should be, attributed partly to Indian blood."

The "greasy bandito" stereotype of the old West evolved into images of "crazed Zoot-Suiters and pachuco killers in the 1940s, to contemporary cholos, gangsters, and gang members." Pachucos were portrayed as violent criminals in American mainstream media, which fueled the Zoot Suit Riots; initiated by off-duty policemen conducting a vigilante hunt, the riots targeted Chicano youth who wore the zoot suit as a symbol of empowerment. On-duty police supported the violence against Chicano zoot suiters; they "escorted the servicemen to safety and arrested their Chicano victims." Arrest rates of Chicano youth rose during these decades, fueled by the "criminal" image portrayed in the media, by politicians, and by the police. Chicano youth who did not aspire to assimilate into Anglo-American society were criminalized for their defiance of cultural assimilation: "When many of the same youth began wearing what the larger society considered outlandish clothing, sporting distinctive hairstyles, speaking in their own language (Caló), and dripping with attitude, law enforcement redoubled their efforts to rid [them from] the streets."

In the 1970s and subsequent decades, there was a wave of police killings of Chicanos. One of the most prominent cases was that of Luis "Tato" Rivera, a 20-year-old Chicano who was shot in the back by officer Craig Short in 1975. Two thousand Chicano demonstrators gathered at the city hall of National City, California, in protest. Short was indicted for manslaughter by district attorney Ed Miller and was acquitted of all charges. Short was later appointed acting chief of police of National City in 2003. Another high-profile case was the murder of Ricardo Falcón, a student at the University of Colorado and leader of the United Mexican American Students (UMAS), by Perry Brunson, a member of the far-right American Independent Party, at a gas station. Brunson was tried for manslaughter and was "acquitted by an all-White jury". Falcón became a martyr for the Chicano Movement as police violence increased in the subsequent decades.
Similar cases led sociologist Alfredo Mirandé to refer to the U.S. criminal justice system as gringo justice, because "it reflected one standard for Anglos and another for Chicanos." The criminalization of Chicano youth in the barrio remains omnipresent. Chicano youth who adopt a cholo or chola identity endure hyper-criminalization in what has been described by Victor Rios as the youth control complex. While older residents initially "embraced the idea of a chola or cholo as a larger subculture not necessarily associated with crime and violence (but rather with a youthful temporary identity), law enforcement agents, ignorant or disdainful of barrio life, labeled youth who wore clean white tennis shoes, shaved their heads, or long socks, as deviant." Community members were convinced by the police of cholo criminality, which led to criminalization and surveillance "reminiscent of the criminalization of Chicana and Chicano youth during the Zoot-Suit era in the 1940s." Sociologist José S. Plascencia-Castillo refers to the barrio as a panopticon that leads to intense self-regulation, as cholo youth are scrutinized both by law enforcement to "stay in their side of town" and by community members who in some cases "call the police to have the youngsters removed from the premises". The intense governance of Chicano youth, especially those of cholo identity, has deep implications for their experience, affecting their physical and mental health as well as their outlook on the future. Some youth feel they "can either comply with the demands of authority figures, and become obedient and compliant, and suffer the accompanying loss of identity and self-esteem, or, adopt a resistant stance and contest social invisibility to command respect in the public sphere."

Gender and sexuality

Chicanas

Chicanas often confront objectification in Anglo society, being perceived as "exotic", "lascivious", and "hot" at a very young age while also facing denigration as "barefoot", "pregnant", "dark", and "low-class". These perceptions in society create numerous negative sociological and psychological effects, such as excessive dieting and eating disorders. Social media may enhance these stereotypes of Chicana women and girls. Numerous studies have found that Chicanas experience elevated levels of stress as a result of sexual expectations by society, as well as their parents and families. Although many Chicana youth desire open conversation of these gender roles and sexuality, as well as mental health, these issues are often not discussed openly in Chicano families, which perpetuates unsafe and destructive practices. While young Chicanas are objectified, middle-aged Chicanas discuss feelings of being invisible, saying they feel trapped in balancing family obligations to their parents and children while attempting to create a space for their own sexual desires. The expectation that Chicanas should be "protected" by Chicanos may also constrict the agency and mobility of Chicanas.

Chicanas are often relegated to a secondary and subordinate status in families. Cherríe Moraga argues that this issue of patriarchal ideology in Chicano and Latino communities runs deep, as the great majority of Chicano and Latino men believe in and uphold male supremacy. She argues that this ideology is not only upheld by men in Chicano families, but also by mothers in their relationship to their children: "the daughter must constantly earn the mother's love, prove her fidelity to her. The son—he gets her love for free."
Chicanos

Chicanos develop their manhood within a context of marginalization in white society. Some argue that "Mexican men and their Chicano brothers suffer from an inferiority complex due to the conquest and genocide inflicted upon their Indigenous ancestors," which leaves Chicano men feeling trapped between identifying with the so-called "superior" European and the so-called "inferior" Indigenous sense of self. This conflict may manifest itself in the form of hypermasculinity or machismo, in which a "quest for power and control over others in order to feel better" about oneself is undertaken. This may result in abusive behavior, the development of an impenetrable "cold" persona, alcohol abuse, and other destructive and self-isolating behaviors. The lack of discussion between Chicano male youth and their fathers or mothers about what it means to be a Chicano man creates a search for identity that often leads to self-destructive behaviors. Chicano male youth tend to learn about sex from their peers as well as older male family members who perpetuate the idea that as men they have "a right to engage in sexual activity without commitment". The looming threat of being labeled a joto (gay) for not engaging in sexual activity also conditions many Chicanos to "use" women for their own sexual desires. Gabriel S. Estrada argues that the criminalization of Chicanos proliferates further homophobia among Chicano boys and men, who may adopt hypermasculine personas to escape such association.

Heteronormativity

Heteronormative gender roles are typically enforced in Chicano families. Any deviation from gender and sexual conformity is commonly perceived as a weakening of or attack on la familia. However, Chicano men who retain a masculine or machismo performance are afforded some mobility to discreetly engage in homosexual behaviors, as long as it remains on the fringes. Effeminacy in Chicanos, Chicana lesbianism, and any other deviation are understood as attacks on the family. Queer Chicana/os may seek refuge in their families, if possible, because it is difficult for them to find spaces where they feel safe in the dominant and hostile white gay culture. Chicano machismo, religious traditionalism, and homophobia create challenges for them in feeling accepted by their families. Gabriel S. Estrada argues that upholding "Judeo-Christian mandates against homosexuality that are not native to [Indigenous Mexico]" exiles queer Chicana/o youth.

Mental health

Chicanos may seek out both Western biomedical healthcare and Indigenous health practices when dealing with trauma or illness. The effects of colonization have been shown to produce psychological distress among Indigenous communities. Intergenerational trauma, along with racism and institutionalized systems of oppression, has been shown to adversely impact the mental health of Chicanos and Latinos. Mexican Americans are three times more likely than European Americans to live in poverty. Chicano adolescents experience high rates of depression and anxiety. Chicana adolescents have higher rates of depression and suicidal ideation than their European-American and African-American peers. Chicano adolescents also experience high rates of homicide and suicide. Chicanos ages ten to seventeen are at a greater risk for mood and anxiety disorders than their European-American and African-American peers.
Scholars have determined that the reasons for this are unclear due to the scarcity of studies on Chicano youth, but intergenerational trauma, acculturative stress, and family factors are believed to contribute. Among Mexican immigrants who have lived in the United States for less than thirteen years, lower rates of mental health disorders were found in comparison to Mexican-Americans and Chicanos born in the United States. Scholar Yvette G. Flores concludes that these studies demonstrate that "factors associated with living in the United States are related to an increased risk of mental disorders." Risk factors for negative mental health include historical and contemporary trauma stemming from colonization, marginalization, discrimination, and devaluation. The disconnection of Chicanos from their Indigeneity has been cited as a cause of trauma and negative mental health:

Loss of language, cultural rituals, and spiritual practices creates shame and despair. The loss of culture and language often goes unmourned, because it is silenced and denied by those who occupy, conquer, or dominate. Such losses and their psychological and spiritual impact are passed down across generations, resulting in depression, disconnection, and spiritual distress in subsequent generations, which are manifestations of historical or intergenerational trauma.

Psychological distress may emerge from Chicanos being "othered" in society since childhood and is linked to psychiatric disorders and symptoms which are culturally bound: susto (fright), nervios (nerves), mal de ojo (evil eye), and ataque de nervios (an attack of nerves resembling a panic attack). Manuel X. Zamarripa discusses how mental health and spirituality are often seen as disconnected subjects in Western perspectives. Zamarripa states, "in our community, spirituality is key for many of us in our overall wellbeing and in restoring and giving balance to our lives". For Chicanos, Zamarripa recognizes that identity, community, and spirituality are three core aspects which are essential to maintaining good mental health.

Spirituality

Chicano spirituality has been described as a process of engaging in a journey to unite one's consciousness for the purposes of cultural unity and social justice. It brings together many elements and is therefore hybrid in nature. Scholar Regina M. Marchi states that Chicano spirituality "emphasizes elements of struggle, process, and politics, with the goal of creating a unity of consciousness to aid social development and political action". Lara Medina and Martha R. Gonzales explain that "reclaiming and reconstructing our spirituality based on non-Western epistemologies is central to our process of decolonization, particularly in these most troubling times of incessant Eurocentric, heteronormative patriarchy, misogyny, racial injustice, global capitalist greed, and disastrous global climate change." As a result, some scholars state that Chicano spirituality must involve a study of Indigenous Ways of Knowing (IWOK). The Circulo de Hombres group in San Diego, California, spiritually heals Chicano, Latino, and Indigenous men: "by exposing them to Indigenous-based frameworks, men of this cultural group heal and rehumanize themselves through Maya-Nahua Indigenous-based concepts and teachings", helping them process the intergenerational trauma and dehumanization that has resulted from colonization.
A study on the group reported that reconnecting with Indigenous worldviews was overwhelmingly successful in helping Chicano, Latino, and Indigenous men heal. As stated by Jesus Mendoza, "our bodies remember our indigenous roots and demand that we open our mind, hearts, and souls to our reality". Chicano spirituality is a way for Chicanos to listen, reclaim, and survive while disrupting coloniality. While historically Catholicism was the primary way for Chicanos to express their spirituality, this is changing rapidly. According to a Pew Research Center report in 2015, "the primary role of Catholicism as a conduit to spirituality has declined and some Chicanos have changed their affiliation to other Christian religions and many more have stopped attending church altogether." Increasingly, Chicanos are considering themselves spiritual rather than religious or part of an organized religion. A 2020 study on spirituality and Chicano men found that many Chicanos indicated that they benefited from connecting with Indigenous spiritual beliefs and worldviews rather than with organized Christian or Catholic religion.

Lara Medina defines spirituality as (1) knowledge of oneself, one's gifts and one's challenges; (2) co-creation, or a relationship with communities (others); and (3) a relationship with sacred sources of life and death, "the Great Mystery" or Creator. Jesus Mendoza writes that, for Chicanos, "spirituality is our connection to the earth, our pre-Hispanic history, our ancestors, the mixture of pre-Hispanic religion with Christianity ... a return to a non-Western worldview that understands all life as sacred." In her writing on Gloria Anzaldúa's idea of spiritual activism, AnaLouise Keating states that spirituality is distinct from organized religion and New Age thinking. Leela Fernandes defines spirituality as follows:

When I speak of spirituality, at the most basic level I am referring to an understanding of the self as encompassing body and mind, as well as spirit. I am also referring to a transcendent sense of interconnection that moves beyond the knowable, visible material world. This sense of interconnection has been described variously as divinity, the sacred, spirit, or simply the universe. My understanding is also grounded in a form of lived spirituality, which is directly accessible to all and which does not need to be mediated by religious experts, institutions or theological texts; this is what is often referred to as the mystical side of spirituality... Spirituality can be as much about practices of compassion, love, ethics, and truth defined in nonreligious terms as it can be related to the mystical reinterpretations of existing religious traditions.

David Carrasco states that Mesoamerican spiritual or religious beliefs have always evolved in response to the conditions of the world around them: "These ritual and mythic traditions were not mere repetitions of ancient ways. New rituals and mythic stories were produced to respond to ecological, social, and economic changes and crises." This was represented through the art of the Olmecs, Maya, and Mexica. European colonizers worked to destroy Mesoamerican worldviews regarding spirituality and to replace them with a Christian model.
The colonizers used syncretism in art and culture, exemplified through practices such as the idea presented in the Testerian Codices that "Jesus ate tortillas with his disciples at the last supper" or the creation of the Virgen de Guadalupe (mirroring the Christian Mary), in order to force Christianity into Mesoamerican cosmology. Chicanos can create new spiritual traditions by recognizing this history or "by observing the past and creating a new reality". Gloria Anzaldúa states that this can be achieved through nepantla spirituality, a space where, as stated by Jesus Mendoza, "all religious knowledge can coexist and create a new spirituality ... where no one is above the other ... a place where all is useful and none is rejected." Anzaldúa and other scholars acknowledge that this is a difficult process that involves navigating many internal contradictions in order to find a path towards spiritual liberation.

Cherríe Moraga calls for a deeper self-exploration of who Chicanos are in order to reach "a place of deeper inquiry into ourselves as a people ... possibly, we must turn our eyes away from racist America and take stock at the damages done to us. Possibly, the greatest risks yet to be taken are entre nosotros, where we write, paint, dance, and draw the wound for one another to build a stronger pueblo. The women artist seemed disposed to do this, their work often mediating the delicate area between cultural affirmation and criticism." Laura E. Pérez states in her study of Chicana art that "the artwork itself [is] altar-like, a site where the disembodied—divine, emotional, or social—[is] acknowledged, invoked, meditated upon, and released as a shared offering."

Cultural aspects

The diversity of Chicano cultural production is vast. Guillermo Gómez-Peña wrote that the complexity and diversity of the Chicano community includes influences from Central Americans, Caribbean people, Africans, and Asian Americans who have moved into Chicano communities, as well as from queer people of color. Many Chicano artists continue to question "conventional, static notions of Chicanismo," while others conform to more conventional cultural traditions.

Film

Chicano film, established in the 1960s, has been marginalized since its inception. The generally marginal status of Chicanos in the film industry has meant that many Chicano films are not released with wide theatrical distribution. Chicano film emerged from the creation of political plays and documentaries. These included El Teatro Campesino's Yo Soy Joaquín (1969), Luis Valdez's El Corrido (1976), and Efraín Gutiérrez's Please, Don't Bury Me Alive! (1976), the latter of which is referred to as the first full-length Chicano film. Docudramas then emerged, such as Esperanza Vasquez's Agueda Martínez (1977), Jesús Salvador Treviño's Raíces de Sangre (1977), and Robert M. Young's ¡Alambrista! (1977), followed by feature films including Luis Valdez's Zoot Suit (1981), Young's The Ballad of Gregorio Cortez (1982), Gregory Nava's My Family/Mi familia (1995) and Selena (1997), and Josefina López's Real Women Have Curves (2002). Chicana/o films continue to be regarded as a small niche in the film industry that has yet to receive mainstream commercial success. However, Chicana/o films have been influential in shaping how Chicana/os see themselves.

Literature

Chicano literature tends to focus on challenging the dominant narrative, while embracing notions of hybridity, including the use of Spanglish, as well as the blending of genre forms, such as fiction and autobiography.
José Antonio Villarreal's Pocho (1959) is widely recognized as the first major Chicano novel. Poet Alurista wrote that Chicano literature served an important role in pushing back against narratives by white Anglo-Saxon Protestant culture that sought to "keep Mexicans in their place." Rodolfo "Corky" Gonzales's "Yo Soy Joaquín" is one of the first examples of explicitly Chicano poetry. Other early influential poems included "El Louie" by José Montoya and Abelardo "Lalo" Delgado's poem "Stupid America." In 1967, Octavio Romano founded Tonatiuh-Quinto Sol Publications, the first dedicated Chicano publishing house. The novel Chicano (1970) by Richard Vasquez was the first novel about Mexican Americans to be released by a major publisher. It was widely read in high schools and universities during the 1970s and is now recognized as a breakthrough novel.

Chicana feminist writers have tended to focus on themes of identity, questioning how identity is constructed, who constructs it, and for what purpose in a racist, classist, and patriarchal structure. Characters in books such as Victuum (1976) by Isabella Ríos, The House on Mango Street (1983) by Sandra Cisneros, Loving in the War Years: lo que nunca pasó por sus labios (1983) by Cherríe Moraga, The Last of the Menu Girls (1986) by Denise Chávez, Margins (1992) by Terri de la Peña, and Gulf Dreams (1996) by Emma Pérez have also been read for how they intersect with themes of gender and sexuality. Catrióna Rueda Esquibel performs a queer reading of Chicana literature in With Her Machete in Her Hand (2006) to demonstrate how some of the intimate relationships between girls and women contributed to a discourse on homoeroticism and queer sexuality in Chicana/o literature. Chicano characters who were gay tended to be removed from the barrio and were typically portrayed with negative attributes, such as the character of "Joe Pete" in Pocho and the unnamed protagonist of John Rechy's City of Night (1963). Other characters in the Chicano canon may also be read as queer, including the unnamed protagonist of Tomás Rivera's ...y no se lo tragó la tierra (1971) and "Antonio Márez" in Rudolfo Anaya's Bless Me, Ultima (1972). Juan Bruce-Novoa wrote that homosexuality was "far from being ignored during the 1960s and 1970s," despite homophobia restricting representations: "our community is less sexually repressive than we might expect".

Music

Lalo Guerrero has been lauded as the "father of Chicano music." Beginning in the 1930s, he wrote songs in the big band and swing genres and expanded into traditional genres of Mexican music. During the farmworkers' rights campaign, he wrote music in support of César Chávez and the United Farm Workers. Other notable musicians include Selena, who sang a mixture of Mexican, Tejano, and American popular music and died in 1995 at the age of 23; Zack de la Rocha, social activist and lead vocalist of Rage Against the Machine; and Los Lonely Boys, a Texas-style country rock band.

Chicano electro

Chicano techno and electronic music artists DJ Rolando, Santiago Salazar, DJ Tranzo, and Esteban Adame have released music through independent labels like Underground Resistance, Planet E, Krown Entertainment, and Rush Hour. In the 1990s, house music artists such as DJ Juanito (Johnny Loopz), Rudy "Rude Dog" Gonzalez, and Juan V. released numerous tracks through Los Angeles-based house labels Groove Daddy Records and Bust A Groove.
DJ Rolando's techno track "Knights of the Jaguar," released on the UR label in 1999, became the best-known Chicano techno track after charting at #43 in the UK in 2000. Mixmag commented: "after it was released, it spread like wildfire all over the world. It's one of those rare tracks that feels like it can play for an eternity without anyone batting an eyelash." The track has consistently been placed on best-songs lists. The official video features portraits of Chicana/os in Detroit among Chicano murals, lowrider cars and bicycles, and scenes of lowrider lifestyle. Salazar and Adame are also affiliated with Underground Resistance and have collaborated with Nomadico. Salazar founded the music labels Major People, Ican (as in Mex-Ican, with Esteban Adame), and Historia y Violencia (with Juan Mendez a.k.a. Silent Servant) and released his debut album Chicanismo in 2015 to positive reviews. Nomadico's label Yaxteq, founded in 2015, has released tracks by veteran Los Angeles techno producer Xavier De Enciso and Honduran producer Ritmos.

Chicano folk

A growing Tex-Mex polka band trend, influenced by the music of Mexican immigrants, has in turn influenced much new Chicano folk music, especially on large-market Spanish-language radio stations and on television music video programs in the U.S. Some of these artists, like the band Quetzal, are known for the political content of their songs.

Chicano rap

Hip hop culture, which is cited as having formed in the 1970s street culture of African American, West Indian (especially Jamaican), and Puerto Rican youth in the New York City borough of the Bronx and is characterized by DJing, rap music, graffiti, and breakdancing, was adopted by many Chicano youth by the 1980s as its influence moved westward across the United States. Chicano artists began to develop their own style of hip hop. Rappers such as Ice-T and Eazy-E shared their music and commercial insights with Chicano rappers in the late 1980s. Chicano rapper Kid Frost, who is often cited as "the godfather of Chicano rap," was highly influenced by Ice-T and has even been described as his protégé. Chicano rap is a unique style of hip hop music which started with Kid Frost, who saw some mainstream exposure in the early 1990s. While Mellow Man Ace was the first mainstream rapper to use Spanglish, Frost's song "La Raza" paved the way for its use in American hip hop. Chicano rap tends to discuss themes of importance to young urban Chicanos. Some of the most prominent Chicano artists include A.L.T., Lil Rob, Psycho Realm, Baby Bash, Serio, A Lighter Shade of Brown, and Funky Aztecs. Chicano rap artists with less mainstream exposure, yet with popular underground followings, include Cali Life Style, Ese 40'z, Sleepy Loka, Ms. Sancha, Mac Rockelle, Sir Dyno, and Choosey. Chicano R&B artists include Paula DeAnda, Amanda Perez, Frankie J, and Victor Ivan Santos (early member of the Kumbia Kings and associated with Baby Bash).
Chicano jazz

Although Latin jazz is most popularly associated with artists from the Caribbean (particularly Cuba) and Brazil, young Mexican Americans have played a role in its development over the years, going back to the 1930s and early 1940s, the era of the zoot suit, when young Mexican-American musicians in Los Angeles and San Jose began to experiment with a jazz-like fusion genre that has recently grown in popularity among Mexican Americans.

Chicano rock

In the 1950s, 1960s and 1970s, a wave of Chicano pop music surfaced through innovative musicians Carlos Santana, Johnny Rodriguez, Ritchie Valens and Linda Ronstadt. Joan Baez, who is also of Mexican-American descent, included Hispanic themes in some of her protest folk songs. Chicano rock is rock music performed by Chicano groups or music with themes derived from Chicano culture. There are two undercurrents in Chicano rock. One is a devotion to the original rhythm and blues roots of rock and roll, including Ritchie Valens, Sunny and the Sunglows, and ? and the Mysterians. Groups inspired by this include the Sir Douglas Quintet, Thee Midniters, Los Lobos, War, Tierra, and El Chicano, as well as the late Chicano bluesman Randy Garibay. The second is an openness to Latin American sounds and influences; Trini Lopez, Santana, Malo, Azteca, Toro, Ozomatli and other Chicano Latin rock groups follow this approach. Chicano rock also crossed paths with other Latin rock genres (rock en español) performed by Cuban and Puerto Rican artists, such as Joe Bataan and Ralphi Pagan, and with South American nueva canción. The rock band The Mars Volta combines elements of progressive rock with traditional Mexican folk music and Latin rhythms along with Cedric Bixler-Zavala's Spanglish lyrics.

Chicano punk is a branch of Chicano rock. Many bands emerged from the California punk scene, including The Zeros, Bags, Los Illegals, The Brat, The Plugz, Manic Hispanic, and the Cruzados, as well as others from outside California, including Mydolls from Houston, Texas, and Los Crudos from Chicago, Illinois. Some music historians argue that Chicanos in Los Angeles in the late 1970s may have independently co-founded punk rock, in parallel with the already-acknowledged founders whose music reached major U.S. cities from Europe. The rock band ? and the Mysterians, which was composed primarily of Mexican-American musicians, was the first band to be described as punk rock. The term was reportedly coined in 1971 by rock critic Dave Marsh in a review of their show for Creem magazine.

Performance arts

El Teatro Campesino (The Farmworkers' Theater) was founded by Luis Valdez and Agustin Lira in 1965 as the cultural wing of the United Farm Workers (UFW), emerging from the Delano grape strike that began that year. All of the actors were farmworkers and involved in organizing for farmworkers' rights. Its first performances sought to recruit members for the UFW and dissuade strikebreakers. Many early performances were not scripted but were instead conceived under the direction of Valdez and others as actos, in which a scenario would be proposed for a scene and the dialogue would then be improvised.

Chicano performance art continued with the work of Los Angeles' comedy troupe Culture Clash, Guillermo Gómez-Peña, and Nao Bustamante, known internationally for her conceptual art pieces and as a participant in Work of Art: The Next Great Artist. Chicano performance art became popular in the 1970s, blending humor and pathos for tragicomic effect.
Groups such as Asco and the Royal Chicano Air Force illustrated this aspect of performance art through their work. Asco (Spanish for nausea or disgust), composed of Willie Herón, Gronk, Harry Gamboa Jr., and Patssi Valdez, created performance pieces such as the Walking Mural, walking down Whittier Boulevard dressed as "a multifaceted mural, a Christmas tree, and the Virgin of Guadalupe." Asco continued its conceptual performance work until 1987. In the 1990s, the San Diego-based artist cooperative of David Avalos, Louis Hock, and Elizabeth Sisco used their $5,000 National Endowment for the Arts fellowship subversively, deciding to circulate the money back to the community by "handing 10-dollar bills to undocumented workers to spend as they please." Their piece Arte Reembolsa (Art Rebate) created controversy among the art establishment, with the documentation of the piece featuring "footage of U.S. House and Senate members questioning whether the project was, in fact, art."

One of the most well-known performance art troupes is La Pocha Nostra, whose performance pieces have received extensive coverage. The troupe has been active since 1993 and has remained relevant into the 2010s and 2020s through its political commentary, including anti-corporate stances. The troupe regularly uses parody and humor in its performances to make complex commentary on social issues, creating thought-provoking pieces intended to challenge audiences to think differently.

Visual arts

The Chicano visual art tradition, like the identity, is grounded in community empowerment and resisting assimilation and oppression. Prior to the introduction of spray cans, paint brushes were used by Chicano "shoeshine boys [who] marked their names on the walls with their daubers to stake out their spots on the sidewalk" in the early 20th century. Pachuco graffiti culture in Los Angeles was already "in full bloom" by the 1930s and 1940s, when pachucos developed their placa, "a distinctive calligraphic writing style" that went on to influence contemporary graffiti tagging. Paño, a form of pinto arte (pinto being a caló term for a male prisoner) using pen and pencil, developed in the 1930s, first using bed sheets and pillowcases as canvases. Paño has been described as rasquachismo, a Chicano worldview and artmaking method which makes the most from the least.

In the mid-20th century, graffiti artists such as Charles "Chaz" Bojórquez developed an original style of graffiti art known as West Coast Cholo style, influenced by Mexican murals and pachuco placas (tags which indicate territorial boundaries). In the 1960s, Chicano graffiti artists from San Antonio to L.A. (especially in East L.A., Whittier, and Boyle Heights) used the art form to challenge authority, tagging police cars, buildings, and subways as "a demonstration of their bravado and anger", understanding their work as "individual acts of pride or protest, gang declarations of territory or challenge, and weapons in a class war." Chicano graffiti artists wrote C/S as an abbreviation for con safos or the variant con safo (loosely meaning "don't touch this" and expressing a "the same to you" attitude), a common expression among Chicanos on the eastside of Los Angeles and throughout the Southwest. The Chicano Movement and political identity had heavily influenced Chicano artists by the 1970s.
Alongside the Black arts movement, this led to the development of institutions such as Self-Help Graphics, Los Angeles Contemporary Exhibitions, and Plaza de la Raza. Artists such as Harry Gamboa Jr., Gronk, and Judith Baca created art which "stood in opposition to the commercial galleries, museums, and civic institutional mainstream". This was exemplified by Asco's tagging of LACMA in 1972, after "a curator refused to even entertain the idea of a Chicano art show within its walls". Chicano art collectives such as the Royal Chicano Air Force, founded in 1970 by Ricardo Favela, José Montoya and Esteban Villa, supported the United Farm Workers movement through art activism, using art to create and inspire social change. Favela believed that it was important to keep the culture alive through their artwork, stating, "I was dealing with art forms very foreign to me, always trying to do western art, but there was always something lacking... it was very simple: it was just my Chicano heart wanting to do Chicano art." Other Chicano visual art collectives included Con Safo in San Antonio, whose members included Felipe Reyes, José Esquivel, Roberto Ríos, Jesse Almazán, Jesse "Chista" Cantú, Jose Garza, Mel Casas, Rudy Treviño, César Martínez, Kathy Vargas, Amado Peña, Jr., Rolando Briseño, and Roberto Gonzalez, and the Mujeres Muralistas in the Mission District, San Francisco, whose members included Patricia Rodriguez, Graciela Carrillo, Consuelo Mendez, and Irene Perez.

Chicano muralism, which began in the 1960s, became a state-sanctioned art form in the 1970s as an attempt by outsiders to "prevent gang violence and dissuade graffiti practices". This led to the creation of murals at Estrada Courts and other sites throughout Chicano communities. In some instances, these murals were covered with the placas they had been instituted by the state to prevent. Marcos Sanchez-Tranquilino states that "rather than vandalism, the tagging of one's own murals points toward a complex sense of wall ownership and a social tension created by the uncomfortable yet approving attentions of official cultural authority." This created a division between established Chicano artists who celebrated inclusion and acceptance by the dominant culture and younger Chicano artists who "saw greater power in renegade muralism and barrio calligraphy than in state-sanctioned pieces."

Chicano poster art became prominent in the 1970s as a way to challenge political authority, with pieces such as Rupert García's Save Our Sister (1972), depicting Angela Davis, and Yolanda M. López's Who's the Illegal Alien, Pilgrim? (1978) addressing settler colonialism. The oppositional current of Chicano art was bolstered in the 1980s by a rising hip hop culture. The Olympic freeway murals, including Frank Romero's Going to the Olympics, created for the 1984 Olympic Games in Los Angeles, became another site of contestation as Chicano and other graffiti artists tagged the state-sanctioned public artwork. Government officials, muralists, and some residents were unable to understand the motivations for this and described it as "mindless", "animalistic" vandalism perpetrated by "kids" who simply lack respect. L.A. had developed a distinct graffiti culture by the 1990s and, with the rise of drugs and violence, Chicano youth gravitated towards graffiti to express themselves and to mark their territory amidst state-sanctioned disorder.
Following the Rodney King riots and the murder of Latasha Harlins, which exemplified an explosion of racial tensions bubbling under the surface of American society, racialized youth in L.A., "feeling forgotten, angry, or marginalized, [embraced] graffiti's expressive power [as] a tool to push back." Chicano art, although accepted into some institutional art spaces in shows like Chicano Art: Resistance and Affirmation, was still largely excluded from many mainstream art institutions in the 1990s. By the 2000s, attitudes towards graffiti in white hipster culture were changing as it became known as "street art"; in academic circles, "street art" was termed "post-graffiti". In traditionally Chicano neighborhoods like Echo Park, where the LAPD had once deployed CRASH (Community Resources Against Street Hoodlums) units and "often brutalized suspected taggers and gang members", street art was now being mainstreamed by the white art world.

Despite this shift, Chicano artists continued to challenge what was acceptable to both insiders and outsiders of their communities. Controversy surrounding Chicana artist Alma López's "Our Lady" at the Museum of International Folk Art in 2001 erupted when "local demonstrators demanded the image be removed from the state-run museum". Previously, López's digital mural "Heaven" (2000), which depicted two Latina women embracing, had been vandalized. López received homophobic slurs, threats of physical violence, and over 800 hate mail inquiries for "Our Lady." Santa Fe Archbishop Michael J. Sheehan referred to the woman in López's piece as "a tart or a street woman". López stated that the response came from the conservative Catholic Church, "which finds women's bodies inherently sinful, and thereby promot[es] hatred of women's bodies." The art was protested again in 2011.

Manuel Paul's mural "Por Vida" (2015) at Galeria de la Raza in the Mission District, San Francisco, which depicted queer and trans Chicanos, was targeted multiple times after its unveiling. Paul, a queer DJ and artist of the Maricón Collective, received online threats for the work. Ani Rivera, director of Galeria de la Raza, attributed the anger towards the mural to gentrification, which has led "some people [to] associate LGBT people with non-Latino communities." The mural was meant to challenge "long-held assumptions regarding the traditional exclusivity of heterosexuality in lowrider culture". Some attributed the negative response to the mural's direct challenging of machismo and heteronormativity in the community. Xandra Ibarra's video art Spictacle II: La Tortillera (2004) was censored in 2020 by San Antonio's Department of Arts and Culture from "XicanX: New Visions", a show which aimed to challenge "previous and existing surveys of Chicano and Latino identity-based exhibitions" through highlighting "the womxn, queer, immigrant, indigenous and activist artists who are at the forefront of the movement". Ibarra stated, "the video is designed to challenge normative ideals of Mexican womanhood and is in alignment with the historical lineage of LGBTQAI+ artists' strategies to intervene in homophobic and sexist violence."

International influence

Chicano culture has become popular in some areas internationally, most prominently in Japan, Brazil, and Thailand. Chicano ideas such as hybridity and borderlands theory have also found influence elsewhere, for example in decoloniality.
In São Paulo, Chicano cultural influence has formed the "Cho-Low" (combination of Cholo and Lowrider) subculture that has formed a sense of cultural pride among youth. Chicano cultural influence is strong in Japan, where Chicano culture took hold in the 1980s and continued to grow with contributions from Shin Miyata, Junichi Shimodaira, Miki Style, Night Tha Funksta, and MoNa (Sad Girl). Miyata owns a record label, Gold Barrio Records, that re-releases Chicano music. Chicano fashion and other cultural aspects have also been adopted in Japan. There has been debate over whether this is cultural appropriation, with most arguing that it is appreciation rather than appropriation. In an interview asking why Chicano culture is popular in Japan, two long-time proponents of Chicano culture in Japan agreed that "it's not about Mexico or about America: it's an alluring quality unique to the hybrid nature of Chicano and imprinted in all its resulting art forms, from lowriders in the '80s to TikTok videos today, that people relate to and appreciate, not only in Japan but around the world." Most recently, Chicano culture has found influence in Thailand among working-class men and women that is called "Thaino" culture. They state that they have disassociated the violence that Hollywood portrays of Chicanos from the Chicano people themselves. They have adopted rules of no cocaine or amphetamines, and only marijuana, which is legal in Thailand. The leader of one group stated that he was inspired by how Chicanos created a culture out of defiance "to fight against people who were racist toward them" and that this inspired him, since he was born in a slum in Thailand. He also stated "if you look closely at [Chicano] culture, you'll notice how gentle it is. You can see this in their Latin music, dances, clothes, and how they iron their clothes. It's both neat and gentle." See also Caló Casta Chicana feminism Chicano Moratorium Chicano nationalism Chicano Park Cosmic race Josefa Segovia Latino punk Mexican Americans Race (U.S. Census) References Further reading Maylei Blackwell, ¡Chicana Power!: Contested Histories of Feminism in the Chicano Movement.Austin: University of Texas Press, 2011. Rodolfo Acuña, Occupied America: A History of Chicanos, Longman, 2006. John R. Chavez, "The Chicano Image and the Myth of Aztlan Rediscovered", in Patrick Gerster and Nicholas Cords (eds.), Myth America: A Historical Anthology, Volume II. St. James, New York: Brandywine Press, 1997. John R. Chavez, The Lost Land: A Chicano Image of the American Southwest, Las Cruces: New Mexico State University Publications, 1984. Lorena Oropeza, Raza Si, Guerra No: Chicano Protest and Patriotism during the Viet Nam War Era. Los Angeles:University of California Press, 2005. . Ignacio López-Calvo, Latino Los Angeles in Film and Fiction: The Cultural Production of Social Anxiety. University of Arizona Press, 2011. Natalia Molina, Fit to Be Citizens?: Public Health and Race in Los Angeles, 1879–1940. Los Angeles: University of California Press, 2006. Michael A. Olivas, Colored Men and Hombres Aquí: Hernandez V. Texas and the Emergence of Mexican American Lawyering. Arte Público Press, 2006. Randy J. Ontiveros, In the Spirit of a New People: The Cultural Politics of the Chicano Movement. New York University Press, 2014. Gregorio Riviera and Tino Villanueva (eds.), MAGINE: Literary Arts Journal. Special Issue on Chicano Art. Vol. 3, Nos. 1 & 2. Boston: Imagine Publishers. 1986. F. Arturo Rosales, Chicano! 
The History of the Mexican American Civil Rights Movement. Houston, Texas: Arte Publico Press, 1996. Lorena Oropeza, The King of Adobe: Reies López Tijerina, Lost Prophet of the Chicano Movement. Chapel Hill, North Carolina: The University of North Carolina Press, 2019. External links California Ethnic and Multicultural Archives – In the Chicano/Latino Collections California Ethnic and Multicultural Archives – Digital Chicano Art Chicano Studies Research Center Chicano tattoo gallery Education and the Mexican-American; Racism in America : past, present, future symposium 1968-10-03, National Records and Archives Administration, American Archive of Public Broadcasting El Centro Chicano y Latino ImaginArte – Interpreting and Re-imaging Chican@Art
5717
https://en.wikipedia.org/wiki/Canary%20Islands
Canary Islands
The Canary Islands (; , ), also known informally as the Canaries, are a Spanish autonomous community and archipelago in Macaronesia in the Atlantic Ocean. At their closest point to the African mainland, they are west of Morocco. They are the southernmost of the autonomous communities of Spain. The islands have a population of 2.2 million people and are the most populous special territory of the European Union. The eight main islands are (from largest to smallest in area) Tenerife, Fuerteventura, Gran Canaria, Lanzarote, La Palma, La Gomera, El Hierro and La Graciosa. The archipelago includes many smaller islands and islets, including Alegranza, Isla de Lobos, Montaña Clara, Roque del Oeste, and Roque del Este. It also includes a number of rocks, including Garachico and Anaga. In ancient times, the island chain was often referred to as "the Fortunate Isles". The Canary Islands are the southernmost region of Spain, and the largest and most populous archipelago of Macaronesia. Because of their location, the Canary Islands have historically been considered a link between the four continents of Africa, North America, South America, and Europe. In 2019, the Canary Islands had a population of 2,153,389, with a density of 287.39 inhabitants per km2, making it the eighth most populous autonomous community of Spain. The population is mostly concentrated in the two capital islands: around 43% on the island of Tenerife and 40% on the island of Gran Canaria. The Canary Islands, especially Tenerife, Gran Canaria, Fuerteventura, and Lanzarote, are a major tourist destination, with over 12 million visitors per year. This is due to their beaches, subtropical climate, and important natural attractions, especially Maspalomas in Gran Canaria and Mount Teide (a World Heritage Site) in Tenerife. Mount Teide is the highest peak in Spain and the 4th tallest volcano in the world, measured from its base on the ocean floor. The islands have warm summers and winters warm enough for the climate to be technically tropical at sea level. The amount of precipitation and the level of maritime moderation vary depending on location and elevation. The archipelago includes green areas as well as desert. The islands' high mountains are ideal for astronomical observation, because they lie above the temperature inversion layer. As a result, the archipelago boasts two professional observatories: the Teide Observatory on Tenerife, and Roque de los Muchachos Observatory on La Palma. In 1927, the Province of Canary Islands was split into two provinces. In 1982, the autonomous community of the Canary Islands was established. The cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria are, jointly, the capitals of the islands. Those cities are also, respectively, the capitals of the provinces of Santa Cruz de Tenerife and Las Palmas. Las Palmas de Gran Canaria has been the largest city in the Canaries since 1768, except for a brief period in the 1910s. Between the 1833 territorial division of Spain and 1927, Santa Cruz de Tenerife was the sole capital of the Canary Islands. In 1927, it was ordered by decree that the capital of the Canary Islands would be shared between two cities, and this arrangement persists to the present day. The third largest city in the Canary Islands is San Cristóbal de La Laguna (another World Heritage Site) on Tenerife. 
During the Age of Sail, the islands were the main stopover for Spanish galleons during the Spanish colonisation of the Americas, which sailed that far south in order to catch the prevailing northeasterly trade winds. Etymology The name Islas Canarias is likely derived from the Latin name Canariae Insulae, meaning "Islands of the Dogs", a name that was evidently generalized from the ancient name of one of these islands, Canaria – presumably Gran Canaria. According to the historian Pliny the Elder, the island Canaria contained "vast multitudes of dogs of very large size". The connection to dogs is retained in their depiction on the islands' coat-of-arms. Other theories speculate that the name comes from the Nukkari Berber tribe living in the Moroccan Atlas, named in Roman sources as Canarii, though Pliny again mentions the relation of this term with dogs. The name of the islands is not derived from the canary bird; rather, the birds are named after the islands. Physical geography Tenerife is the largest and most populous island of the archipelago. Gran Canaria, with 865,070 inhabitants, is both the Canary Islands' second most populous island, and the third most populous one in Spain after Tenerife (966,354 inhabitants) and Majorca (896,038 inhabitants). The island of Fuerteventura is the second largest in the archipelago and located from the African coast. The islands form the Macaronesia ecoregion with the Azores, Cape Verde, Madeira, and the Savage Isles. The Canary Islands is the largest and most populated archipelago of the Macaronesia region. The archipelago consists of seven large and several smaller islands, all of which are volcanic in origin. According to the position of the islands with respect to the north-east trade winds, the climate can be mild and wet or very dry. Several native species form laurisilva forests. As a consequence, the individual islands in the Canary archipelago tend to have distinct microclimates. Those islands such as El Hierro, La Palma and La Gomera lying to the west of the archipelago have a climate which is influenced by the moist Canary Current. They are well vegetated even at low levels and have extensive tracts of sub-tropical laurisilva forest. As one travels east toward the African coast, the influence of the current diminishes, and the islands become increasingly arid. Fuerteventura and Lanzarote, the islands which are closest to the African mainland, are effectively desert or semi desert. Gran Canaria is known as a "continent in miniature" for its diverse landscapes like Maspalomas and Roque Nublo. In terms of its climate Tenerife is particularly interesting. The north of the island lies under the influence of the moist Atlantic winds and is well vegetated, while the south of the island around the tourist resorts of Playa de las Américas and Los Cristianos is arid. The island rises to almost above sea level, and at altitude, in the cool relatively wet climate, forests of the endemic pine Pinus canariensis thrive. Many of the plant species in the Canary Islands, like the Canary Island pine and the dragon tree, Dracaena draco are endemic, as noted by Sabin Berthelot and Philip Barker Webb in their work, L'Histoire Naturelle des Îles Canaries (1835–50). Climate The climate is warm subtropical and generally semidesertic, moderated by the sea and in summer by the trade winds. There are a number of microclimates and the classifications range mainly from semi-arid to desert. 
According to Köppen, the majority of the Canary Islands have a hot desert climate (BWh) or a hot semi-arid climate (BSh), caused in part by the cool Canary Current. There is also a humid subtropical climate, strongly influenced by the ocean, in the middle of the islands of La Gomera, Tenerife and La Palma, where laurisilva cloud forests grow. Geology The seven major islands, one minor island, and several small islets were originally volcanic islands, formed by the Canary hotspot. The Canary Islands are the only place in Spain where volcanic eruptions have been recorded during the Modern Era, with some volcanoes still active (El Hierro, 2011). Volcanic islands such as those in the Canary chain often have steep ocean cliffs caused by catastrophic debris avalanches and landslides. The island chain's most recent eruption occurred at Cumbre Vieja, a volcanic ridge on La Palma, in 2021. The Teide volcano on Tenerife is the highest mountain in Spain, and the third tallest volcano on Earth on a volcanic ocean island. All the islands except La Gomera have been active in the last million years; four of them (Lanzarote, Tenerife, La Palma and El Hierro) have historical records of eruptions since European discovery. The islands rise from Jurassic oceanic crust associated with the opening of the Atlantic. Underwater magmatism commenced during the Cretaceous, and continued to the present day. The current islands reached the ocean's surface during the Miocene. The islands were once considered as a distinct physiographic section of the Atlas Mountains province, which in turn is part of the larger African Alpine System division, but are nowadays recognized as being related to a magmatic hot spot. In the summer of 2011 a series of low-magnitude earthquakes occurred beneath El Hierro. These had a linear trend of northeast–southwest. In October a submarine eruption occurred about south of Restinga. This eruption produced gases and pumice, but no explosive activity was reported. The following table shows the highest mountains in each of the islands: Natural symbols The official natural symbols associated with the Canary Islands are the bird Serinus canaria (canary) and the Phoenix canariensis palm. National parks Four of Spain's thirteen national parks are located in the Canary Islands, more than in any other autonomous community. Two of these have been declared UNESCO World Heritage Sites and the other two are part of Biosphere Reserves. The parks are: Teide National Park is the oldest and largest national park in the Canary Islands and one of the oldest in Spain. Located in the geographic centre of the island of Tenerife, it is the most visited national park in Spain. In 2010, it became the most visited national park in Europe and the second most visited worldwide. The park's highlight is the Teide volcano; standing at an altitude of , it is the highest elevation of the country and the third largest volcano on Earth from its base. In 2007, the Teide National Park was declared one of the 12 Treasures of Spain. Politics Governance The regional executive body, the Government of the Canary Islands, is presided over by Fernando Clavijo Batlle (Canarian Coalition), the current President of the Canary Islands. The latter is invested by the members of the regional legislature, the Parliament of the Canary Islands, which consists of 70 elected legislators. The last regional election took place in May 2023. The islands have 14 seats in the Spanish Senate. 
Of these, 11 seats are directly elected (3 for Gran Canaria, 3 for Tenerife, and 1 each for Lanzarote (including La Graciosa), Fuerteventura, La Palma, La Gomera and El Hierro) while the other 3 are appointed by the regional legislature. Political geography The Autonomous Community of the Canary Islands consists of two provinces (), Las Palmas and Santa Cruz de Tenerife, whose capitals (Las Palmas de Gran Canaria and Santa Cruz de Tenerife) are capitals of the autonomous community. Each of the seven major islands is ruled by an island council named Cabildo Insular. Each island is subdivided into smaller municipalities (municipios); Las Palmas is divided into 34 municipalities, and Santa Cruz de Tenerife is divided into 54 municipalities. The international boundary of the Canaries is one subject of dispute in the Morocco-Spain relations. Moreover, in 2022 the UN has declared the Canary Island's territorial waters as Moroccan coast and Morocco has authorised gas and oil exploration in what the Canary Islands states to be Canarian territorial waters and Western Sahara waters. Morocco's official position is that international laws regarding territorial limits do not authorise Spain to claim seabed boundaries based on the territory of the Canaries, since the Canary Islands enjoy a large degree of autonomy. In fact, the islands do not enjoy any special degree of autonomy as each one of the Spanish regions is considered an autonomous community with equal status to the European ones. Canarian nationalism There are some pro-independence political parties, like the National Congress of the Canaries (CNC) and the Popular Front of the Canary Islands, but their popular support is almost insignificant, with no presence in either the autonomous parliament or the cabildos insulares. According to a 2012 study by the Centro de Investigaciones Sociológicas, when asked about national identity, the majority of respondents from the Canary Islands (53.8%) consider themselves Spanish and Canarian in equal measures, followed by 24% who consider themselves more Canarian than Spanish. Only 6.1% of the respondents consider themselves only Canarian while 7% consider themselves only Spanish. Defence The defence of the territory is the responsibility of the Spanish Armed Forces. As such, various components of the Army, Navy, Air Force and the Civil Guard are based in the territory. History Ancient and pre-Hispanic times Before the arrival of humans, the Canaries were inhabited by prehistoric animals; for example, the giant lizard (Gallotia goliath), the Tenerife and Gran Canaria giant rats, and giant prehistoric tortoises, Geochelone burchardi and Geochelone vulcanica. Although the original settlement of what are now called the Canary Islands is not entirely clear, linguistic, genetic, and archaeological analyses indicate that indigenous peoples were living on the Canary Islands at least 2000 years ago but possibly one thousand years or more before, and that they shared a common origin with the Berbers on the nearby North African coast. Reaching the islands may have taken place using several small boats, landing on the easternmost islands Lanzarote and Fuerteventura. These groups came to be known collectively as the Guanches, although Guanches had been the name for only the indigenous inhabitants of Tenerife. As José Farrujia describes, 'The indigenous Canarians lived mainly in natural caves, usually near the coast, 300–500m above sea level. 
These caves were sometimes isolated but more commonly formed settlements, with burial caves nearby'. Archaeological work has uncovered a rich culture visible through artefacts of ceramics, human figures, fishing, hunting and farming tools, plant fibre clothing and vessels, as well as cave paintings. At Lomo de los Gatos on Gran Canaria, a site occupied from 1,600 years ago up until the 1960s, round stone houses, complex burial sites, and associated artefacts have been found. Across the islands are thousands of Libyco-Berber alphabet inscriptions scattered and they have been extensively documented by many linguists. The social structure of indigenous Canarians encompassed 'a system of matrilineal descent in most of the islands, in which inheritance was passed on via the female line. Social status and wealth were hereditary and determined the individual's position in the social pyramid, which consisted of the king, the relatives of the king, the lower nobility, villeins, plebeians, and finally executioners, butchers, embalmers, and prisoners'. Their religion was animist, centring on the sun and moon, as well as natural features such as mountains. Exploration The islands may have been visited by the Phoenicians, the Greeks, and the Carthaginians. King Juba II, Caesar Augustus's Numidian protégé, is credited with discovering the islands for the Western world. According to Pliny the Elder, Juba found the islands uninhabited, but found "a small temple of stone" and "some traces of buildings". Juba dispatched a naval contingent to re-open the dye production facility at Mogador in what is now western Morocco in the early first century AD. That same naval force was subsequently sent on an exploration of the Canary Islands, using Mogador as their mission base. The names given by Romans to the individual islands were Ninguaria or Nivaria (Tenerife), Canaria (Gran Canaria), Pluvialia or Invale (Lanzarote), Ombrion (La Palma), Planasia (Fuerteventura), Iunonia or Junonia (El Hierro) and Capraria (La Gomera). From the 14th century onward, numerous visits were made by sailors from Majorca, Portugal and Genoa. Lancelotto Malocello settled on Lanzarote in 1312. The Majorcans established a mission with a bishop in the islands that lasted from 1350 to 1400. Castilian conquest In 1402, the Castilian colonisation of the islands began with the expedition of the French explorers Jean de Béthencourt and Gadifer de la Salle, nobles and vassals of Henry III of Castile, to Lanzarote. From there, they went on to conquer Fuerteventura (1405) and El Hierro. These invasions were "brutal cultural and military clashes between the indigenous population and the Castilians" lasting over a century due to formidable resistance by indigenous Canarians. Professor Mohamed Adhikari has defined the conquest of the islands as a genocide of the Guanches. Béthencourt received the title King of the Canary Islands, but still recognised King Henry III as his overlord. It was not a simple military enterprise, given the aboriginal resistance on some islands. Neither was it politically, since the particular interests of the nobility (determined to strengthen their economic and political power through the acquisition of the islands) conflicted with those of the states, particularly Castile, which were in the midst of territorial expansion and in a process of strengthening of the Crown against the nobility. Historians distinguish two periods in the conquest of the Canary Islands: Aristocratic conquest (Conquista señorial). 
This refers to the early conquests carried out by the nobility, for their own benefit and without the direct participation of the Crown of Castile, which merely granted rights of conquest in exchange for pacts of vassalage between the noble conqueror and the Crown. One can identify within this period an early phase known as the Betancurian or Norman Conquest, carried out by Jean de Bethencourt (who was originally from Normandy) and Gadifer de la Salle between 1402 and 1405, which involved the islands of Lanzarote, El Hierro and Fuerteventura. The subsequent phase is known as the Castilian Conquest, carried out by Castilian nobles who acquired, through purchases, assignments and marriages, the previously conquered islands and also incorporated the island of La Gomera around 1450. Royal conquest (Conquista realenga). This defines the conquest between 1478 and 1496, carried out directly by the Crown of Castile, during the reign of the Catholic Monarchs, who armed and partly financed the conquest of those islands which were still unconquered: Gran Canaria, La Palma and Tenerife. This phase of the conquest came to an end in the year 1496, with the dominion of the island of Tenerife, bringing the entire Canarian Archipelago under the control of the Crown of Castile. Béthencourt also established a base on the island of La Gomera, but it would be many years before the island was fully conquered. The natives of La Gomera, and of Gran Canaria, Tenerife, and La Palma, resisted the Castilian invaders for almost a century. In 1448 Maciot de Béthencourt sold the lordship of Lanzarote to Portugal's Prince Henry the Navigator, an action that was accepted by neither the natives nor the Castilians. Despite Pope Nicholas V ruling that the Canary Islands were under Portuguese control, the crisis swelled to a revolt which lasted until 1459 with the final expulsion of the Portuguese. In 1479, Portugal and Castile signed the Treaty of Alcáçovas, which settled disputes between Castile and Portugal over the control of the Atlantic. This treaty recognized Castilian control of the Canary Islands but also confirmed Portuguese possession of the Azores, Madeira, and the Cape Verde islands, and gave the Portuguese rights to any further islands or lands in the Atlantic that might be discovered. The Castilians continued to dominate the islands, but due to the topography and the resistance of the native Guanches, they did not achieve complete control until 1496, when Tenerife and La Palma were finally subdued by Alonso Fernández de Lugo. As a result of this 'the native pre-Hispanic population declined quickly due to war, epidemics, and slavery'. The Canaries were incorporated into the Kingdom of Castile. After the conquest and the introduction of slavery After the conquest, the Castilians imposed a new economic model, based on single-crop cultivation: first sugarcane; then wine, an important item of trade with England. Gran Canaria was conquered by the Crown of Castile on 6 March 1480, and Tenerife was conquered in 1496, and each had its own governor. There has been speculation that the abundance of Roccella tinctoria on the Canary Islands offered a profit motive for Jean de Béthencourt during his conquest of the islands. Lichen has been used for centuries to make dyes. This includes royal purple colors derived from roccella tinctoria, also known as orseille. The objective of the Spanish Crown to convert the islands into a powerhouse of cultivation required a much larger labour force. 
This was attained through a brutal practice of enslavement, not only of indigenous Canarians but also of large numbers of Africans who were forcibly taken from North and Sub-Saharan Africa. Whilst the first slave plantations in the Atlantic region were across Madeira, Cape Verde, and the Canary Islands, it was only the Canary Islands which had an indigenous population and were therefore invaded rather than newly occupied. This agricultural industry was largely based on sugarcane, and the Castilians converted large swaths of the landscape for sugarcane production, and for the processing and manufacturing of sugar, facilitated by enslaved labourers. The cities of Santa Cruz de Tenerife and Las Palmas de Gran Canaria became a stopping point for the Spanish traders, as well as conquistadors and missionaries, on their way to the New World. This trade route brought great wealth to the Castilian social sectors of the islands, which soon attracted merchants and adventurers from all over Europe. As the wealth grew, enslaved African workers were also forced into demeaning domestic roles for the rich Castilians on the islands, such as servants in their houses. Research on the skeletons of some of these enslaved workers from the burial site of Finca Clavijo on Gran Canaria has shown that 'all of the adults buried in Finca Clavijo undertook extensive physical activity that involved significant stress on the spine and appendicular skeleton', the result of relentless hard labour, akin to the physical abnormalities found among enslaved peoples from other sugarcane plantations around the world. These findings of the physical strain that the enslaved at Finca Clavijo were subjected to in order to provide wealth for the Spanish elite have inspired a poem by British writer Ralph Hoyte, entitled Close to the Bone. As a result of the huge wealth generated, magnificent palaces and churches were built on La Palma during this busy, prosperous period. The Church of El Salvador survives as one of the island's finest examples of the architecture of the 16th century. Civilian architecture survives in forms such as Casas de los Sánchez-Ochando or Casa Quintana. The Canaries' wealth invited attacks by pirates and privateers. Ottoman Turkish admiral and privateer Kemal Reis ventured into the Canaries in 1501, while Murat Reis the Elder captured Lanzarote in 1585. The most severe attack took place in 1599, during the Dutch Revolt. A Dutch fleet of 74 ships and 12,000 men, commanded by Pieter van der Does, attacked the capital Las Palmas de Gran Canaria (the city had 3,500 of Gran Canaria's 8,545 inhabitants). The Dutch attacked the Castillo de la Luz, which guarded the harbor. The Canarians evacuated civilians from the city, and the Castillo surrendered (but not the city). The Dutch moved inland, but Canarian cavalry drove them back to Tamaraceite, near the city. The Dutch then laid siege to the city, demanding the surrender of all its wealth. They received 12 sheep and 3 calves. Furious, the Dutch sent 4,000 soldiers to attack the Council of the Canaries, who were sheltering in the village of Santa Brígida. Three hundred Canarian soldiers ambushed the Dutch in the village of Monte Lentiscal, killing 150 and forcing the rest to retreat. The Dutch concentrated on Las Palmas de Gran Canaria, attempting to burn it down. The Dutch pillaged Maspalomas, on the southern coast of Gran Canaria, San Sebastián on La Gomera, and Santa Cruz on La Palma, but eventually gave up the siege of Las Palmas and withdrew. 
In 1618 the Barbary pirates from North Africa attacked Lanzarote and La Gomera taking 1000 captives to be sold as slaves. Another noteworthy attack occurred in 1797, when Santa Cruz de Tenerife was attacked by a British fleet under Horatio Nelson on 25 July. The British were repulsed, losing almost 400 men. It was during this battle that Nelson lost his right arm. 18th to 19th century The sugar-based economy of the islands faced stiff competition from Spain's Caribbean colonies. Low sugar prices in the 19th century caused severe recessions on the islands. A new cash crop, cochineal (cochinilla), came into cultivation during this time, reinvigorating the islands' economy. During this time the Canarian-American trade was developed, in which Canarian products such as cochineal, sugarcane and rum were sold in American ports such as Veracruz, Campeche, La Guaira and Havana, among others. By the end of the 18th century, Canary Islanders had already emigrated to Spanish American territories, such as Havana, Veracruz, and Santo Domingo, San Antonio, Texas and St. Bernard Parish, Louisiana. These economic difficulties spurred mass emigration during the 19th and first half of the 20th century, primarily to the Americas. Between 1840 and 1890 as many as 40,000 Canary Islanders emigrated to Venezuela. Also, thousands of Canarians moved to Puerto Rico where the Spanish monarchy felt that Canarians would adapt to island life better than other immigrants from the mainland of Spain. Deeply entrenched traditions, such as the Mascaras Festival in the town of Hatillo, Puerto Rico, are an example of Canarian culture still preserved in Puerto Rico. Similarly, many thousands of Canarians emigrated to the shores of Cuba. During the Spanish–American War of 1898, the Spanish fortified the islands against a possible American attack, but no such event took place. Romantic period and scientific expeditions Sirera and Renn (2004) distinguish two different types of expeditions, or voyages, during the period 1770–1830, which they term "the Romantic period": First are "expeditions financed by the States, closely related with the official scientific Institutions. characterised by having strict scientific objectives (and inspired by) the spirit of Illustration and progress". In this type of expedition, Sirera and Renn include the following travellers: J. Edens, whose 1715 ascent and observations of Mt. Teide influenced many subsequent expeditions. Louis Feuillée (1724), who was sent to measure the meridian of El Hierro and to map the islands. Jean-Charles de Borda (1771, 1776) who more accurately measured the longitudes of the islands and the height of Mount Teide the Baudin-Ledru expedition (1796) which aimed to recover a valuable collection of natural history objects. The second type of expedition identified by Sirera and Renn is one that took place starting from more or less private initiatives. Among these, the key exponents were the following: Alexander von Humboldt (1799) Buch and Smith (1815) Broussonet Webb Sabin Berthelot. Sirera and Renn identify the period 1770–1830 as one in which "In a panorama dominated until that moment by France and England enters with strength and brio Germany of the Romantic period whose presence in the islands will increase". Early 20th century At the beginning of the 20th century, the British introduced a new cash-crop, the banana, the export of which was controlled by companies such as Fyffes. 
30 November 1833 the Province of Canary Islands had been created with the capital being declared as Santa Cruz de Tenerife. The rivalry between the cities of Las Palmas de Gran Canaria and Santa Cruz de Tenerife for the capital of the islands led to the division of the archipelago into two provinces on 23 September 1927. During the time of the Second Spanish Republic, Marxist and anarchist workers' movements began to develop, led by figures such as Jose Miguel Perez and Guillermo Ascanio. However, outside of a few municipalities, these organisations were a minority and fell easily to Nationalist forces during the Spanish Civil War. Franco regime In 1936, Francisco Franco was appointed General Commandant of the Canaries. He joined the military revolt of 17 July which began the Spanish Civil War. Franco quickly took control of the archipelago, except for a few points of resistance on La Palma and in the town of Vallehermoso, on La Gomera. Though there was never a war in the islands, the post-war suppression of political dissent on the Canaries was most severe. During the Second World War, Winston Churchill prepared plans for the British seizure of the Canary Islands as a naval base, in the event of Gibraltar being invaded from the Spanish mainland. The planned operation was known as Operation Pilgrim. Opposition to Franco's regime did not begin to organise until the late 1950s, which experienced an upheaval of parties such as the Communist Party of Spain and the formation of various nationalist, leftist parties. During the Ifni War, the Franco regime set up concentration camps on the islands to extrajudicially imprison those in Western Sahara suspected of disloyalty to Spain, many of whom were colonial troops recruited on the spot but were later deemed to be potential fifth columnists and deported to the Canary Islands. These camps were characterised by the use of forced labour for infrastructure projects and highly unsanitary conditions resulting in the widespread occurrence of tuberculosis. Self-governance After the death of Franco, there was a pro-independence armed movement based in Algeria, the Movement for the Independence and Self-determination of the Canaries Archipelago (MAIAC). In 1968, the Organisation of African Unity recognized the MAIAC as a legitimate African independence movement, and declared the Canary Islands as an African territory still under foreign rule. After the establishment of a democratic constitutional monarchy in Spain, autonomy was granted to the Canaries via a law passed in 1982, with a newly established autonomous devolved government and parliament. In 1983, the first autonomous elections were held. The Spanish Socialist Workers' Party (PSOE) won. In the 2007 elections, the PSOE gained a plurality of seats, but the nationalist Canarian Coalition and the conservative Partido Popular (PP) formed a ruling coalition government. Capitals At present, the Canary Islands is the only autonomous community in Spain that has two capitals: Santa Cruz de Tenerife and Las Palmas de Gran Canaria, since the was created in 1982. The political capital of the archipelago did not exist as such until the nineteenth century. The first cities founded by the Europeans at the time of the conquest of the Canary Islands in the 15th century were: Telde (in Gran Canaria), San Marcial del Rubicón (in Lanzarote) and Betancuria (in Fuerteventura). These cities boasted the first European institutions present in the archipelago, including Catholic bishoprics. 
However, because these cities flourished before the total conquest of the archipelago and its incorporation into the Crown of Castile, they never held real political control over the entire Canary archipelago. A Canarian city only came to exercise jurisdiction over the whole archipelago after the conquest of the Canary Islands, and at first only de facto, that is, without legal standing, by virtue of hosting the headquarters of the Captaincy General of the Canary Islands. Las Palmas de Gran Canaria was the first city to exercise this function, because the residence of the Captain General of the Canary Islands was located there during part of the sixteenth and seventeenth centuries. In May 1661, the Captain General of the Canary Islands, Jerónimo de Benavente y Quiñones, moved the headquarters of the captaincy to the city of San Cristóbal de La Laguna on the island of Tenerife, because since the conquest that island had been the most populated and productive and had the highest economic expectations. La Laguna would be considered the de facto capital of the archipelago until the capital status of Santa Cruz de Tenerife was officially confirmed in the 19th century, due in part to the constant controversies and rivalries between the bourgeoisies of San Cristóbal de La Laguna and Las Palmas de Gran Canaria for the economic, political and institutional hegemony of the archipelago. As early as 1723, the Captain General of the Canary Islands, Lorenzo Fernández de Villavicencio, had moved the headquarters of the Captaincy General from San Cristóbal de La Laguna to Santa Cruz de Tenerife. This decision, too, displeased the society of the island of Gran Canaria. Only after the creation of the Province of Canary Islands in November 1833 did Santa Cruz become the first fully official capital of the Canary Islands (de jure, and not merely de facto as before). Santa Cruz de Tenerife remained the capital of the archipelago until 1927, when, under the government of General Primo de Rivera, the Province of Canary Islands was split into two provinces: Las Palmas, with its capital at Las Palmas de Gran Canaria, and Santa Cruz de Tenerife, with its capital in the city of the same name. Finally, with the Statute of Autonomy of the Canary Islands in 1982 and the creation of the Autonomous Community of the Canary Islands, the capital of the archipelago was fixed as shared between Las Palmas de Gran Canaria and Santa Cruz de Tenerife, as it remains today. Demographics The Canary Islands have a population of 2,153,389 inhabitants (2019), making them the eighth most populous of Spain's autonomous communities. The total area of the archipelago is , resulting in a population density of 287.4 inhabitants per square kilometre. The population of the islands according to the 2019 data is as follows: Tenerife – 917,841 Gran Canaria – 851,231 Lanzarote – 152,289 (including the population of La Graciosa) Fuerteventura – 116,886 La Palma – 82,671 La Gomera – 21,503 El Hierro – 10,968 The Canary Islands have become home to many European residents, mainly from Italy, Germany and the UK. Because of the vast emigration to Venezuela and Cuba during the second half of the 20th century, and the later return of these emigrants and their families to the Canary Islands, there are many residents whose country of origin was Venezuela (66,593) or Cuba (41,807). 
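As a quick arithmetic check, the per-island 2019 figures listed above reproduce the shares quoted in the article's introduction (around 43% of the population on Tenerife and 40% on Gran Canaria). A minimal Python sketch, using only the numbers quoted in this section:

```python
# Share of the 2019 Canary Islands population living on each island,
# computed from the per-island figures quoted above (total: 2,153,389).
population_2019 = {
    "Tenerife": 917_841,
    "Gran Canaria": 851_231,
    "Lanzarote": 152_289,   # including La Graciosa
    "Fuerteventura": 116_886,
    "La Palma": 82_671,
    "La Gomera": 21_503,
    "El Hierro": 10_968,
}

total = sum(population_2019.values())  # 2,153,389, matching the stated total

for island, people in sorted(population_2019.items(), key=lambda kv: -kv[1]):
    print(f"{island:14s} {people:>9,d}  {people / total:6.1%}")
```

Running it gives roughly 42.6% for Tenerife and 39.5% for Gran Canaria, consistent with the rounded figures given in the introduction.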
Since the 1990s, many illegal migrants have reached the Canary Islands, Melilla and Ceuta, using them as entry points to the EU. Religion The Catholic Church has been the majority religion in the archipelago for more than five centuries, ever since the Conquest of the Canary Islands. There are also several other religious communities. Roman Catholic Church The overwhelming majority of native Canarians are Roman Catholic (76.7%) with various smaller foreign-born populations of other Christian beliefs such as Protestants. The appearance of the Virgin of Candelaria (Patron of Canary Islands) was credited with moving the Canary Islands toward Christianity. Two Catholic saints were born in the Canary Islands: Peter of Saint Joseph de Betancur and José de Anchieta. Both born on the island of Tenerife, they were respectively missionaries in Guatemala and Brazil. The Canary Islands are divided into two Catholic dioceses, each governed by a bishop: Diócesis Canariense: Includes the islands of the Eastern Province: Gran Canaria, Fuerteventura and Lanzarote. Its capital was San Marcial El Rubicón (1404) and Las Palmas de Gran Canaria (1483–present). There was a previous bishopric which was based in Telde, but it was later abolished. Diócesis Nivariense: Includes the islands of the western province: Tenerife, La Palma, La Gomera and El Hierro. Its capital is San Cristóbal de La Laguna (1819–present). Other religions Separate from the overwhelming Christian majority are a minority of Muslims. Among the followers of Islam, the Islamic Federation of the Canary Islands exists to represent the Islamic community in the Canary Islands as well as to provide practical support to members of the Islamic community. For its part, there is also the Evangelical Council of the Canary Islands in the archipelago. Other religious faiths represented include Jehovah's Witnesses, The Church of Jesus Christ of Latter-day Saints as well as Hinduism. Minority religions are also present such as the Church of the Guanche People which is classified as a neo-pagan native religion. Also present are Buddhism, Judaism, Baháʼí, African religion, and Chinese religions. According to Statista in 2019, there are 75,662 Muslims in Canary Islands. Statistics The distribution of beliefs in 2012 according to the CIS Barometer Autonomy was as follows: Catholic 84.9% Atheist/Agnostic/Unbeliever 12.3% Other religions 1.7% Population genetics Islands Ordered from west to east, the Canary Islands are El Hierro, La Palma, La Gomera, Tenerife, Gran Canaria, Fuerteventura, and Lanzarote. In addition, north of Lanzarote are the islets of La Graciosa, Montaña Clara, Alegranza, Roque del Este and Roque del Oeste, belonging to the Chinijo Archipelago, and northeast of Fuerteventura is the islet of Lobos. There are also a series of small adjacent rocks in the Canary Islands: the Roques de Anaga, Garachico and Fasnia in Tenerife, and those of Salmor and Bonanza in El Hierro. El Hierro El Hierro, the westernmost island, covers , making it the second smallest of the major islands, and the least populous with 10,798 inhabitants. The whole island was declared Reserve of the Biosphere in 2000. Its capital is Valverde. Also known as Ferro, it was once believed to be the westernmost land in the world. Fuerteventura Fuerteventura, with a surface of , is the second largest island of the archipelago. It has been declared a biosphere reserve by UNESCO. It has a population of 113,275. The oldest of the islands, it is more eroded. 
Its highest point is the Peak of the Bramble, at a height of . Its capital is Puerto del Rosario. Gran Canaria Gran Canaria has 846,717 inhabitants. The capital, Las Palmas de Gran Canaria (377,203 inhabitants), is the most populous city and shares the status of capital of the Canaries with Santa Cruz de Tenerife. Gran Canaria's surface area is . Roque Nublo and Pico de las Nieves ("Peak of Snow") are located in the center of the island. On the south of the island are the Maspalomas Dunes (Gran Canaria). La Gomera La Gomera has an area of and is the second least populous island with 21,136 inhabitants. Geologically it is one of the oldest of the archipelago. The insular capital is San Sebastian de La Gomera. Garajonay National Park is located on the island. Lanzarote Lanzarote is the easternmost island and one of the oldest of the archipelago, and it has shown evidence of recent volcanic activity. It has a surface of , and a population of 149,183 inhabitants, including the adjacent islets of the Chinijo Archipelago. The capital is Arrecife, with 56,834 inhabitants. Chinijo Archipelago The Chinijo Archipelago includes the islands La Graciosa, Alegranza, Montaña Clara, Roque del Este and Roque del Oeste. It has a surface of , and only La Graciosa is populated, with 658 inhabitants. With , La Graciosa, is the largest island of the Chinijo Archipelago but also the smallest inhabited island of the Canaries. La Palma La Palma, with 81,863 inhabitants covering an area of , is in its entirety a biosphere reserve. For long it showed no signs of volcanic activity, even though the volcano Teneguía entered into eruption last in 1971. On September 19, 2021, the volcanic Cumbre Vieja on the island erupted. It is the second-highest island of the Canaries, with the Roque de los Muchachos at as its highest point. Santa Cruz de La Palma (known to those on the island as simply "Santa Cruz") is its capital. Tenerife Tenerife is, with its area of , the most extensive island of the Canary Islands. In addition, with 904,713 inhabitants it is the most populated island of the archipelago and Spain. Two of the islands' principal cities are located on it: the capital, Santa Cruz de Tenerife and San Cristóbal de La Laguna (a World Heritage Site). San Cristóbal de La Laguna, the second city of the island is home to the oldest university in the Canary Islands, the University of La Laguna. Teide, with its is the highest peak of Spain and also a World Heritage Site. Tenerife is the site of the worst air disaster in the history of aviation, in which 583 people were killed in the collision of two Boeing 747s on 27 March 1977. La Graciosa Graciosa Island or commonly La Graciosa is a volcanic island in the Canary Islands of Spain, located 2 km (1.2 mi) north of the island of Lanzarote across the Strait of El Río. It was formed by the Canary hotspot. The island is part of the Chinijo Archipelago and the Chinijo Archipelago Natural Park (Parque Natural del Archipiélago Chinijo). It is administered by the municipality of Teguise. In 2018 La Graciosa officially became the eighth Canary Island. Before then, La Graciosa had the status of an islet, administratively dependent on the island of Lanzarote. It is the smallest and least populated of the main islands, with a population of about 700 people. Data Economy and environment The economy is based primarily on tourism, which makes up 32% of the GDP. The Canaries receive about 12 million tourists per year. 
Construction makes up nearly 20% of the GDP, and tropical crops, primarily bananas and tobacco, are grown for export to Europe and the Americas. Ecologists are concerned that resources, especially on the more arid islands, are being overexploited, but there are still many agricultural resources like tomatoes, potatoes, onions, cochineal, sugarcane, grapes, vines, dates, oranges, lemons, figs, wheat, barley, maize, apricots, peaches and almonds. Water resources are also being overexploited, due to the high water usage by tourists. Also, some islands (such as Gran Canaria and Tenerife) overexploit the ground water, to such a degree that, according to European and Spanish legal regulations, the current situation is not acceptable. To address the problems, good governance and a change in the water use paradigm have been proposed. These solutions depend largely on controlling water use and on demand management. As this is administratively difficult and politically unpalatable, most action is currently directed at increasing the public water supply through imports from outside the islands, a decision which is economically, politically and environmentally questionable. To bring in revenue for environmental protection, innovation, training and water sanitation, a tourist tax was considered in 2018, along with a doubling of the ecotax and restrictions on holiday rents in the zones with the greatest pressure of demand. The economy was worth €25 billion (2001 GDP figures). The islands experienced continuous growth during a 20-year period, up until 2001, at a rate of approximately 5% annually. This growth was fueled mainly by huge amounts of foreign direct investment, mostly to develop tourism real estate (hotels and apartments), and by European Funds (nearly €11 billion in the period from 2000 to 2007), since the Canary Islands are labelled Region Objective 1 (eligible for euro structural funds). Additionally, the EU allows the Canary Islands Government to offer special tax concessions for investors who incorporate under the Zona Especial Canaria (ZEC) regime and create more than five jobs. Spain gave permission in August 2014 for Repsol and its partners to explore oil and natural gas prospects off the Canary Islands, involving an investment of €7.5 billion over four years, to commence at the end of 2016. Repsol at the time said the area could ultimately produce 100,000 barrels of oil a day, which would meet 10 percent of Spain's energy needs. However, the analysis of samples obtained did not show the necessary volume or quality to consider future extraction, and the project was scrapped. Although the islands currently depend heavily on fossil fuels, research on their renewable energy potential has concluded that a high potential for renewable energy technologies exists on the archipelago, to the extent that a scenario pathway to a 100% renewable energy supply by 2050 has been put forward. The Canary Islands' natural attractions, climate and beaches make the islands a major tourist destination, visited each year by about 12 million people (11,986,059 in 2007, of whom 29% were Britons, 22% Spaniards from outside the Canaries, and 21% Germans). Among the islands, Tenerife receives the largest number of tourists annually, followed by Gran Canaria and Lanzarote. The archipelago's principal tourist attraction is the Teide National Park (in Tenerife), where Mount Teide, the highest mountain in Spain and third largest volcano in the world, receives over 2.8 million visitors annually. 
The combination of high mountains, proximity to Europe, and clean air has made the Roque de los Muchachos peak (on La Palma island) a leading location for telescopes like the Grantecan. The islands, as an autonomous region of Spain, are in the European Union and the Schengen Area. They are in the European Union Customs Union but outside the VAT area. Instead of VAT there is a local Sales Tax (IGIC), which has a general rate of 7%, an increased tax rate of 13.5%, a reduced tax rate of 3% and a zero tax rate for certain basic need products and services. Consequently, some products are subject to additional VAT if exported from the islands into mainland Spain or the rest of the EU. Canarian time is Western European Time (WET) (or GMT; in summer one hour ahead of GMT). Canarian time is thus one hour behind that of mainland Spain and the same as that of the UK, Ireland and mainland Portugal all year round. Tourism statistics The Canary Islands received 16,150,054 tourists in 2018 and 15,589,290 in 2019. GDP statistics The Gross Domestic Product (GDP) in the Canary Islands in 2015 was , per capita. The figures by island are as follows: Transport The Canary Islands have eight airports altogether, two of the main ports of Spain, and an extensive network of autopistas (highways) and other roads. Traffic congestion is sometimes a problem on Tenerife and on Gran Canaria. Large ferry boats and fast ferries link most of the islands. Both types can transport large numbers of passengers, cargo, and vehicles. Fast ferries are made of aluminium and powered by modern and efficient diesel engines, while conventional ferries have a steel hull and are powered by heavy oil. Fast ferries travel in excess of ; conventional ferries travel in excess of , but are slower than fast ferries. A typical ferry ride between La Palma and Tenerife may take up to eight hours or more, while a fast ferry takes about two and a half hours; between Tenerife and Gran Canaria the crossing can be about one hour. The largest airport is the Gran Canaria Airport. Tenerife has two airports, Tenerife North Airport and Tenerife South Airport. The island of Tenerife handles the highest passenger traffic of all the Canary Islands through its two airports. The two main islands (Tenerife and Gran Canaria) receive the greatest number of passengers: Tenerife 6,204,499 passengers and Gran Canaria 5,011,176 passengers. The port of Las Palmas is first in freight traffic in the islands, while the port of Santa Cruz de Tenerife is the leading fishing port, with approximately 7,500 tons of fish caught, according to the Spanish government publication Statistical Yearbook of State Ports. It is also the second port in Spain in terms of ship traffic, surpassed only by the Port of Algeciras Bay. The port's facilities include a border inspection post (BIP) approved by the European Union, which is responsible for inspecting all types of imports from third countries or exports to countries outside the European Economic Area. The port of Los Cristianos (Tenerife) has the greatest number of passengers recorded in the Canary Islands, followed by the port of Santa Cruz de Tenerife. The Port of Las Palmas is the third port in the islands in passengers and first in number of vehicles transported. The SS America was beached in the Canary Islands on 18 January 1994; the ocean liner broke apart over the course of several years and eventually sank beneath the surface. 
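For illustration only, the IGIC rates quoted above (7% general, 13.5% increased, 3% reduced, 0% zero-rated) can be applied to a net price as in the minimal Python sketch below; the example price and the category labels are invented for this sketch, and which goods fall under which rate is beyond the scope of this article:

```python
# Illustrative sketch only: applies the IGIC rates quoted in this article
# to a hypothetical net price. Category names are informal labels, not
# official classifications.
IGIC_RATES = {
    "general": 0.07,
    "increased": 0.135,
    "reduced": 0.03,
    "zero": 0.0,
}

def price_with_igic(net_price: float, category: str) -> float:
    """Return the gross price after adding IGIC for the given rate category."""
    return round(net_price * (1 + IGIC_RATES[category]), 2)

if __name__ == "__main__":
    for category in IGIC_RATES:
        # 100.00 is an arbitrary example net price.
        print(f"net 100.00 -> gross {price_with_igic(100.00, category):.2f} ({category})")
```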
Rail transport The Tenerife Tram opened in 2007 and is currently the only one in the Canary Islands, travelling between the cities of Santa Cruz de Tenerife and San Cristóbal de La Laguna. Three more railway lines are being planned for the Canary Islands: Airports Tenerife South Airport – Tenerife Tenerife North Airport – Tenerife César Manrique-Lanzarote Airport – Lanzarote Fuerteventura Airport – Fuerteventura Gran Canaria Airport – Gran Canaria La Palma Airport – La Palma La Gomera Airport – La Gomera El Hierro Airport – El Hierro Ports Port of Puerto del Rosario – Fuerteventura Port of Arrecife – Lanzarote Port of Playa Blanca—Lanzarote Port of Santa Cruz de La Palma – La Palma Port of San Sebastián de La Gomera – La Gomera Port of La Estaca – El Hierro Port of Las Palmas – Gran Canaria Port of Arinaga – Gran Canaria Port of Agaete – Gran Canaria Port of Los Cristianos – Tenerife Port of Santa Cruz de Tenerife – Tenerife Port of Garachico – Tenerife Port of Granadilla – Tenerife Health The Servicio Canario de Salud is an autonomous body of administrative nature attached to the Ministry responsible for Health of the Government of the Canary Islands. The majority of the archipelago's hospitals belong to this organization: Hospital Nuestra Señora de los Reyes – El Hierro Hospital General de La Palma – La Palma Hospital Nuestra Señora de Guadalupe – La Gomera Hospital Universitario Nuestra Señora de Candelaria – Tenerife Hospital Universitario de Canarias – Tenerife Hospital del Sur de Tenerife – Tenerife Hospital del Norte de Tenerife – Tenerife Hospital Universitario de Gran Canaria Doctor Negrín – Gran Canaria Hospital Universitario Insular de Gran Canaria – Gran Canaria Hospital General de Lanzarote Doctor José Molina Orosa – Lanzarote Hospital General de Fuerteventura – Fuerteventura Wildlife Extinct fauna The Canary Islands were previously inhabited by a variety of endemic animals, such as extinct giant lizards (Gallotia goliath), giant tortoises (Centrochelys burchardi and C. vulcanica), and Tenerife and Gran Canaria giant rats (Canariomys bravoi and C. tamarani), among others. Extinct birds known only from Pleistocene and Holocene age bones include the Canary Islands quail (Coturnix gomerae), Dune shearwater (Puffinus holeae), Lava shearwater (P. olsoni), Trias greenfinch (Chloris triasi), Slender-billed greenfinch (C. aurelioi) and the Long-legged bunting (Emberiza alcoveri). Current fauna The bird life includes European and African species, such as the black-bellied sandgrouse, Canary, Graja, a subspecies of red-billed chough endemic to La Palma, Gran Canaria blue chaffinch, Tenerife blue chaffinch, Canary Islands chiffchaff, Fuerteventura chat, Tenerife goldcrest, La Palma chaffinch, Canarian Egyptian vulture, Bolle's pigeon, Laurel pigeon, Plain swift, and Houbara bustard. Terrestrial fauna includes the El Hierro giant lizard, La Gomera giant lizard, and the La Palma giant lizard. Mammals include the Canarian shrew, Canary big-eared bat, the Algerian hedgehog, and the more recently introduced mouflon. Marine life The marine life found in the Canary Islands is also varied, being a combination of North Atlantic, Mediterranean and endemic species. In recent years, the increasing popularity of both scuba diving and underwater photography have provided biologists with much new information on the marine life of the islands. Fish species found in the islands include many species of shark, ray, moray eel, bream, jack, grunt, scorpionfish, triggerfish, grouper, goby, and blenny. 
In addition, there are many invertebrate species, including sponge, jellyfish, anemone, crab, mollusc, sea urchin, starfish, sea cucumber and coral. A total of five species of marine turtle are sighted periodically in the islands, the most common being the endangered loggerhead sea turtle. The other four are the green sea turtle, hawksbill sea turtle, leatherback sea turtle and Kemp's ridley sea turtle. Currently, there are no signs that any of these species breed in the islands, so those seen in the water are usually migrating. However, it is believed that some of these species may have bred in the islands in the past, and there are records of several sightings of leatherback sea turtles on beaches in Fuerteventura, adding credibility to the theory. Marine mammals include a wide variety of cetaceans, among them rare and little-known species (see more details in the Marine life of the Canary Islands). Hooded seals have also been recorded in the Canary Islands as occasional vagrants. The Canary Islands were also formerly home to a population of the rarest pinniped in the world, the Mediterranean monk seal. Native flora gallery Holidays Of the holidays celebrated in the Canary Islands, some are international or national, others are regional, and others are of insular character. The official day of the autonomous community is Canary Islands Day on 30 May. It commemorates the anniversary of the first session of the Parliament of the Canary Islands, based in the city of Santa Cruz de Tenerife, held on 30 May 1983. The common festive calendar throughout the Canary Islands is as follows: In addition, each of the islands has an island festival, observed as a holiday only on that specific island. These are the festivities of each island's patron saint. Organized chronologically, they are: The most famous festival of the Canary Islands is the carnival, the most international celebration of the archipelago. The carnival is celebrated on all the islands and in all their municipalities; perhaps the two busiest are those of the two Canarian capitals: the Carnival of Santa Cruz de Tenerife (a Tourist Festival of International Interest) and the Carnival of Las Palmas de Gran Canaria. It is celebrated in the streets between February and March. The other islands of the archipelago have carnivals with their own traditions, among which stand out the Festival of the Carneros of El Hierro, the Festival of the Diabletes of Teguise in Lanzarote, Los Indianos de La Palma, the Carnival of San Sebastián de La Gomera and the Carnival of Puerto del Rosario in Fuerteventura. Science and technology In the 1960s, Gran Canaria was selected as the location for one of the 14 ground stations in the Manned Space Flight Network (MSFN) to support the NASA space program. Maspalomas Station, located in the south of the island, took part in a number of space missions, including the Apollo 11 Moon landing and Skylab. Today it continues to support satellite communications as part of the ESA network. Because of the remote location, a number of astronomical observatories are located in the archipelago, including the Teide Observatory on Tenerife, the Roque de los Muchachos Observatory on La Palma, and the Temisas Astronomical Observatory on Gran Canaria. Tenerife is the home of the Instituto de Astrofísica de Canarias (Astrophysical Institute of the Canaries). 
There is also an Instituto de Bio-Orgánica Antonio González (Antonio González Bio-Organic Institute) at the University of La Laguna. Also at that university are the Instituto de Lingüística Andrés Bello (Andrés Bello Institute of Linguistics), the Centro de Estudios Medievales y Renacentistas (Center for Medieval and Renaissance Studies), the Instituto Universitario de la Empresa (University Institute of Business), the Instituto de Derecho Regional (Regional Institute of Law), the Instituto Universitario de Ciencias Políticas y Sociales (University Institute of Political and Social Sciences) and the Instituto de Enfermedades Tropicales (Institute of Tropical Diseases). The latter is one of the seven institutions of the Red de Investigación de Centros de Enfermedades Tropicales (RICET, "Network of Research of Centers of Tropical Diseases"), located in various parts of Spain. The Instituto Volcanológico de Canarias (Volcanological Institute of the Canary Islands) is based in Tenerife. Sports A unique form of wrestling known as Canarian wrestling (lucha canaria) has opponents stand in a special area called a "terrero" and try to throw each other to the ground using strength and quick movements. Another sport is the "game of the sticks" (palo canario), in which opponents fence with long sticks. This may have come about from the shepherds of the islands, who would challenge each other using their long walking sticks. Furthermore, there is the shepherd's jump (salto del pastor), which involves using a long stick to vault over an open area. This sport possibly evolved from the shepherds' occasional need to cross open areas in the hills while tending their sheep. The two main football teams in the archipelago are CD Tenerife (founded in 1912) and UD Las Palmas (founded in 1949). As of the 2023/2024 season, UD Las Palmas plays in La Liga, the top tier of Spanish football, while CD Tenerife plays in the Segunda División. When in the same division, the clubs contest the Canary Islands derby. Smaller clubs also play in the mainland Spanish football league system, most notably UD Lanzarote and CD Laguna, although no other Canarian clubs have played in the top flight. The mountainous terrain of the Canary Islands also caters to the growing popularity of ultra running and ultramarathons, with the islands hosting annual competitive long-distance events including the CajaMar Tenerife Bluetrail on Tenerife, Transvulcania on La Palma, Transgrancanaria on Gran Canaria, and the Half Marathon des Sables on Fuerteventura. A yearly Ironman Triathlon has taken place on Lanzarote since 1992. Notable athletes Paco Campos (1916–1995), a footballer who played as a forward. With 127 goals, 120 of them for Atlético Madrid, he is the highest-scoring player from the Canary Islands in La Liga. Nicolás García Hemme, born 20 June 1988 in Las Palmas de Gran Canaria, taekwondo silver medalist in the men's welterweight category (−80 kg) at the 2012 London Olympics. Alfredo Cabrera (1881–1964), shortstop for the St. Louis Cardinals in 1913. Sergio Rodríguez, born in San Cristóbal de La Laguna in 1986, played point guard for the Portland Trail Blazers, Sacramento Kings, and New York Knicks. David Silva, born in Arguineguín in 1986, plays association football for Real Sociedad and was a member of the Spain national team that won the 2010 FIFA World Cup. Juan Carlos Valerón, born in Arguineguín in 1975, played association football for Deportivo de La Coruña and Las Palmas. 
Pedro, born in Santa Cruz de Tenerife in 1987, plays association football for Lazio, member of the 2010 FIFA World Cup champion Spain national football team Carla Suárez Navarro, born in Las Palmas de Gran Canaria in 1988, professional tennis player Paola Tirados, born in Las Palmas de Gran Canaria in 1980, synchronized swimmer, who participated in the Olympic Games of 2000, 2004 and 2008. She won the silver medal in Beijing in 2008 in the team competition category. Jesé, born in Las Palmas de Gran Canaria in 1993, plays association football for Las Palmas. Christo Bezuidenhout, born in Tenerife in 1970, played rugby union for Gloucester and South Africa. Pedri, born in Tegueste in 2002, plays association football for Barcelona. See also History Battle of Santa Cruz de Tenerife (1797) First Battle of Acentejo Pyramids of Güímar Second Battle of Acentejo Tanausu Tenerife airport disaster; the deadliest commercial aviation disaster in history. Geography Cumbre Vieja, a volcano on La Palma Guatiza (Lanzarote) La Matanza de Acentejo Los Llanos de Aridane Orotava Valley San Andrés Islands of Macaronesia Azores Madeira Cabo Verde Culture Canarian cuisine Canarian Spanish Religion in Canary Islands Isleños Military of the Canary Islands Music of the Canary Islands Silbo Gomero, a whistled language, is an indigenous variant of Spanish Tortilla canaria Virgin of Candelaria (Patron saint of Canary Islands) References Notes Citations Sources Alfred Crosby, Ecological Imperialism: The Biological Expansion of Europe, 900–1900 (Cambridge University Press) Felipe Fernández-Armesto, The Canary Islands after the Conquest: The Making of a Colonial Society in the Early-Sixteenth Century, Oxford U. Press, 1982. ; Sergio Hanquet, Diving in Canaries, Litografía A. ROMERO, 2001. Martin Wiemers: The butterflies of the Canary Islands. – A survey on their distribution, biology and ecology (Lepidoptera: Papilionoidea and Hesperioidea) – Linneana Belgica 15 (1995): 63–84 & 87–118 Further reading * External links Canary Islands Government Official Tourism Website of the Canary Islands Cloud vortices near the Canaries, March 2023 NASA Earth Observatory POTD for April 15, 2023 Archipelagoes of Spain Autonomous communities of Spain Archipelagoes of Africa North Africa NUTS 1 statistical regions of the European Union NUTS 2 statistical regions of the European Union Outermost regions of the European Union Physiographic sections
5718
https://en.wikipedia.org/wiki/Chuck%20D
Chuck D
Carlton Douglas Ridenhour (born August 1, 1960), known professionally as Chuck D, is an American rapper, best known as the leader and frontman of the hip hop group Public Enemy, which he co-founded in 1985 with Flavor Flav. Chuck D is also a member of the rock supergroup Prophets of Rage. He has released several solo albums, most notably Autobiography of Mistachuck (1996). His work with Public Enemy helped create politically and socially conscious hip hop music in the mid-1980s. The Source ranked him at No. 12 on its list of the Top 50 Hip-Hop Lyricists of All Time. Chuck D has been nominated for six Grammys throughout his career, and has received the Grammy Lifetime Achievement Award as a member of Public Enemy. He was also inducted into the Rock and Roll Hall of Fame in 2013 as a member of Public Enemy. Early life Ridenhour was born on August 1, 1960, on Long Island, New York. When he was a child, his mother played Motown and showtunes in the home and his father belonged to the Columbia Record Club. He began writing lyrics after the New York City blackout of 1977. He attended W. Tresper Clarke High School, where he was offered no formal education in music. He then went to Adelphi University on Long Island to study graphic design, where he met William Drayton (Flavor Flav). He received a Bachelor of Fine Arts from Adelphi in 1984 and later received an honorary doctorate from Adelphi in 2013. While at Adelphi, Ridenhour co-hosted hip hop radio show the Super Spectrum Mix Hour as Chuck D on Saturday nights at Long Island rock radio station WLIR, designed flyers for local hip-hop events, and drew a cartoon called Tales of the Skind for Adelphi student newspaper The Delphian. Career Ridenhour (using the nickname Chuck D) formed Public Enemy in 1985 with Flavor Flav. Upon hearing Ridenhour's demo track "Public Enemy Number One", fledgling producer/upcoming music-mogul Rick Rubin insisted on signing him to his Def Jam Records. Their major label releases were Yo! Bum Rush the Show (1987), It Takes a Nation of Millions to Hold Us Back (1988), Fear of a Black Planet (1990), Apocalypse 91... The Enemy Strikes Black (1991), the compilation album Greatest Misses (1992), and Muse Sick-n-Hour Mess Age (1994). They also released a full-length album soundtrack for the film He Got Game in 1998. Ridenhour also contributed (as Chuck D) to several episodes of the documentary series The Blues. He has appeared as a featured artist on many other songs and albums, having collaborated with artists such as Janet Jackson, Kool Moe Dee, The Dope Poet Society, Run–D.M.C., Ice Cube, Boom Boom Satellites, Rage Against the Machine, Anthrax, John Mellencamp and many others. In 1990, he appeared on "Kool Thing", a song by the alternative rock band Sonic Youth, and along with Flavor Flav, he sang on George Clinton's song "Tweakin'", which appears on his 1989 album The Cinderella Theory. In 1993, he was the executive producer for Got 'Em Running Scared, an album by Ichiban Records group Chief Groovy Loo and the Chosen Tribe. Later career In 1996, Ridenhour released Autobiography of Mistachuck on Mercury Records. Chuck D made a rare appearance at the 1998 MTV Video Music Awards, presenting the Video Vanguard Award to the Beastie Boys, commending their musicianship. In November 1998, he settled out of court with Christopher "The Notorious B.I.G." Wallace's estate over the latter's sampling of his voice in the song "Ten Crack Commandments". 
The specific sampling is Ridenhour counting off the numbers one to nine on the track "Shut 'Em Down". He later described the decision to sue as "stupid". In September 1999, he launched a multi-format "supersite" on the web site Rapstation.com. The site includes a TV and radio station with original programming, prominent hip hop DJs, celebrity interviews, free MP3 downloads (the first was contributed by rapper Coolio), downloadable ringtones by ToneThis, social commentary, current events, and regular features on turning rap careers into a viable living. Since 2000, he has been one of the most vocal supporters of peer-to-peer file sharing in the music industry. He loaned his voice to Grand Theft Auto: San Andreas as DJ Forth Right MC for the radio station Playback FM. In 2000, he collaborated with Public Enemy's Gary G-Whiz and MC Lyte on the theme music to the television show Dark Angel. He appeared with Henry Rollins in a cover of Black Flag's "Rise Above" for the album Rise Above: 24 Black Flag Songs to Benefit the West Memphis Three. In 2003, he was featured in the PBS documentary Godfathers and Sons in which he recorded a version of Muddy Waters' song "Mannish Boy" with Common, Electrik Mud Cats, and Kyle Jason. He was also featured on Z-Trip's album Shifting Gears on a track called "Shock and Awe"; a 12-inch of the track was released featuring artwork by Shepard Fairey. In 2008 he contributed a chapter to Sound Unbound: Sampling Digital Music and Culture (The MIT Press, 2008) edited by Paul D. Miller a.k.a. DJ Spooky, and also turned up on The Go! Team's album Proof of Youth on the track "Flashlight Fight." He also fulfilled his childhood dreams of being a sports announcer by performing the play-by-play commentary in the video game NBA Ballers: Chosen One on Xbox 360 and PlayStation 3. In 2009, Ridenhour wrote the foreword to the book The Love Ethic: The Reason Why You Can't Find and Keep Beautiful Black Love by Kamau and Akilah Butler. He also appeared on Brother Ali's album Us. In March 2011, Chuck D re-recorded vocals with The Dillinger Escape Plan for a cover of "Fight the Power". Chuck D duetted with Rock singer Meat Loaf on his 2011 album Hell in a Handbasket on the song "Mad Mad World/The Good God Is a Woman and She Don't Like Ugly". In 2016 Chuck D joined the band Prophets of Rage along with B-Real and former members of Rage Against the Machine. In July 2019, Ridenhour sued Terrordome Music Publishing and Reach Music Publishing for $1 million for withholding royalties. In 2023, Chuck D released a four-part documentary on PBS entitled "Fight the Power: How Hip Hop Changed the World." Rapping technique and creative process Chuck D is known for his powerful rapping. How to Rap says he "has a powerful, resonant voice that is often acclaimed as one of the most distinct and impressive in hip-hop". Chuck says this was based on listening to Melle Mel and sportscasters such as Marv Albert. Chuck often comes up with a title for a song first. He writes on paper, though sometimes edits using a computer. He prefers to not punch in or overdub vocals. Chuck listed his favourite rap albums in Hip Hop Connection in March 2000: N.W.A, Straight Outta Compton Boogie Down Productions, Criminal Minded Run-DMC, Tougher Than Leather Big Daddy Kane, Looks Like a Job For... Stetsasonic, In Full Gear Ice Cube, AmeriKKKa's Most Wanted Dr. Dre, The Chronic De La Soul, 3 Feet High and Rising Eric B. 
& Rakim, Follow the Leader Run-DMC, Raising Hell ("It was the first record that made me realise this was an album-oriented genre") Politics Chuck D identifies as Black, as opposed to African or African-American. In a 1993 issue of DIRT Magazine covering a taping of In the Mix hosted by Alimi Ballard at the Apollo, Dan Field writes, At one point, Chuck bristles a bit at the term "African-American." He thinks of himself as Black and sees nothing wrong with the term. Besides, he says, having been born in the United States and lived his whole life here, he doesn't consider himself African. Being in Public Enemy has given him the chance to travel around the world, an experience that really opened his eyes and his mind. He says visiting Africa and experiencing life on a continent where the majority of people are Black gave him a new perspective and helped him get in touch with his own history. He also credits a trip to the ancient Egyptian pyramids at Giza with helping him appreciate the relative smallness of man. Ridenhour is politically active; he co-hosted Unfiltered on Air America Radio, testified before the United States Congress in support of peer-to-peer MP3 sharing, and was involved in a 2004 rap political convention. He has continued to be an activist, publisher, lecturer, and producer. Addressing the negative views associated with rap music, he co-wrote the essay book Fight the Power: Rap, Race, and Reality with Yusuf Jah. He argues that "music and art and culture is escapism, and escapism sometimes is healthy for people to get away from reality", but sometimes the distinction is blurred and that's when "things could lead a young mind in a direction." He also founded the record company Slam Jamz and acted as narrator in Kareem Adouard's short film Bling: Consequences and Repercussions, which examines the role of conflict diamonds in bling fashion. Despite Chuck D and Public Enemy's success, Chuck D claims that popularity or public approval was never a driving motivation behind their work. He is admittedly skeptical of celebrity status, revealing in a 1999 interview with BOMB Magazine that "The key for the record companies is to just keep making more and more stars, and make the ones who actually challenge our way of life irrelevant. The creation of celebrity has clouded the minds of most people in America, Europe and Asia. It gets people off the path they need to be on as individuals." In an interview with Le Monde, published January 29, 2008, Chuck D stated that rap is devolving so much into a commercial enterprise, that the relationship between the rapper and the record label is that of slave to a master. He believes that nothing has changed for African-Americans since the debut of Public Enemy and, although he thinks that an Obama-Clinton alliance is great, he does not feel that the establishment will allow anything of substance to be accomplished. He stated that French President Nicolas Sarkozy is like any other European elite: he has profited through the murder, rape, and pillaging of those less fortunate and he refuses to allow equal opportunity for those men and women from Africa. In this article, he defended a comment made by Professor Griff in the past that he says was taken out of context by the media. The real statement was a critique of the Israeli government and its treatment of the Palestinian people. Chuck D stated that it is Public Enemy's belief that all human beings are equal. 
In an interview with the magazine N'Digo published in June 2008, he spoke of today's mainstream urban music seemingly relishing the addictive euphoria of materialism and sexism, perhaps being the primary cause of many people harboring resentment towards the genre and its future. However, he has expressed hope for its resurrection, saying "It's only going to be dead if it doesn't talk about the messages of life as much as the messages of death and non-movement", citing artists such as NYOil, M.I.A. and The Roots as socially conscious artists who push the envelope creatively. "A lot of cats are out there doing it, on the Web and all over. They're just not placing their career in the hands of some major corporation." In 2010, Chuck D released the track "Tear Down That Wall." He said "I talked about the wall not only just dividing the U.S. and Mexico but the states of California, New Mexico and Texas. But Arizona, it's like, come on. Now they're going to enforce a law that talks about basically racial profiling." He is on the board of the TransAfrica Forum, a Pan African organization that is focused on African, Caribbean and Latin American issues. He has been an activist with projects of The Revcoms, such as Refuse Fascism and Stop Mass Incarceration Network. Carl Dix interviewed Chuck D on The Revcoms' YouTube program The RNL – Revolution, Nothing Less! – Show. In 2022, he endorsed Conrad Tillard, formerly the Nation of Islam Minister known as Conrad Muhammad and subsequently a Baptist Minister, in his campaign for New York State Senate in District 25 (covering part of eastern and north-central Brooklyn). Personal life Chuck D has claimed on Twitter to be a maternal great-grandson of architect George Washington Foster. As of June 2023, he has three children aged 34, 30, and 10. The two oldest by his first ex-wife Deborah McClendon and the youngest by his ex-wife Gaye Theresa Johnson. Chuck D lives in California and lost his home in the Thomas Fire that occurred from December 2017 to January 2018. TV appearances Narrated and appeared on-camera for the 2005 PBS documentary Harlem Globetrotters: The Team That Changed the World. Appeared on-camera for the PBS program Independent Lens: Hip-Hop: Beyond Beats and Rhymes. Appeared in an episode of NewsRadio as himself. He appeared on The Henry Rollins Show. He was a featured panelist (with Lars Ulrich) on the May 12, 2000, episode of the Charlie Rose show. Host Charlie Rose was discussing the Internet, copyright infringement, Napster Inc., and the future of the music industry. He appeared on an episode of Space Ghost Coast to Coast with Pat Boone. While there, Space Ghost tried (and failed) to show he was "hip" to rap, saying his favorite rapper was M. C. Escher. He appeared on an episode of Johnny Bravo. He appeared via satellite to the UK, as a panelist on BBC's Newsnight on January 20, 2009, following Barack Obama's Inauguration. He appeared on a Christmas episode of Adult Swim's Aqua Teen Hunger Force. He appeared on VH1 Ultimate Albums Blood Sugar Sex Magik talking about the Red Hot Chili Peppers. He appeared on Foo Fighters: Sonic Highways in the episode talking about the beginnings of the hip-hop scene in New York City Music appearances In 1990, Chuck featured on Sonic Youth single Kool Thing. In 1993, Chuck rapped on "New Agenda" from Janet Jackson's janet. "I loved his work, but I'd never met him," said Jackson. "I called Chuck up and told him how much I admired their work. 
When I hear Chuck, it's like I'm hearing someone teaching, talking to a whole bunch of people. And instead of just having the rap in the bridge, as usual, I wanted him to do stuff all the way through. I sent him a tape. He said he loved the song, but he was afraid he was going to mess it up. I said 'Are you kidding?'" In 1999, Chuck D appeared on Prince's hit "Undisputed" on the album Rave Un2 the Joy Fantastic. In 2001, Chuck D appeared on the Japanese electronic duo Boom Boom Satellites track "Your Reality's a Fantasy but Your Fantasy Is Killing Me" on the album Umbra. In 2001, Chuck D provided vocals for Public Domain's Rock Da Funky Beats. In 2010, Chuck D made an appearance on the track "Transformação" (Portuguese for "Transformation") from Brazilian rapper MV Bill's album Causa E Efeito (meaning Cause and Effect). In 2003 he was featured on the track "Access to the Excess" in Junkie XL's album Radio JXL: A Broadcast from the Computer Hell Cabin. In 2011 Chuck D made an appearance on the track "Mad Mad World/The Good God Is a Woman and She Don't Like Ugly" from Meat Loaf's 2011 album Hell in a Handbasket. In 2013, he has appeared in Mat Zo's single "Pyramid Scheme". In 2013 he performed at the Rock and Roll Hall of Fame Music Masters concert tribute to The Rolling Stones. In 2014 he performed with Jahi on "People Get Ready" and "Yo!" from the first album by Public Enemy spin-off project PE 2.0. In 2016 he appeared in ASAP Ferg's album "Always Strive and Prosper" on the track "Beautiful People". In 2017 he was featured on the track "America" on Logic's album "Everybody". In 2019, he appeared on "Story of Everything", a song on Threads, an album by Sheryl Crow. The track also features Andra Day and Gary Clark Jr. Discography with Public Enemy Studio albums Yo! Bum Rush the Show (1987) It Takes a Nation of Millions to Hold Us Back (1988) Fear of a Black Planet (1990) Apocalypse 91... The Enemy Strikes Black (1991) Muse Sick-n-Hour Mess Age (1994) He Got Game (1998) There's a Poison Goin' On (1999) Revolverlution (2002) New Whirl Odor (2005) How You Sell Soul to a Soulless People Who Sold Their Soul? (2007) Most of My Heroes Still Don't Appear on No Stamp (2012) The Evil Empire of Everything (2012) Man Plans God Laughs (2015) Nothing Is Quick in the Desert (2017) What You Gonna Do When the Grid Goes Down? (2020) with Confrontation Camp Studio albums Objects in the Mirror Are Closer Than They Appear (2001) with Prophets of Rage Studio albums Prophets of Rage (2017) Studio EPs The Party's Over (2016) Solo Studio albums Autobiography of Mistachuck (1996) The Black in Man (2014) If I Can't Change the People Around Me I Change the People Around Me (2016) Celebration of Ignorance (2018) Compilation albums Action (DJ Matheos Worldwide International Remix) – Most*hifi (featuring Chuck D. and Huggy) (2010) Don't Rhyme for the Sake of Riddlin' (as Mistachuck) (2012) References Other sources Selected publications External links Public Enemy website 1960 births Living people Adelphi University alumni American talk radio hosts African-American male rappers American male rappers African-American male singers American male singers African-American television producers Television producers from New York (state) Mercury Records artists Rappers from Queens, New York Singers from New York (state) People from Roosevelt, New York Public Enemy (band) members Rappers from New York (state) Prophets of Rage members Rap metal musicians 21st-century American rappers
5719
https://en.wikipedia.org/wiki/Cutaway%20%28filmmaking%29
Cutaway (filmmaking)
In film and video, a cutaway is the interruption of a continuously filmed action by inserting a view of something else. It is usually followed by a cut back to the first shot. A cutaway scene is the interruption of a scene with the insertion of another scene, generally unrelated or only peripherally related to the original scene. The interruption is usually quick, and is usually, although not always, ended by a return to the original scene. The effect is of commentary to the original scene, frequently comic in nature. Usage The most common use of cutaway shots in dramatic films is to adjust the pace of the main action, to conceal the deletion of some unwanted part of the main shot, or to allow the joining of parts of two versions of that shot. For example, a scene may be improved by cutting a few frames out of an actor's pause; a brief view of a listener can help conceal the break. Or the actor may fumble some of his lines in a group shot; rather than discarding a good version of the shot, the director may just have the actor repeat the lines for a new shot, and cut to that alternate view when necessary. Cutaways are also used often in older horror films in place of special effects. For example, a shot of a zombie getting its head cut off may, for instance, start with a view of an axe being swung through the air, followed by a close-up of the actor swinging it, then followed by a cut back to the now severed head. George A. Romero, creator of the Dead Series, and Tom Savini pioneered effects that removed the need for cutaways in horror films. In news broadcasting and documentary work, the cutaway is used much as it would be in fiction. On location, there is usually just one camera to film an interview, and it is usually trained on the interviewee. Often, there is also only one microphone. After the interview, the interviewer usually repeats his questions while he is being filmed, with pauses that act as if the answers are listened to. These shots can be used as cutaways. Cutaways to the interviewer, called noddies, can also be used to cover cuts. The cutaway does not necessarily contribute any dramatic content of its own, but is used to help the editor assemble a longer sequence. For that reason, editors choose cutaways related to the main action, such as another action or object in the same location. For example, if the main shot is of a man walking down an alley, possible cutaways may include a shot of a cat on a nearby dumpster or a shot of a person watching from a window overhead. See also Buffer shot Cross-cutting Dissolve (filmmaking) Fast cutting Flashback Jump cut L cut Match cut Shot reverse shot Slow cutting Cutscene Cutaway gag References Cinematography Cinematic techniques Film editing
5721
https://en.wikipedia.org/wiki/Coma
Coma
A coma is a deep state of prolonged unconsciousness in which a person cannot be awakened, fails to respond normally to painful stimuli, light, or sound, lacks a normal wake-sleep cycle and does not initiate voluntary actions. The person may experience respiratory and circulatory problems due to the body's inability to maintain normal bodily functions. People in a coma often require extensive medical care to maintain their health and prevent complications such as pneumonia or blood clots. Coma patients exhibit a complete absence of wakefulness and are unable to consciously feel, speak or move. Comas can be derived by natural causes, or can be medically induced. Clinically, a coma can be defined as the consistent inability to follow a one-step command. It can also be defined as a score of ≤ 8 on the Glasgow Coma Scale (GCS) lasting ≥ 6 hours. For a patient to maintain consciousness, the components of wakefulness and awareness must be maintained. Wakefulness describes the quantitative degree of consciousness, whereas awareness relates to the qualitative aspects of the functions mediated by the cortex, including cognitive abilities such as attention, sensory perception, explicit memory, language, the execution of tasks, temporal and spatial orientation and reality judgment. From a neurological perspective, consciousness is maintained by the activation of the cerebral cortex—the gray matter that forms the outer layer of the brain—and by the reticular activating system (RAS), a structure located within the brainstem. Etymology The term 'coma', from the Greek koma, meaning deep sleep, had already been used in the Hippocratic corpus (Epidemica) and later by Galen (second century AD). Subsequently, it was hardly used in the known literature up to the middle of the 17th century. The term is found again in Thomas Willis' (1621–1675) influential De anima brutorum (1672), where lethargy (pathological sleep), 'coma' (heavy sleeping), carus (deprivation of the senses) and apoplexy (into which carus could turn and which he localized in the white matter) are mentioned. The term carus is also derived from Greek, where it can be found in the roots of several words meaning soporific or sleepy. It can still be found in the root of the term 'carotid'. Thomas Sydenham (1624–89) mentioned the term 'coma' in several cases of fever (Sydenham, 1685). Signs and symptoms General symptoms of a person in a comatose state are: Inability to voluntarily open the eyes A non-existent sleep-wake cycle Lack of response to physical (painful) or verbal stimuli Depressed brainstem reflexes, such as pupils not responding to light Abnormal, difficulty, or irregular breathing or no breathing at all when coma was caused by cardiac arrest Scores between 3 and 8 on the Glasgow Coma Scale Causes Many types of problems can cause a coma. Forty percent of comatose states result from drug poisoning. Certain drug use under certain conditions can damage or weaken the synaptic functioning in the ascending reticular activating system (ARAS) and keep the system from properly functioning to arouse the brain. Secondary effects of drugs, which include abnormal heart rate and blood pressure, as well as abnormal breathing and sweating, may also indirectly harm the functioning of the ARAS and lead to a coma. Given that drug poisoning is the cause for a large portion of patients in a coma, hospitals first test all comatose patients by observing pupil size and eye movement, through the vestibular-ocular reflex. (See Diagnosis below.) 
The second most common cause of coma, which makes up about 25% of cases, is lack of oxygen, generally resulting from cardiac arrest. The Central Nervous System (CNS) requires a great deal of oxygen for its neurons. Oxygen deprivation in the brain, also known as hypoxia, causes sodium and calcium from outside of the neurons to decrease and intracellular calcium to increase, which harms neuron communication. Lack of oxygen in the brain also causes ATP exhaustion and cellular breakdown from cytoskeleton damage and nitric oxide production. Twenty percent of comatose states result from an ischemic stroke, brain hemorrhage, or brain tumor. During a stroke, blood flow to part of the brain is restricted or blocked. An ischemic stroke, brain hemorrhage, or brain tumor may cause restriction of blood flow. Lack of blood to cells in the brain prevents oxygen from getting to the neurons, and consequently causes cells to become disrupted and die. As brain cells die, brain tissue continues to deteriorate, which may affect the functioning of the ARAS, causing unconsciousness and coma. Comatose cases can also result from traumatic brain injury, excessive blood loss, malnutrition, hypothermia, hyperthermia, hyperammonemia, abnormal glucose levels, and many other biological disorders. Furthermore, studies show that 1 out of 8 patients with traumatic brain injury experience a comatose state. Heart-related causes of coma include cardiac arrest, myocardial infarction, heart failure, arrhythmia when severe, cardiogenic shock, myocarditis, and pericarditis. Respiratory arrest is the only lung condition to cause coma, but many different lung conditions can cause decreased level of consciousness, but don't reach coma. Other causes of coma include severe or persistent seizures, kidney failure, liver failure, hyperglycemia, hypoglycemia, and infections involving the brain, like meningitis and encephalitis. Pathophysiology Injury to either or both of the cerebral cortex or the reticular activating system (RAS) is sufficient to cause a person to enter coma. The cerebral cortex is the outer layer of neural tissue of the cerebrum of the brain. The cerebral cortex is composed of gray matter which consists of the nuclei of neurons, whereas the inner portion of the cerebrum is composed of white matter and is composed of the axons of neuron. White matter is responsible for perception, relay of the sensory input via the thalamic pathway, and many other neurological functions, including complex thinking. The RAS, on the other hand, is a more primitive structure in the brainstem which includes the reticular formation (RF). The RAS has two tracts, the ascending and descending tract. The ascending tract, or ascending reticular activating system (ARAS), is made up of a system of acetylcholine-producing neurons, and works to arouse and wake up the brain. Arousal of the brain begins from the RF, through the thalamus, and then finally to the cerebral cortex. Any impairment in ARAS functioning, a neuronal dysfunction, along the arousal pathway stated directly above, prevents the body from being aware of its surroundings. Without the arousal and consciousness centers, the body cannot awaken, remaining in a comatose state. The severity and mode of onset of coma depends on the underlying cause. There are two main subdivisions of a coma: structural and diffuse neuronal. A structural cause, for example, is brought upon by a mechanical force that brings about cellular damage, such as physical pressure or a blockage in neural transmission. 
A diffuse cause, by contrast, is limited to aberrations of cellular function, which fall under a metabolic or toxic subgroup. Toxin-induced comas are caused by extrinsic substances, whereas metabolic-induced comas are caused by intrinsic processes, such as body thermoregulation or ionic imbalances (e.g. sodium). For instance, severe hypoglycemia (low blood sugar) or hypercapnia (increased carbon dioxide levels in the blood) are examples of a metabolic diffuse neuronal dysfunction. Hypoglycemia or hypercapnia initially cause mild agitation and confusion, but progress to obtundation, stupor, and finally, complete unconsciousness. In contrast, coma resulting from a severe traumatic brain injury or subarachnoid hemorrhage can be instantaneous. The mode of onset may therefore be indicative of the underlying cause. Structural and diffuse causes of coma are not isolated from one another, as one can lead to the other in some situations. For instance, coma induced by a diffuse metabolic process, such as hypoglycemia, can result in a structural coma if it is not resolved. Another example is if cerebral edema, a diffuse dysfunction, leads to ischemia of the brainstem, a structural issue, due to the blockage of the circulation in the brain. Diagnosis Although diagnosis of coma is simple, investigating the underlying cause of onset can be rather challenging. As such, after stabilizing the patient's airway, breathing and circulation (the basic ABCs), various diagnostic tests, such as physical examinations and imaging tools (CT scan, MRI, etc.), are employed to assess the underlying cause of the coma. When an unconscious person enters a hospital, the hospital utilizes a series of diagnostic steps to identify the cause of unconsciousness. According to Young, the following steps should be taken when dealing with a patient possibly in a coma: Perform a general examination and medical history check Make sure the patient is in an actual comatose state and is not in a locked-in state or experiencing psychogenic unresponsiveness. Patients with locked-in syndrome present with voluntary movement of their eyes, whereas patients with psychogenic comas demonstrate active resistance to passive opening of the eyelids, with the eyelids closing abruptly and completely when the lifted upper eyelid is released (rather than slowly, asymmetrically and incompletely as seen in comas due to organic causes). Find the site of the brain that may be causing coma (e.g., brainstem, back of brain...) and assess the severity of the coma with the Glasgow Coma Scale Take blood work to see if drugs were involved or if it was a result of hypoventilation/hyperventilation Check for levels of serum glucose, calcium, sodium, potassium, magnesium, phosphate, urea, and creatinine Perform brain scans to observe any abnormal brain functioning using either CT or MRI scans Continue to monitor brain waves and identify seizures in the patient using EEGs Initial evaluation In the initial assessment of coma, it is common to gauge the level of consciousness on the AVPU (alert, vocal stimuli, painful stimuli, unresponsive) scale by noting whether the patient spontaneously exhibits actions and by assessing the patient's response to vocal and painful stimuli. More elaborate scales, such as the Glasgow Coma Scale, quantify an individual's reactions such as eye opening, movement and verbal response in order to indicate the extent of brain injury. The patient's score can vary from 3 (indicating severe brain injury and death) to 15 (indicating mild or no brain injury). 
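As a rough illustration of how the scale described above combines into a single score, the sketch below sums the three standard GCS components (eye opening 1–4, verbal response 1–5, motor response 1–6; these component ranges are standard for the scale but are not spelled out in the text above) and flags totals of 8 or below, the band this article associates with coma. It is a toy example under those assumptions, not a clinical tool.

```python
# Minimal sketch of Glasgow Coma Scale (GCS) scoring, for illustration only.
# Component ranges (eye 1-4, verbal 1-5, motor 1-6) are the standard published
# GCS ranges and are assumed here; the cutoff of 8 or less follows the coma
# definition given earlier in this article. Not a clinical tool.

def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    """Return the total GCS score (3-15) from its three components."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("components out of range: eye 1-4, verbal 1-5, motor 1-6")
    return eye + verbal + motor

def meets_coma_criterion(total: int) -> bool:
    """True if the total falls in the 3-8 band associated with coma."""
    return 3 <= total <= 8

if __name__ == "__main__":
    score = glasgow_coma_scale(eye=1, verbal=2, motor=4)   # example total: 7
    print(score, meets_coma_criterion(score))              # -> 7 True
```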
In those with deep unconsciousness, there is a risk of asphyxiation as the control over the muscles in the face and throat is diminished. As a result, those presenting to a hospital with coma are typically assessed for this risk ("airway management"). If the risk of asphyxiation is deemed high, doctors may use various devices (such as an oropharyngeal airway, nasopharyngeal airway or endotracheal tube) to safeguard the airway. Imaging and testing Imaging basically encompasses computed tomography (CAT or CT) scan of the brain, or MRI for example, and is performed to identify specific causes of the coma, such as hemorrhage in the brain or herniation of the brain structures. Special tests such as an EEG can also show a lot about the activity level of the cortex such as semantic processing, presence of seizures, and are important available tools not only for the assessment of the cortical activity but also for predicting the likelihood of the patient's awakening. The autonomous responses such as the skin conductance response may also provide further insight on the patient's emotional processing. In the treatment of traumatic brain injury (TBI), there are 4 examination methods that have proved useful: skull x-ray, angiography, computed tomography (CT), and magnetic resonance imaging (MRI). The skull x-ray can detect linear fractures, impression fractures (expression fractures) and burst fractures. Angiography is used on rare occasions for TBIs i.e. when there is suspicion of an aneurysm, carotid sinus fistula, traumatic vascular occlusion, and vascular dissection. A CT can detect changes in density between the brain tissue and hemorrhages like subdural and intracerebral hemorrhages. MRIs are not the first choice in emergencies because of the long scanning times and because fractures cannot be detected as well as CT. MRIs are used for the imaging of soft tissues and lesions in the posterior fossa which cannot be found with the use of CT. Body movements Assessment of the brainstem and cortical function through special reflex tests such as the oculocephalic reflex test (doll's eyes test), oculovestibular reflex test (cold caloric test), corneal reflex, and the gag reflex. Reflexes are a good indicator of what cranial nerves are still intact and functioning and is an important part of the physical exam. Due to the unconscious status of the patient, only a limited number of the nerves can be assessed. These include the cranial nerves number 2 (CN II), number 3 (CN III), number 5 (CN V), number 7 (CN VII), and cranial nerves 9 and 10 (CN IX, CN X). Assessment of posture and physique is the next step. It involves general observation about the patient's positioning. There are often two stereotypical postures seen in comatose patients. Decorticate posturing is a stereotypical posturing in which the patient has arms flexed at the elbow, and arms adducted toward the body, with both legs extended. Decerebrate posturing is a stereotypical posturing in which the legs are similarly extended (stretched), but the arms are also stretched (extended at the elbow). The posturing is critical since it indicates where the damage is in the central nervous system. A decorticate posturing indicates a lesion (a point of damage) at or above the red nucleus, whereas a decerebrate posturing indicates a lesion at or below the red nucleus. In other words, a decorticate lesion is closer to the cortex, as opposed to a decerebrate posturing which indicates that the lesion is closer to the brainstem. 
Pupil size Pupil assessment is often a critical portion of a comatose examination, as it can give information as to the cause of the coma; the following table is a technical, medical guideline for common pupil findings and their possible interpretations: Severity A coma can be classified as (1) supratentorial (above Tentorium cerebelli), (2) infratentorial (below Tentorium cerebelli), (3) metabolic or (4) diffused. This classification is merely dependent on the position of the original damage that caused the coma, and does not correlate with severity or the prognosis. The severity of coma impairment however is categorized into several levels. Patients may or may not progress through these levels. In the first level, the brain responsiveness lessens, normal reflexes are lost, the patient no longer responds to pain and cannot hear. The Rancho Los Amigos Scale is a complex scale that has eight separate levels, and is often used in the first few weeks or months of coma while the patient is under closer observation, and when shifts between levels are more frequent. Treatment Treatment for people in a coma will depend on the severity and cause of the comatose state. Upon admittance to an emergency department, coma patients will usually be placed in an Intensive Care Unit (ICU) immediately, where maintenance of the patient's respiration and circulation become a first priority. Stability of their respiration and circulation is sustained through the use of intubation, ventilation, administration of intravenous fluids or blood and other supportive care as needed. Continued care Once a patient is stable and no longer in immediate danger, there may be a shift of priority from stabilizing the patient to maintaining the state of their physical wellbeing. Moving patients every 2–3 hours by turning them side to side is crucial to avoiding bed sores as a result of being confined to a bed. Moving patients through the use of physical therapy also aids in preventing atelectasis, contractures or other orthopedic deformities which would interfere with a coma patient's recovery. Pneumonia is also common in coma patients due to their inability to swallow which can then lead to aspiration. A coma patient's lack of a gag reflex and use of a feeding tube can result in food, drink or other solid organic matter being lodged within their lower respiratory tract (from the trachea to the lungs). This trapping of matter in their lower respiratory tract can ultimately lead to infection, resulting in aspiration pneumonia. Coma patients may also deal with restlessness or seizures. As such, soft cloth restraints may be used to prevent them from pulling on tubes or dressings and side rails on the bed should be kept up to prevent patients from falling. Caregivers Coma has a wide variety of emotional reactions from the family members of the affected patients, as well as the primary care givers taking care of the patients. Research has shown that the severity of injury causing coma was found to have no significant impact compared to how much time has passed since the injury occurred. Common reactions, such as desperation, anger, frustration, and denial are possible. The focus of the patient care should be on creating an amicable relationship with the family members or dependents of a comatose patient as well as creating a rapport with the medical staff. Although there is heavy importance of a primary care taker, secondary care takers can play a supporting role to temporarily relieve the primary care taker's burden of tasks. 
Prognosis Comas can last from several days to, in particularly extreme cases, years. Some patients eventually gradually come out of the coma, some progress to a vegetative state or a minimally conscious state, and others die. Some patients who have entered a vegetative state go on to regain a degree of awareness; and in some cases may remain in vegetative state for years or even decades (the longest recorded period is 42 years). Predicted chances of recovery will differ depending on which techniques were used to measure the patient's severity of neurological damage. Predictions of recovery are based on statistical rates, expressed as the level of chance the person has of recovering. Time is the best general predictor of a chance of recovery. For example, after four months of coma caused by brain damage, the chance of partial recovery is less than 15%, and the chance of full recovery is very low. The outcome for coma and vegetative state depends on the cause, location, severity and extent of neurological damage. A deeper coma alone does not necessarily mean a slimmer chance of recovery; similarly, a milder coma does not indicate a higher chance of recovery. The most common cause of death for a person in a vegetative state is secondary infection such as pneumonia, which can occur in patients who lie still for extended periods. Recovery People may emerge from a coma with a combination of physical, intellectual, and psychological difficulties that need special attention. It is common for coma patients to awaken in a profound state of confusion and experience dysarthria, the inability to articulate any speech. Recovery is usually gradual. In the first days, the patient may only awaken for a few minutes, with increased duration of wakefulness as their recovery progresses, and they may eventually recover full awareness. That said, some patients may never progress beyond very basic responses. There are reports of people coming out of a coma after long periods of time. After 19 years in a minimally conscious state, Terry Wallis spontaneously began speaking and regained awareness of his surroundings. A man with brain-damage and trapped in a coma-like state for six years was brought back to consciousness in 2003 by doctors who planted electrodes deep inside his brain. The method, called deep brain stimulation (DBS), successfully roused communication, complex movement and eating ability in the 38-year-old American man with a traumatic brain injury. His injuries left him in a minimally conscious state, a condition akin to a coma but characterized by occasional, but brief, evidence of environmental and self-awareness that coma patients lack. Society and culture Research by Eelco Wijdicks on the depiction of comas in movies was published in Neurology in May 2006. Wijdicks studied 30 films (made between 1970 and 2004) that portrayed actors in prolonged comas, and he concluded that only two films accurately depicted the state of a coma patient and the agony of waiting for a patient to awaken: Reversal of Fortune (1990) and The Dreamlife of Angels (1998). The remaining 28 were criticized for portraying miraculous awakenings with no lasting side effects, unrealistic depictions of treatments and equipment required, and comatose patients remaining muscular and tanned. Bioethics A person in a coma is said to be in an unconscious state. Perspectives on personhood, identity and consciousness come into play when discussing the metaphysical and bioethical views on comas. 
It has been argued that unawareness should be just as ethically relevant and important as a state of awareness and that there should be metaphysical support of unawareness as a state. In the ethical discussions about disorders of consciousness (DOCs), two abilities are usually considered as central: experiencing well-being and having interest. Well-being can broadly be understood as the positive effect related to what makes life good (according to specific standards) for the individual in question. The only condition for well-being broadly considered is the ability to experience its 'positiveness'. That said, because experiencing positiveness is a basic emotional process with phylogenetic roots, it is likely to occur at a completely unaware level and therefore, introduces the idea of an unconscious well-being. As such, the ability of having interests, is crucial for describing two abilities which those with comas are deficient in. Having an interest in a certain domain can be understood as having a stake in something that can affect what makes our life good in that domain. An interest is what directly and immediately improves life from a certain point of view or within a particular domain, or greatly increases the likelihood of life improvement enabling the subject to realize some good. That said, sensitivity to reward signals is a fundamental element in the learning process, both consciously and unconsciously. Moreover, the unconscious brain is able to interact with its surroundings in a meaningful way and to produce meaningful information processing of stimuli coming from the external environment, including other people. According to Hawkins, "1. A life is good if the subject is able to value, or more basically if the subject is able to care. Importantly, Hawkins stresses that caring has no need for cognitive commitment, i.e. for high-level cognitive activities: it requires being able to distinguish something, track it for a while, recognize it over time, and have certain emotional dispositions vis-à-vis something. 2. A life is good if the subject has the capacity for relationship with others, i.e. for meaningfully interacting with other people." This suggests that unawareness may (at least partly) fulfill both conditions identified by Hawkins for life to be good for a subject, thus making the unconscious ethically relevant. See also Brain death, lack of activity in both cortex, and lack of brainstem function Coma scale, a system to assess the severity of coma Locked-in syndrome, paralysis of most muscles, except ocular muscles of the eyes, while patient is conscious Near-death experience, type of experience registered by people in a state of coma. Persistent vegetative state (vegetative coma), deep coma without detectable awareness. Damage to the cortex, with an intact brainstem. Process Oriented Coma Work, for an approach to working with residual consciousness in comatose patients. Suspended animation, the inducement of a temporary cessation or decay of main body functions. References External links Intensive care medicine Emergency medicine Symptoms and signs of mental disorders
5722
https://en.wikipedia.org/wiki/Call%20of%20Cthulhu%20%28role-playing%20game%29
Call of Cthulhu (role-playing game)
Call of Cthulhu is a horror fiction role-playing game based on H. P. Lovecraft's story of the same name and the associated Cthulhu Mythos. The game, often abbreviated as CoC, is published by Chaosium; it was first released in 1981 and is in its seventh edition, with licensed foreign language editions available as well. Its game system is based on Chaosium's Basic Role-Playing (BRP) with additions for the horror genre. These include special rules for sanity and luck. Gameplay Setting Call of Cthulhu is set in a darker version of our world based on H. P. Lovecraft's observation (from his essay, "Supernatural Horror in Literature") that "The oldest and strongest emotion of mankind is fear, and the strongest kind of fear is fear of the unknown." The original edition, first published in 1981, uses Basic Role-Playing as its basis and is set in the 1920s, the setting of many of Lovecraft's stories. The Cthulhu by Gaslight supplement blends the occult and Holmesian mystery and is mostly set in England during the 1890s. Cthulhu Now and Delta Green are set in a modern/1980s era and deal with conspiracies. Recent settings include 1000 AD (Cthulhu: Dark Ages), the 23rd century (Cthulhu Rising) and Ancient Rome (Cthulhu Invictus). The protagonists may also travel to places that are not of this earth, such as the Dreamlands (which can be accessed through dreams as well as being physically connected to the earth), other planets, or the voids of space. In keeping with the Lovecraftian theme, the gamemaster is called the Keeper of Arcane Lore ("the keeper"), while player characters are called Investigators of the Unknown ("investigators"). While predominantly focused on Lovecraftian fiction and horror, playing in the Cthulhu Mythos is not required. The system also includes ideas for non-Lovecraft games, such as using folk horror or the settings of other authors and horror movies, or with entirely custom settings and creatures by the gamemaster and/or players. Mechanics CoC uses the Basic Role-Playing system first developed for RuneQuest and used in other Chaosium games. It is skill-based, with player characters getting better with their skills by succeeding at using them for as long as they stay functionally healthy and sane. They do not, however, gain hit points and do not become significantly harder to kill. The game does not use levels. CoC uses percentile dice (with results ranging from 1 to 100) to determine success or failure. Every player statistic is intended to be compatible with the notion that there is a probability of success for a particular action given what the player is capable of doing. For example, an artist may have a 75% chance of being able to draw something (represented by having 75 in Art skill), and thus rolling a number under 75 would yield a success. Rolling or less of the skill level (1-15 in the example) would be a "special success" (or an "impale" for combat skills) and would yield some extra bonus to be determined by the keeper. For example, the artist character might draw especially well or especially fast, or catch some unapparent detail in the drawing. The players take the roles of ordinary people drawn into the realm of the mysterious: detectives, criminals, scholars, artists, war veterans, etc. Often, happenings begin innocently enough, until more and more of the workings behind the scenes are revealed. 
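To make the percentile mechanic described above concrete, here is a small sketch of a skill check. The function and names are illustrative, not taken from any published Chaosium rulebook; the special-success band is generalized from the worked example in the text (a roll of 1–15 against a skill of 75, i.e. one fifth of the skill value), so treat that fraction as an assumption drawn from that example.

```python
# Illustrative sketch of a Call of Cthulhu-style percentile skill check.
# Names and structure are hypothetical; the special-success threshold of one
# fifth of the skill is generalized from the 1-15 band quoted in the text for
# a skill of 75, not copied from an official rules reference.
import random
from typing import Optional

def skill_check(skill: int, roll: Optional[int] = None) -> str:
    """Roll d100 against a skill rating (1-100) and classify the result."""
    if roll is None:
        roll = random.randint(1, 100)           # percentile dice: 1-100
    special_threshold = max(1, skill // 5)      # e.g. skill 75 -> rolls 1-15
    if roll <= special_threshold:
        return "special success"                # extra bonus, keeper's choice
    if roll <= skill:
        return "success"
    return "failure"

if __name__ == "__main__":
    # An artist with Art 75: a roll of 12 is special, 60 succeeds, 80 fails.
    for r in (12, 60, 80):
        print(r, skill_check(75, roll=r))
```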
As the characters learn more of the true horrors of the world and the irrelevance of humanity, their sanity (represented by "Sanity Points", abbreviated SAN) inevitably withers away. The game includes a mechanism for determining how damaged a character's sanity is at any given point; encountering the horrific beings usually triggers a loss of SAN points. To gain the tools they need to defeat the horrors – mystic knowledge and magic – the characters may end up losing some of their sanity, though other means such as pure firepower or simply outsmarting one's opponents also exist. CoC has a reputation as a game in which it is quite common for a player character to die in gruesome circumstances or end up in a mental institution. Eventual triumph of the players is not guaranteed. History The original conception of Call of Cthulhu was Dark Worlds, a game commissioned by the publisher Chaosium but never published. Sandy Petersen contacted them regarding writing a supplement for their popular fantasy game RuneQuest set in Lovecraft's Dreamlands. He took over the writing of Call of Cthulhu, and the game was released in 1981. Petersen oversaw the first four editions with only minor changes to the system. Once he left, development was continued by Lynn Willis, who was credited as co-author in the fifth and sixth editions. After the death of Willis, Mike Mason became Call of Cthulhu line editor in 2013, continuing its development with Paul Fricker. Together they made the most significant rules alterations of any edition to date, culminating in the release of the 7th edition in 2014. Editions Early releases For those grounded in the RPG tradition, the very first release of Call of Cthulhu created a brand new framework for table-top gaming. Rather than the traditional format established by Dungeons & Dragons, which often involved the characters wandering through caves or tunnels and fighting different types of monsters, Sandy Petersen introduced the concept of the Onion Skin: interlocking layers of information and nested clues that lead the player characters from seemingly minor investigations into a missing person to discovering mind-numbingly awful, global conspiracies to destroy the world. Unlike its predecessor games, CoC assumed that most investigators would not survive alive or sane, and that the only safe way to deal with the vast majority of nasty things described in the rule books was to run away. A well-run CoC campaign should engender a sense of foreboding and inevitable doom in its players. The style and setting of the game, in a relatively modern time period, created an emphasis on real-life settings, character research, and thinking one's way around trouble. The first book of Call of Cthulhu adventures was Shadows of Yog-Sothoth. In this work, the characters come upon a secret society's foul plot to destroy mankind, and pursue it first near to home and then in a series of exotic locations. This template was to be followed in many subsequent campaigns, including Fungi from Yuggoth (later known as Curse of Cthulhu and Day of the Beast), Spawn of Azathoth, and possibly the most highly acclaimed, Masks of Nyarlathotep. Shadows of Yog-Sothoth is important not only because it represents the first published addition to the boxed first edition of Call of Cthulhu, but because its format defined a new way of approaching a campaign of linked RPG scenarios involving actual clues for the would-be detectives amongst the players to follow and link in order to uncover the dastardly plots afoot.
Its format has been used by every other campaign-length Call of Cthulhu publication. The standard of CoC scenarios was well received by independent reviewers. The Asylum and Other Tales, a series of stand alone articles released in 1983, rated an overall 9/10 in Issue 47 of White Dwarf magazine. The standard of the included 'clue' material varies from scenario to scenario, but reached its zenith in the original boxed versions of the Masks of Nyarlathotep and Horror on the Orient Express campaigns. Inside these one could find matchbooks and business cards apparently defaced by non-player characters, newspaper cuttings and (in the case of Orient Express) period passports to which players could attach their photographs, increasing the sense of immersion. Indeed, during the period that these supplements were produced, third party campaign publishers strove to emulate the quality of the additional materials, often offering separately-priced 'deluxe' clue packages for their campaigns. Additional milieux were provided by Chaosium with the release of Dreamlands, a boxed supplement containing additional rules needed for playing within the Lovecraft Dreamlands, a large map and a scenario booklet, and Cthulhu By Gaslight, another boxed set which moved the action from the 1920s to the 1890s. Cthulhu Now In 1987, Chaosium issued the supplement titled Cthulhu Now, a collection of rules, supplemental source materials and scenarios for playing Call of Cthulhu in the present day. This proved to be a very popular alternative milieu, so much so that much of the supplemental material is now included in the core rule book. Lovecraft Country Lovecraft Country was a line of supplements for Call of Cthulhu released in 1990. These supplements were overseen by Keith Herber and provided backgrounds and adventures set in Lovecraft's fictional towns of Arkham, Kingsport, Innsmouth, Dunwich, and their environs. The intent was to give investigators a common base, as well as to center the action on well-drawn characters with clear motivations. Terror Australis In 1987, Terror Australis: Call of Cthulhu in the Land Down Under was published. In 2018, a revised and updated version of the 1987 game was reissued, with about triple the content and two new games. It requires the Call of Cthulhu Keeper's Rulebook (7th Edition) and is usable with Pulp Cthulhu. Recent history In the years since the collapse of the Mythos collectible card game (production ceased in 1997), the release of CoC books has been very sporadic, with up to a year between releases. Chaosium struggled with near bankruptcy for many years before finally starting their upward climb again. 2005 was Chaosium's busiest year for many years, with 10 releases for the game. Chaosium took to marketing "monographs"—short books by individual writers with editing and layout provided out-of-house—directly to the consumer, allowing the company to gauge market response to possible new works. The range of times and places in which the horrors of the Mythos can be encountered was also expanded in late 2005 onward with the addition of Cthulhu Dark Ages by Stéphane Gesbert, which gives a framework for playing games set in 11th century Europe, Secrets of Japan by Michael Dziesinski for gaming in modern-day Japan, and Secrets of Kenya by David Conyers for gaming in interwar period Africa. In July 2011, Chaosium announced it would re-release a 30th anniversary edition of the CoC 6th edition role-playing game. 
This 320-page book features thick (3 mm) leatherette hardcovers with the front cover and spine stamped with gold foil. The interior pages are printed in black ink, on 90 gsm matte art paper. The binding is thread sewn, square backed. Chaosium offered a one-time printing of this Collector's Edition. On May 28, 2013, a crowdfunding campaign on Kickstarter for the 7th edition of Call of Cthulhu was launched with a goal of $40,000; it ended on June 29 of the same year having collected $561,836. It included more major revisions than any previous edition, and also split the core rules into two books, a Player's Guide and Keeper's Guide. Problems and delays fulfilling the Kickstarters for the 7th edition of Call of Cthulhu led Greg Stafford and Sandy Petersen (who had both left in 1998) to return to an active role at Chaosium in June 2015. The available milieux were also expanded with the release of Cthulhu Through the Ages, a supplement containing additional rules needed for playing within the Roman Empire, Mythic Iceland, a futuristic micro-setting, and the End Times, where the monsters of the mythos attempt to subjugate or destroy the world. Licenses Chaosium has licensed other publishers to create supplements, video, card and board games using the setting and the Call of Cthulhu brand. Many, such as Delta Green by Pagan Publishing and Arkham Horror by Fantasy Flight, have moved away completely from Call of Cthulhu. Other licensees have included Infogrames, Miskatonic River Press, Theater of the Mind Enterprises, Triad Entertainment, Games Workshop, RAFM, Goodman Games, Grenadier Models Inc. and Yog-Sothoth.com. These supplements may be set in different time frames or even different game universes from the original game. Trail of Cthulhu In February 2008, Pelgrane Press published Trail of Cthulhu, a stand-alone game created by Kenneth Hite using the GUMSHOE System developed by Robin Laws. GUMSHOE is specifically designed to be used in investigative games. Shadows of Cthulhu In September 2008, Reality Deviant Publications published Shadows of Cthulhu, a supplement that brings Lovecraftian gaming to Green Ronin's True20 system. Realms of Cthulhu In October 2009, Reality Blurs published Realms of Cthulhu, a supplement for Pinnacle Entertainment's Savage Worlds system. Delta Green Pagan Publishing published Delta Green, a series of supplements originally set in the 1990s, although later supplements add support for playing closer to the present day. In these, player characters are agents of a secret agency known as Delta Green, which fights against creatures from the Mythos and conspiracies related to them. Arc Dream Publishing released a new version of Delta Green in 2016 as a standalone game, partially using the mechanics from Call of Cthulhu. d20 Call of Cthulhu In 2001, a stand-alone version of Call of Cthulhu was released by Wizards of the Coast, for the d20 system. Intended to preserve the feeling of the original game, the d20 conversion of the game rules was supposed to make the game more accessible to the large D&D player base. The d20 system also made it possible to use Dungeons & Dragons characters in Call of Cthulhu, as well as to introduce the Cthulhu Mythos into Dungeons & Dragons games. The d20 version of the game is no longer supported by Wizards as per their contract with Chaosium. Chaosium included d20 stats as an appendix in three releases (see Lovecraft Country), but have since dropped the "dual stat" idea.
Card games Mythos was a collectible card game (CCG) based on the Cthulhu Mythos that Chaosium produced and marketed during the mid-1990s. While generally praised for its fast gameplay and unique mechanics, it ultimately failed to gain a very large market presence. It bears mention because its eventual failure brought the company to hard times that affected its ability to produce material for Call of Cthulhu. Call of Cthulhu: The Card Game is a second collectible card game, produced by Fantasy Flight Games. Miniatures The first licensed Call of Cthulhu gaming miniatures were sculpted by Andrew Chernack and released by Grenadier Models in boxed sets and blister packs in 1983. The license was later transferred to RAFM. As of 2011, RAFM still produce licensed Call of Cthulhu models sculpted by Bob Murch. Both lines include investigator player character models and the iconic monsters of the Cthulhu mythos. As of July 2015, Reaper Miniatures started its third "Bones Kickstarter", a Kickstarter intended to help the company migrate some miniatures from metal to plastic, and introducing some new ones. Among the stretch goals was the second $50 expansion, devoted to the Mythos, with miniatures such as Cultists, Deep Ones, Mi'Go, and an extra $15 Shub-Niggurath "miniature" (it is, at least, 6x4 squares). It is expected for those miniatures to remain in the Reaper Miniatures catalogue after the Kickstarter project finishes. In 2020 Chaosium announced a license agreement with Ardacious for Call of Cthulhu virtual miniatures to be released on their augmented reality app Ardent Roleplay. Video games Shadow of the Comet Shadow of the Comet (later repackaged as Call of Cthulhu: Shadow of the Comet) is an adventure game developed and released by Infogrames in 1993. The game is based on H. P. Lovecraft's Cthulhu Mythos and uses many elements from Lovecraft's The Dunwich Horror and The Shadow Over Innsmouth. A follow-up game, Prisoner of Ice, is not a direct sequel. Prisoner of Ice Prisoner of Ice (also Call of Cthulhu: Prisoner of Ice) is an adventure game developed and released by Infogrames for the PC and Macintosh computers in 1995 in America and Europe. It is based on H. P. Lovecraft's Cthulhu Mythos, particularly At the Mountains of Madness, and is a follow-up to Infogrames' earlier Shadow of the Comet. In 1997, the game was ported to the Sega Saturn and PlayStation exclusively in Japan. Dark Corners of the Earth A licensed first-person shooter adventure game by Headfirst Productions, based on Call of Cthulhu campaign Escape from Innsmouth and released by Bethesda Softworks in 2005/2006 for the PC and Xbox. The Wasted Land In April 2011, Chaosium and new developer Red Wasp Design announced a joint project to produce a mobile video game based on the Call of Cthulhu RPG, entitled Call of Cthulhu: The Wasted Land. The game was released on January 30, 2012. Cthulhu Chronicles In 2018, Metarcade produced Cthulhu Chronicles, a game for iOS with a campaign of nine mobile interactive fiction stories set in 1920s England based on Call of Cthulhu. The first five stories were released on July 10, 2018. Call of Cthulhu Call of Cthulhu is a survival horror role-playing video game developed by Cyanide and published by Focus Home Interactive for PlayStation 4, Xbox One and Windows. The game features a semi-open world environment and incorporates themes of Lovecraftian and psychological horror into a story which includes elements of investigation and stealth. It is inspired by H. P. 
Lovecraft's short story "The Call of Cthulhu". Reception Multiple reviews of various editions appeared in Space Gamer/Fantasy Gamer. In the March 1982 edition (No. 49), William A. Barton noted that there were some shortcomings resulting from an assumption by the designers that players would have access to rules from RuneQuest that were not in Call of Cthulhu, but otherwise Barton called the game "an excellent piece of work.... The worlds of H. P. Lovecraft are truly open for the fantasy gamer." In the October–November 1987 edition (No. 80), Lisa Cohen reviewed the 3rd edition, saying, "This book can be for collectors of art, players, or anyone interested in knowledge about old time occult. It is the one reprint that is worth the money." Multiple reviews of various editions appeared in White Dwarf. In the August 1982 edition (Issue 32), Ian Bailey admired much about the first edition of the game; his only criticism was that the game was too "U.S. orientated and consequently any Keeper... who wants to set his game in the UK will have a lot of research to do." Bailey gave the game an above average rating of 9 out of 10, saying, "Call of Cthulhu is an excellent game and a welcome addition to the world of role-playing." In the August 1986 edition (Issue 80), Ashley Shepherd thought the inclusion of much material in the 3rd edition that had been previously published as supplementary books "makes the game incredibly good value." He concluded, "This package is going to keep Call of Cthulhu at the front of the fantasy game genre." Several reviews of various editions and supplements also appeared in Dragon. In the May 1982 edition (Issue 61), David Cook thought the rules were too complex for new gamers, but said, "It is a good game for experienced role-playing gamers and ambitious judges, especially if they like Lovecraft’s type of story." In the August 1987 edition (Issue 124), Ken Rolston reviewed the Terror Australis supplement for the 3rd edition that introduced an Australian setting in the 1920s. He thought that "Literate, macabre doom shambles from each page. Good reading, and a good campaign setting for COC adventures." In the October 1988 edition (Issue 138), Ken Rolston gave an overview of the 3rd edition, and placed it ahead of its competitors due to superior campaign setting, tone and atmosphere, the player characters as investigators, and the use of realistic player handouts such as authentic-looking newspaper clippings. Rolston concluded, "CoC is one of role-playing’s acknowledged classics. Its various supplements over the years have maintained an exceptional level of quality; several, including Shadows of Yog-Sothoth and Masks of Nyarlathotep, deserve consideration among the greatest pinnacles of the fantasy role-playing game design." In the June 1990 edition (Issue 158), Jim Bambra liked the updated setting of the 4th edition, placing the game firmly in Lovecraft's 1920s. He also liked the number of adventures included in the 192-page rulebook: "The fourth edition contains enough adventures to keep any group happily entertained and sanity blasted." However, while Bambra questioned whether owners of the 2nd or 3rd edition would get good value for their money — "You lack only the car-chase rules and the improved layout of the three books in one. The rest of the material has received minor editing but no substantial changes" — he strongly recommended the new edition to newcomers, saying, "If you don’t already play CoC, all I can do is urge you to give it a try....
discover for yourself why it has made so many converts since its release." In the October 1992 edition (Issue 186), Rick Swan admitted that he was skeptical that the 5th edition would offer anything new, but instead found that the new edition benefited from "fresh material, judicious editing, and thorough polishes." He concluded, "Few RPGs exceed the CoC game’s scope or match its skillful integration of background and game systems. And there’s no game more fun." In his 1990 book The Complete Guide to Role-Playing Games, game critic Rick Swan gave the game a top rating of 4 out of 4, calling it "a masterpiece, easily the best horror RPG ever published and possibly the best RPG, period ... breathtaking in scope and as richly textured as a fine novel. All role-players owe it to themselves to experience this truly remarkable game." In Issue 68 of Challenge, Craig Sheeley reviewed the fifth edition and liked the revisions. "The entire character generation process is highly streamlined and easily illustrated on a two-page flowchart." Sheeley also liked the inclusion of material from all three of CoC's settings (1890s, 1920s, 1990s), calling it "One of the best features of this edition." And he was very impressed with the layout of the book, commenting, "The organization and format of this book deserve special mention. I hold that every game company should study this book to learn what to do right." He concluded, "I am seriously impressed with this product. From cover to cover, it’s well done." In a reader poll conducted by UK magazine Arcane in 1996 to determine the 50 most popular roleplaying games of all time, Call of Cthulhu was ranked 1st. Editor Paul Pettengale commented: "Call of Cthulhu is fully deserved of the title as the most popular roleplaying system ever - it's a game that doesn't age, is eminently playable, and which hangs together perfectly. The system, even though it's over ten years old, is still one of the very best you'll find in any roleplaying game. Also, there's not a referee in the land who could say they've read every Lovecraft inspired book or story going, so there's a pretty-well endless supply of scenario ideas. It's simply marvellous." Scott Taylor for Black Gate in 2013 rated Call of Cthulhu as #4 in the top ten role-playing games of all time, saying "With various revisions, but never a full rewrite of its percentile-based system, Call of Cthulhu might be antiquated by today's standards, but remember it is supposed to be set in the 1920s, so to me that seems more than appropriate." Awards The game has won multiple awards: 1982, Origins Awards, Best Role Playing Game 1981, Game Designer's Guild, Select Award 1985, Games Day Award, Best Role Playing Game 1986, Games Day Award, Best Contemporary Role Playing Game 1987, Games Day Award, Best Other Role Playing Game 1993, Leeds Wargame Club, Best Role Playing Game 1994, Gamer's Choice Award, Hall of Fame 1995, Origins Award, Hall of Fame 2001, Origins Award, Best Graphic Presentation of a Book Product (for Call of Cthulhu 20th anniversary edition) 2002, Gold Ennie Award for "Best Graphic Design and Layout".
2003, GamingReport.com readers voted it as the number-one Gothic/Horror RPG 2014, ENNIE Awards - Call of Cthulhu 7th Edition Quickstart - 'Best Free Product (Silver)' 2016, UK Games Expo Awards - 'Best Roleplaying Game' 2017, Beasts of War Awards - 'Best RPG' 2017, Dragon Con Awards - 'Best Science Fiction or Fantasy Miniatures/Collectible Card/Role Playing Game' (for Pulp Cthulhu rules) 2017, ENNIE Awards - 'Best Supplement (Gold)' (for Pulp Cthulhu rules) 2017, ENNIE Awards - 'Best Cover Art (Gold)' (for Call of Cthulhu Investigator Handbook) 2017, ENNIE Awards - 'Best Cartography (Gold)' (for Call of Cthulhu Keeper Screen Pack) 2017, ENNIE Awards - 'Best Aid/Accessory (Gold)' (for Call of Cthulhu Keeper Screen Pack) 2017, ENNIE Awards - 'Best Production Values (Gold)' (for Call of Cthulhu Slipcase Set) 2018, Tabletop Gaming Magazine 'Top 150 Greatest Games of All Time' - Call of Cthulhu - Ranked #3 (Reader Poll) 2019, ENNIE Awards - 'Best Rules (Gold)' (for Call of Cthulhu Starter Set) See also Arkham Horror - a cooperative board game based on the Mythos. Cthulhu Live - a live action role-playing game version of Call of Cthulhu. CthulhuTech - another role-playing game, conceived for a "Cthulhu science-fiction setting". List of Call of Cthulhu books References Sources Review Further reading External links American role-playing games Basic Role-Playing System Cthulhu Mythos role-playing games D20 System ENnies winners Historical role-playing games Horror role-playing games Origins Award winners Role-playing games based on novels Role-playing games introduced in 1981 Sandy Petersen games Works based on The Call of Cthulhu
5723
https://en.wikipedia.org/wiki/Constellations%20%28journal%29
Constellations (journal)
Constellations: An International Journal of Critical and Democratic Theory is a quarterly peer-reviewed academic journal of critical post-Marxist and democratic theory and the successor of Praxis International. It is currently edited by Simone Chambers, Cristina Lafont, and Hubertus Buchstein. Ertug Tombus has been the journal's managing editor since 2009. Seyla Benhabib, Nancy Fraser and Andrew Arato are the journal's co-founders and former editors. With international editorial contributors, it is based at the New School in New York. Nadia Urbinati, Amy Allen, Jean L. Cohen, and Andreas Kalyvas are former co-editors. References External links Sociology journals Academic journals established in 1994 Quarterly journals Wiley-Blackwell academic journals English-language journals
5724
https://en.wikipedia.org/wiki/Cape%20Breton%20Island
Cape Breton Island
Cape Breton Island is a rugged and irregularly shaped island on the Atlantic coast of North America and part of the province of Nova Scotia, Canada. The island accounts for 18.7% of Nova Scotia's total area. Although the island is physically separated from the Nova Scotia peninsula by the Strait of Canso, the Canso Causeway connects it to mainland Nova Scotia. The island is east-northeast of the mainland, with its northern and western coasts fronting on the Gulf of Saint Lawrence and its western coast forming the eastern limits of the Northumberland Strait. The eastern and southern coasts front the Atlantic Ocean, with the eastern coast also forming the western limits of the Cabot Strait. Its landmass slopes upward from south to north, culminating in the highlands of its northern cape. One of the world's larger saltwater lakes, Bras d'Or ("Arm of Gold" in French), dominates the island's centre. The total population at the 2016 census numbered 132,010 Cape Bretoners, which is approximately 15% of the provincial population. Cape Breton Island has experienced a decline in population of approximately 2.9% since the 2011 census. Approximately 75% of the island's population is in the Cape Breton Regional Municipality (CBRM), which includes all of Cape Breton County and is often referred to as Industrial Cape Breton. Toponymy Cape Breton Island takes its name from its easternmost point, Cape Breton. At least two theories for this name have been put forward. The first connects it to the Bretons of northwestern France, who discovered Canada. A Portuguese mappa mundi of 1516–1520 includes the label "terra q(ue) foy descuberta por Bertomes" in the vicinity of the Gulf of St Lawrence, which means "land discovered by Bretons". The second connects it to the Gascon fishing port of Capbreton. Basque whalers and fishermen traded with the Miꞌkmaq of this island from the early sixteenth century. The name "Cape Breton" first appears on a map of 1516, as C(abo) dos Bretoes, and became the general name for both the island and the cape toward the end of the 16th century. William Francis Ganong argued that the Portuguese term Bertomes referred to Britons, and that the name should be interpreted as "Cape of the English". This theory is now disputed, because the Portuguese term Bertomes denoted the Brittonic-speaking peoples of Wales, Cornwall, Brittany and Galicia, who had close ties to Portugal. History Cape Breton Island's first residents were likely archaic maritime natives, ancestors of the Mi'kmaq people. These peoples and their progeny inhabited the island (known as Unama'ki) for several thousand years and continue to live there to this day. Their traditional lifestyle centred around hunting and fishing because of the unfavourable agricultural conditions of their maritime home. This ocean-centric lifestyle did, however, make them among the first Indigenous peoples to encounter European explorers and sailors fishing in the St Lawrence Estuary. Italian explorer John Cabot, sailing for the English crown, reportedly visited the island in 1497. However, European histories and maps of the period are of too poor quality to be sure whether Cabot first visited Newfoundland or Cape Breton Island. This discovery is commemorated by Cape Breton's Cabot Trail, and by the Cabot's Landing Historic Site & Provincial Park, near the village of Dingwall. The local Mi'kmaq peoples began trading with European fishermen when the fishermen began landing in their territories as early as the 1520s.
In about 1521–22, the Portuguese under João Álvares Fagundes established a fishing colony on the island. As many as two hundred settlers lived in a village, the name of which is not known, located according to some historians at what is now Ingonish on the island's northeastern peninsula. These fishermen traded with the local population but did not maintain a permanent settlement. This Portuguese colony's fate is unknown, but it is mentioned as late as 1570. During the Anglo-French War of 1627 to 1629, under King Charles I, the Kirkes took Quebec City; James Stewart, 4th Lord Ochiltree, planted a colony on Unama'ki at Baleine, Nova Scotia; and William Alexander the younger, son of the 1st Earl of Stirling, established the first incarnation of "New Scotland" at Port Royal. These claims, and the larger project of European colonization, marked the first time the island was incorporated as European territory, though it would be several decades before treaties were actually signed. However, no copies of these treaties exist. These Scottish triumphs, which left Cape Sable as the only major French holding in North America, did not last. Charles I's haste to make peace with France on the terms most beneficial to him meant the new North American gains would be bargained away in the Treaty of Saint-Germain-en-Laye, which established which European power held claim over the territories. The French quickly defeated the Scots at Baleine and established the first European settlements on Île Royale, at present-day Englishtown (1629) and St. Peter's (1630). These settlements lasted only one generation, until Nicolas Denys left in 1659. The island then had no European settlers for another fifty years, until those communities, along with Louisbourg, were re-established in 1713; from that point European settlement on the island was permanent. Île Royale Known as Île Royale ("Royal Island") to the French, the island also saw active settlement by France. After the French ceded their claims to Newfoundland and the Acadian mainland to the British by the Treaty of Utrecht in 1713, the French relocated the population of Plaisance, Newfoundland, to Île Royale and the French garrison was established in the central eastern part at Sainte Anne. As the harbour at Sainte Anne experienced icing problems, it was decided to build a much larger fortification at Louisbourg to improve defences at the entrance to the Gulf of Saint Lawrence and to defend France's fishing fleet on the Grand Banks. The French also built the Louisbourg Lighthouse in 1734, the first lighthouse in Canada and one of the first in North America. In addition to Cape Breton Island, the French colony of Île Royale also included Île Saint-Jean, today called Prince Edward Island, and Les Îles-de-la-Madeleine. Seven Years' War Louisbourg itself was one of the most important commercial and military centres in New France. Louisbourg was captured by New Englanders with British naval assistance in the Siege of Louisbourg (1745) and by British forces in 1758. The French population of Île Royale was deported to France after each siege. While French settlers returned to their homes in Île Royale after the Treaty of Aix-la-Chapelle was signed in 1748, the fortress was demolished after the second siege in 1758. Île Royale remained formally part of New France until it was ceded to Great Britain by the Treaty of Paris in 1763.
It was then merged with the adjacent British colony of Nova Scotia (present-day peninsular Nova Scotia and New Brunswick). Acadians who had been expelled from Nova Scotia and Île Royale were permitted to settle in Cape Breton beginning in 1764, and established communities in northwestern Cape Breton, near Chéticamp, and southern Cape Breton, on and near Isle Madame. Some of the first British-sanctioned settlers on the island following the Seven Years' War were Irish, although upon settlement they merged with local French communities to form a culture rich in music and tradition. From 1763 to 1784, the island was administratively part of the colony of Nova Scotia and was governed from Halifax. The first permanently settled Scottish community on Cape Breton Island was Judique, settled in 1775 by Michael Mor MacDonald. He spent his first winter using his upside-down boat for shelter, which is reflected in the architecture of the village's Community Centre. He composed a song about the area called "O 's àlainn an t-àite", or "O, Fair is the Place." American Revolution During the American Revolution, on 1 November 1776, John Paul Jones, the father of the American Navy, set sail in command of Alfred to free hundreds of American prisoners working in the area's coal mines. Although winter conditions prevented the freeing of the prisoners, the mission did result in the capture of Mellish, a vessel carrying a vital supply of winter clothing intended for John Burgoyne's troops in Canada. Major Timothy Hierlihy and his regiment on board HMS Hope worked in and protected the coal mines at Sydney, Cape Breton, from privateer attacks. Sydney, Cape Breton, provided a vital supply of coal for Halifax throughout the war. The British began developing the mining site at Sydney Mines in 1777. On 14 May 1778, Major Hierlihy arrived at Cape Breton. While there, Hierlihy reported that he "beat off many piratical attacks, killed some and took other prisoners." A few years into the war, in 1781, there was also a naval engagement between French ships and a British convoy off Sydney, Nova Scotia, near Spanish River, Cape Breton. The French ships, fighting on the American side, had been re-coaling there and defeated the convoy. Six French and 17 British sailors were killed, with many more wounded. Colony of Cape Breton In 1784, Britain split the colony of Nova Scotia into three separate colonies: New Brunswick, Cape Breton Island, and present-day peninsular Nova Scotia, in addition to the adjacent colonies of St. John's Island (renamed Prince Edward Island in 1798) and Newfoundland. The colony of Cape Breton Island had its capital at Sydney on its namesake harbour fronting on Spanish Bay and the Cabot Strait. Its first Lieutenant-Governor was Joseph Frederick Wallet DesBarres (1784–1787) and his successor was William Macarmick (1787). A number of United Empire Loyalists emigrated to the Canadian colonies, including Cape Breton. David Mathews, the former Mayor of New York City during the American Revolution, emigrated with his family to Cape Breton in 1783. He succeeded Macarmick as head of the colony and served from 1795 to 1798. From 1799 to 1807, the military commandant was John Despard, brother of Edward. An order forbidding the granting of land in Cape Breton, issued in 1763, was removed in 1784. The mineral rights to the island were given over to the Duke of York by an order-in-council.
The British government had intended that the Crown take over the operation of the mines when Cape Breton was made a colony, but this was never done, probably because of the rehabilitation cost of the mines. The mines were in a neglected state, caused by careless operations dating back at least to the time of the final fall of Louisbourg in 1758. Large-scale shipbuilding began in the 1790s, beginning with schooners for local trade, moving in the 1820s to larger brigs and brigantines, mostly built for British ship owners. Shipbuilding peaked in the 1850s, marked in 1851 by the full-rigged ship Lord Clarendon, which was the largest wooden ship ever built in Cape Breton. Merger with Nova Scotia In 1820, the colony of Cape Breton Island was merged for the second time with Nova Scotia. This development is one of the factors which led to large-scale industrial development in the Sydney Coal Field of eastern Cape Breton County. By the late 19th century, as a result of the faster shipping, expanding fishery and industrialization of the island, exchanges of people between the island of Newfoundland and Cape Breton increased, beginning a cultural exchange that continues to this day. The 1920s were some of the most violent times in Cape Breton. They were marked by several severe labour disputes. The famous murder of William Davis by strike breakers, and the seizing of the New Waterford power plant by striking miners led to a major union sentiment that persists to this day in some circles. William Davis Miners' Memorial Day continues to be celebrated in coal mining towns to commemorate the deaths of miners at the hands of the coal companies. 20th century The turn of the 20th century saw Cape Breton Island at the forefront of scientific achievement with the now-famous activities launched by inventors Alexander Graham Bell and Guglielmo Marconi. Following his successful invention of the telephone and being relatively wealthy, Bell acquired land near Baddeck in 1885. He chose the land, which he named Beinn Bhreagh, largely due to its resemblance to his early surroundings in Scotland. He established a summer estate complete with research laboratories, working with deaf people including Helen Keller, and continued to invent. Baddeck would be the site of his experiments with hydrofoil technologies as well as the Aerial Experiment Association, financed by his wife Mabel Gardiner Hubbard. These efforts resulted in the first powered flight in Canada when the AEA Silver Dart took off from the ice-covered waters of Bras d'Or Lake. Bell also built the forerunner to the iron lung and experimented with breeding sheep. Marconi's contributions to Cape Breton Island were also quite significant, as he used the island's geography to his advantage in transmitting the first North American trans-Atlantic radio message from a station constructed at Table Head in Glace Bay to a receiving station at Poldhu in Cornwall, England. Marconi's pioneering work in Cape Breton marked the beginning of modern radio technology. Marconi's station at Marconi Towers, on the outskirts of Glace Bay, became the chief communication centre for the Royal Canadian Navy in World War I through to the early years of World War II. Promotions for tourism beginning in the 1950s recognized the importance of the Scottish culture to the province, as the provincial government started encouraging the use of Gaelic once again. 
The establishment of funding for the Gaelic College of Celtic Arts and Crafts and formal Gaelic language courses in public schools are intended to address the near-loss of this culture to assimilation into Anglophone Canadian culture. In the 1960s, the Fortress of Louisbourg was partially reconstructed by Parks Canada, using the labour of unemployed coal miners. Since 2009, this National Historic Site of Canada has attracted an average of 90,000 visitors per year. Geography The irregularly shaped, roughly rectangular island is about 100 km wide and 150 km long. It lies in the southeastern extremity of the Gulf of St. Lawrence. Cape Breton is separated from the Nova Scotia peninsula by the very deep Strait of Canso. The island is joined to the mainland by the Canso Causeway. Cape Breton Island is composed of rocky shores, rolling farmland, glacial valleys, barren headlands, highlands, woods and plateaus. Geology The island is characterized by a number of elevations of ancient crystalline and metamorphic rock rising up from the south to the north, and contrasted with eroded lowlands. The bedrock consists of blocks that developed in different places around the globe, at different times, and were then fused together by plate tectonics. Cape Breton is formed from three terranes. These are fragments of the earth's crust formed on a tectonic plate and attached by accretion or suture to crust lying on another plate. Each of these has its own distinctive geologic history, which is different from that of the surrounding areas. The southern half of the island formed from the Avalon terrane, which was once a microcontinent in the Paleozoic era. It is made up of volcanic rock that formed near what is now called Africa. Most of the northern half of the island is on the Bras d'Or terrane (part of the Ganderia terrane). It contains volcanic and sedimentary rock formed off the coast of what is now South America. The third terrane is the relatively small Blair River inlier on the far northwestern tip. It contains the oldest rock in the Maritimes, formed up to 1.6 billion years ago. These rocks, which can be seen in the Polletts Cove - Aspy Fault Wilderness Area north of Pleasant Bay, are likely part of the Canadian Shield, a large area of Precambrian igneous and metamorphic rock that forms the core of the North American continent. The Avalon and Bras d'Or terranes were pushed together about 500 million years ago when the supercontinent Gondwana was formed. The Blair River inlier was sandwiched in between the two when Laurussia was formed 450–360 million years ago, at which time the land lay in the tropics. This collision also formed the Appalachian Mountains. Associated rifting and faulting is now visible as the canyons of the Cape Breton Highlands. Then, during the Carboniferous period, the area was flooded, which created sedimentary rock layers such as sandstone, shale, gypsum, and conglomerate. Later, most of the island was covered by tropical forest, which eventually formed coal deposits. Much later, the land was shaped by repeated ice ages, which left striations, till, and U-shaped valleys, and carved Bras d'Or Lake from the bedrock. Examples of U-shaped valleys are those of the Chéticamp, Grande Anse, and Clyburn River valleys. Other valleys have been eroded by water, forming V-shaped valleys and canyons. Cape Breton has many fault lines but few earthquakes. Since the North American continent is moving westward, earthquakes tend to occur on the western edge of the continent.
Climate The warm-summer humid continental climate is moderated by the proximity of the cold, oftentimes polar Labrador Current and its warmer counterpart the Gulf Stream, both being dominant currents in the North Atlantic Ocean. Ecology Lowlands There are lowland areas along the western shore, around Lake Ainslie, the Bras d'Or watershed, Boularderie Island, and the Sydney coalfield. They include salt marshes, coastal beaches, and freshwater wetlands. Starting in the 1800s, many areas were cleared for farming or timber. Many farms were abandoned from the 1920s to the 1950s with fields being reclaimed by white spruce, red maple, white birch, and balsam fir. Higher slopes are dominated by yellow birch and sugar maple. In sheltered areas with sun and drainage, Acadian forest is found. Wetter areas have tamarack and black spruce. The weather station at Ingonish records more rain than anywhere else in Nova Scotia. Behind barrier beaches and dunes at Aspy Bay are salt marshes. The Aspy, Clyburn, and Ingonish rivers have all created floodplains which support populations of black ash, fiddlehead fern, swamp loosestrife, swamp milkweed, southern twayblade, and bloodroot. Red sandstone and white gypsum cliffs can be observed throughout this area. The bedrock is Carboniferous sedimentary rock with limestone, shale, and sandstone. Many fluvial remains from glaciation are found here. Mining has been ongoing for centuries, and more than 500 mine openings can be found, mainly in the east. Karst topography is found in Dingwall, South Harbour, Plaster Provincial Park, along the Margaree and Middle Rivers, and along the north shore of Lake Ainslie. The presence of gypsum and limestone increases soil pH and produces some rich wetlands which support giant spear, tufted fen, and other mosses, as well as vascular plants like sedges. Cape Breton Hills This ecosystem is spread throughout Cape Breton and is defined as hills and slopes 150–300 m above sea level, typically covered with Acadian forest. It includes North Mountain, Kellys Mountain, and East Bay Hills. Forests in this area were cleared for timber and agriculture and are now a mosaic of habitats depending on the local terrain, soils and microclimate. Typical species include ironwood, white ash, beech, sugar maple, red maple, and yellow birch. The understory can include striped maple, beaked hazelnut, fly honeysuckle, club mosses and ferns. Ephemerals, such as Dutchman's breeches and spring beauty, are visible in the spring. In ravines, shade-tolerant trees like hemlock, white pine, and red spruce are found. Less well-drained areas are forested with balsam fir and black spruce. Highlands and the Northern Plateau The Highlands comprise a tableland in the northern portions of Inverness and Victoria counties. An extension of the Appalachian mountain chain, the plateau averages 350 metres in elevation at its edges and rises to more than 500 metres at the centre. The area has broad, gently rolling hills bisected with deep valleys and steep-walled canyons. A majority of the land is a taiga of balsam fir, with some white birch, white spruce, mountain ash, and heart-leaf birch. The northern and western edges of the plateau, particularly at high elevations, resemble arctic tundra. Trees 30–90 cm high, overgrown with reindeer lichens, can be 150 years old. At very high elevations some areas are exposed bedrock without any vegetation apart from Cladonia lichens. There are many barrens, or heaths, dominated by bushy species of the Ericaceae family.
Spruce, killed by spruce budworm in the late 1970s, has reestablished at lower elevations, but not at higher elevations due to moose browsing. Decomposition is slow, leaving thick layers of plant litter. Ground cover includes wood aster, twinflower, liverworts, wood sorrel, bluebead lily, goldthread, various ferns, and lily-of-the-valley, with bryophytes and large-leaved goldenrod at higher elevations. The understory can include striped maple, mountain ash, ferns, and mountain maple. Near water, bog birch, alder, and mountain ash are found. There are many open wetlands populated with stunted tamarack and black spruce. Poor drainage has led to the formation of peatlands which can support tufted clubrush, Bartram's serviceberry, coastal sedge, and bakeapple. Cape Breton coastal The eastern shore is unique in that, while not at a high elevation, it has a cool climate with much rain and fog, strong winds, and low summer temperatures. It is dominated by a boreal forest of black spruce and balsam fir. Sheltered areas support tolerant hardwoods such as white birch and red maple. Many salt marshes, fens, and bogs are found there. There are many beaches on the highly crenelated coastline. Unlike elsewhere on the island, these are rocky and support plants different from those of sandy beaches. The coast provides habitat for common coast bird species like common eider, black-legged kittiwake, black guillemot, whimbrel, and great cormorant. Hydrology Land is drained into the Gulf of Saint Lawrence via the rivers Aspy, Sydney, Mira, Framboise, Margaree, and Chéticamp. The largest freshwater lake is Lake Ainslie. Government Local government on the island is provided by the Cape Breton Regional Municipality, the Municipality of the County of Inverness, the Municipality of the County of Richmond, and the Municipality of the County of Victoria, along with the Town of Port Hawkesbury. The island has five Miꞌkmaq Indian reserves: Eskasoni (the largest in population and land area), Membertou, Wagmatcook, Waycobah, and Potlotek. Demographics The island's residents can be grouped into five main cultures: Scottish, Mi'kmaq, Acadian, Irish, and English, with the associated languages Scottish Gaelic, Mi'kmaq, French, and English. English is now the primary language, including a locally distinctive Cape Breton accent, while Mi'kmaq, Scottish Gaelic and Acadian French are still spoken in some communities. Later migrations of Black Loyalists, Italians, and Eastern Europeans mostly settled in the island's eastern part around the industrial Cape Breton region. Cape Breton Island's population has been in decline for two decades, with an increasing exodus in recent years due to economic conditions. Population trend Religious groups Statistics Canada in 2001 reported a "religion" total of 145,525 for Cape Breton, including 5,245 with "no religious affiliation." Major categories included: Roman Catholic: 96,260 (includes Eastern Catholic, Polish National Catholic Church, Old Catholic) Protestant: 42,390 Christian, not included elsewhere: 580 Orthodox: 395 Jewish: 250 Muslim: 145 Economy Much of the recent economic history of Cape Breton Island can be tied to the coal industry. The island has two major coal deposits. The Sydney Coal Field, in the southeastern part of the island along the Atlantic Ocean, drove the Industrial Cape Breton economy throughout the 19th and 20th centuries; until after World War II, its industries were the largest private employers in Canada. The Inverness Coal Field in the western part of the island along the Gulf of St.
Lawrence is significantly smaller but hosted several mines. Sydney has traditionally been the main port, with facilities in a large, sheltered, natural harbour. It is the island's largest commercial centre and home to the Cape Breton Post daily newspaper, as well as one television station, CJCB-TV (CTV), and several radio stations. The Marine Atlantic terminal at North Sydney is the terminal for large ferries traveling to Channel-Port aux Basques and seasonally to Argentia, both on the island of Newfoundland. Point Edward on the west side of Sydney Harbour is the location of Sydport, a former navy base now converted to commercial use. The Canadian Coast Guard College is nearby at Westmount. Petroleum, bulk coal, and cruise ship facilities are also in Sydney Harbour. Glace Bay, the second largest urban community in population, was the island's main coal mining centre until its last mine closed in the 1980s. Glace Bay was the hub of the Sydney & Louisburg Railway and a major fishing port. At one time, Glace Bay was known as the largest town in Nova Scotia, based on population. Port Hawkesbury has risen to prominence since the completion of the Canso Causeway and Canso Canal created an artificial deep-water port, allowing extensive petrochemical, pulp and paper, and gypsum handling facilities to be established. The Strait of Canso is completely navigable to Seawaymax vessels, and Port Hawkesbury is open to the deepest-draught vessels on the world's oceans. Large marine vessels may also enter Bras d'Or Lake through the Great Bras d'Or channel, and small craft can use the Little Bras d'Or channel or St. Peters Canal. While commercial shipping no longer uses the St. Peters Canal, it remains an important waterway for recreational vessels. The industrial Cape Breton area faced several challenges with the closure of the Cape Breton Development Corporation's (DEVCO) coal mines and the Sydney Steel Corporation's (SYSCO) steel mill. In recent years, the Island's residents have tried to diversify the area economy by investing in tourism developments, call centres, and small businesses, as well as manufacturing ventures in fields such as auto parts, pharmaceuticals, and window glazings. While the Cape Breton Regional Municipality is in transition from an industrial to a service-based economy, the rest of Cape Breton Island outside the industrial area surrounding Sydney-Glace Bay has been more stable, with a mixture of fishing, forestry, small-scale agriculture, and tourism. Tourism in particular has grown throughout the post-Second World War era, especially vehicle-based touring, which was furthered by the creation of the Cabot Trail scenic drive. The scenery of the island is rivalled in northeastern North America only by Newfoundland, and Cape Breton Island tourism marketing places a heavy emphasis on its Scottish Gaelic heritage through events such as the Celtic Colours Festival, held each October, as well as promotions through the Gaelic College of Celtic Arts and Crafts. Whale-watching is a popular attraction for tourists. Whale-watching cruises are operated by vendors from Baddeck to Chéticamp. The most popular species of whale found in Cape Breton's waters is the pilot whale. The Cabot Trail is a scenic road circuit around and over the Cape Breton Highlands with spectacular coastal vistas; over 400,000 visitors drive the Cabot Trail each summer and fall. Coupled with the Fortress of Louisbourg, it has driven the growth of the tourism industry on the island in recent decades.
The Condé Nast travel guide has rated Cape Breton Island as one of the world's best island destinations. Transport The island's primary east–west road is Highway 105, the Trans-Canada Highway, although Trunk 4 is also heavily used. Highway 125 is an important arterial route around Sydney Harbour in the Cape Breton Regional Municipality. The Cabot Trail, circling the Cape Breton Highlands, and Trunk 19, along the island's western coast, are important secondary roads. The Cape Breton and Central Nova Scotia Railway maintains railway connections between the port of Sydney and the Canadian National Railway in Truro. Cape Breton Island is served by several airports; the largest, the JA Douglas McCurdy Sydney Airport, is situated on Trunk 4 between the communities of Sydney and Glace Bay, and there are smaller airports at Port Hawkesbury, Margaree, and Baddeck. Culture Language Gaelic speakers in Cape Breton, as elsewhere in Nova Scotia, constituted a large proportion of the local population from the 18th century on. They brought with them a common culture of poetry, traditional songs and tales, music and dance, and used this to develop distinctive local traditions. Most Gaelic settlement in Nova Scotia happened between 1770 and 1840, with probably over 50,000 Gaelic speakers emigrating from the Scottish Highlands and the Hebrides to Nova Scotia and Prince Edward Island. Such emigration was facilitated by changes in Gaelic society and the economy, with sharp increases in rents, confiscation of land and disruption of local customs and rights. In Nova Scotia, poetry and song in Gaelic flourished. George Emmerson argues that an "ancient and rich" tradition of storytelling, song, and Gaelic poetry emerged during the 18th century and was transplanted from the Highlands of Scotland to Nova Scotia, where the language similarly took root. The majority of those settling in Nova Scotia from the end of the 18th century through to the middle of the next were from the Scottish Highlands, rather than the Lowlands, making the Highland tradition's impact more profound on the region. Gaelic settlement in Cape Breton began in earnest in the early nineteenth century. The Gaelic language became dominant from Colchester County in the west of Nova Scotia into Cape Breton County in the east. It was reinforced in Cape Breton in the first half of the 19th century with an influx of Highland Scots numbering approximately 50,000 as a result of the Highland Clearances. From 1892 to 1904, Jonathon MacKinnon published a Scottish Gaelic-language biweekly newspaper in Sydney, Nova Scotia. During the 1920s, several Scottish Gaelic-language newspapers were printed in Sydney for distribution primarily on Cape Breton, including one that carried Gaelic-language lessons, a United Church-affiliated paper, and a later endeavor of MacKinnon's. Gaelic speakers, however, tended to be poor; they were largely illiterate and had little access to education. This situation persisted into the early days of the twentieth century. In 1921 Gaelic was approved as an optional subject in the curriculum of Nova Scotia, but few teachers could be found and children were discouraged from using the language in schools. By 1931 the number of Gaelic speakers in Nova Scotia had fallen to approximately 25,000, mostly in discrete pockets. In Cape Breton it was still a majority language, but the proportion was falling. Children were no longer being raised with Gaelic.
From 1939 on, attempts were made to strengthen its position in the public school system in Nova Scotia, but funding, official commitment and the availability of teachers continued to be a problem. By the 1950s the number of speakers was less than 7,000. The advent of multiculturalism in Canada in the 1960s meant that new educational opportunities became available, with a gradual strengthening of the language at secondary and tertiary level. At present several schools in Cape Breton offer Gaelic Studies and Gaelic language programs, and the language is taught at Cape Breton University. The 2016 Canadian Census shows that there are only 40 reported speakers of Gaelic as a mother tongue in Cape Breton. On the other hand, there are families and individuals who have recommenced intergenerational transmission. They include fluent speakers from Gaelic-speaking areas of Scotland and speakers who became fluent in Nova Scotia and who in some cases studied in Scotland. Other revitalization activities include adult education, community cultural events and publishing. Traditional music Cape Breton is well known for its traditional fiddle music, which was brought to North America by Scottish immigrants during the Highland Clearances. The traditional style has been well preserved in Cape Breton, and cèilidhs have become a popular attraction for tourists. Inverness County in particular has a heavy concentration of musical activity, with regular performances in communities such as Mabou and Judique. Judique is recognized as the 'Home of Celtic Music', featuring the Celtic Music Interpretive Centre. The traditional fiddle music of Cape Breton is studied by musicians around the world, and its global recognition continues to rise. Local performers who have received significant recognition outside of Cape Breton include Angus Chisholm; Buddy MacMaster; Joseph Cormier, the first Cape Breton fiddler to record an album made available in Europe (1974); Lee Cremo; Bruce Guthro; Natalie MacMaster; Ashley MacIsaac; The Rankin Family; Aselin Debison; Gordie Sampson; John Allan Cameron; and the Barra MacNeils. The Men of the Deeps are a male choral group of current and former miners from the industrial Cape Breton area. Film and television My Bloody Valentine: 1981 slasher film shot on location in Sydney Mines. The Bay Boy: 1984 semi-autobiographical drama film about growing up in Glace Bay. Margaret's Museum: 1995 drama film which tells the story of a young girl living in a coal mining town where the death of men from accidents in "the pit" (the mines) has become almost routine. Pit Pony: 1999 TV series about small-town life in Glace Bay in 1904. The plot line revolves around the lives of the families of the men and boys who work in the coal mines. Photo gallery See also Canadian Gaelic Cape Breton accent Cape Breton Labour Party Cape Breton Regional Municipality Provinces and territories of Canada Province of Cape Breton Island Sydney Tar Ponds Cape Breton Highlands National Park List of people from Cape Breton Notes References Further reading External links Cape Breton Island Official Travel Guide British North America Canadian Gaelic Former British colonies and protectorates in the Americas Geographic regions of Nova Scotia Islands of Nova Scotia
5725
https://en.wikipedia.org/wiki/Cthulhu%20Mythos
Cthulhu Mythos
The Cthulhu Mythos is a mythopoeia and a shared fictional universe, originating in the works of American horror writer H. P. Lovecraft. The term was coined by August Derleth, a contemporary correspondent and protégé of Lovecraft, to identify the settings, tropes, and lore that were employed by Lovecraft and his literary successors. The name "Cthulhu" derives from the central creature in Lovecraft's seminal short story "The Call of Cthulhu", first published in the pulp magazine Weird Tales in 1928. Richard L. Tierney, a writer who also wrote Mythos tales, later applied the term "Derleth Mythos" to distinguish Lovecraft's works from Derleth's later stories, which modify key tenets of the Mythos. Authors of Lovecraftian horror in particular frequently use elements of the Cthulhu Mythos. History In his essay "H. P. Lovecraft and the Cthulhu Mythos", Robert M. Price described two stages in the development of the Cthulhu Mythos. Price called the first stage the "Cthulhu Mythos proper". This stage was formulated during Lovecraft's lifetime and was subject to his guidance. The second stage was guided by August Derleth who, in addition to publishing Lovecraft's stories after his death, attempted to categorize and expand the Mythos. First stage An ongoing theme in Lovecraft's work is the complete irrelevance of mankind in the face of the cosmic horrors that apparently exist in the universe. Lovecraft made frequent references to the "Great Old Ones", a loose pantheon of ancient, powerful deities from space who once ruled the Earth and have since fallen into a deathlike sleep. While these monstrous deities were present in almost all of Lovecraft's published work (his second short story "Dagon", published in 1919, is considered the start of the Mythos), the first story to really expand the pantheon of Great Old Ones and its themes is "The Call of Cthulhu", which was published in 1928. Lovecraft broke with other pulp writers of the time by having his main characters' minds deteriorate when afforded a glimpse of what exists outside their perceived reality. He emphasized the point by stating in the opening sentence of the story that "The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents." Writer Dirk W. Mosig noted that Lovecraft was a "mechanistic materialist" who embraced the philosophy of cosmic indifferentism and believed in a purposeless, mechanical, and uncaring universe. Human beings, with their limited faculties, can never fully understand this universe, and the cognitive dissonance caused by this revelation leads to insanity, in his view. There have been attempts at categorizing this fictional group of beings. Phillip A. Schreffler argues that by carefully scrutinizing Lovecraft's writings, a workable framework emerges that outlines the entire "pantheon", from the unreachable "Outer Ones" (e.g., Azathoth, who occupies the centre of the universe) and "Great Old Ones" (e.g., Cthulhu, imprisoned on Earth in the sunken city of R'lyeh) to the lesser castes (the lowly slave shoggoths and the Mi-Go). David E. Schultz said Lovecraft never meant to create a canonical Mythos but rather intended his imaginary pantheon to serve merely as a background element. Lovecraft himself humorously referred to his Mythos as "Yog Sothothery" (Dirk W. Mosig coincidentally suggested the term Yog-Sothoth Cycle of Myth be substituted for Cthulhu Mythos). At times, Lovecraft even had to remind his readers that his Mythos creations were entirely fictional.
The view that there was no rigid structure is expounded upon by S. T. Joshi. Price, however, said Lovecraft's writings could at least be divided into categories, and identified three distinct themes: the "Dunsanian" (written in a style similar to that of Lord Dunsany), "Arkham" (occurring in Lovecraft's fictionalized New England setting), and "Cthulhu" (the cosmic tales) cycles. Writer Will Murray noted that while Lovecraft often used his fictional pantheon in the stories he ghostwrote for other authors, he reserved Arkham and its environs exclusively for those tales he wrote under his own name. Although the Mythos was not formalized or acknowledged between them, Lovecraft did correspond, meet in person, and share story elements with other contemporary writers including Clark Ashton Smith, Robert E. Howard, Robert Bloch, Frank Belknap Long, Henry Kuttner, Henry S. Whitehead, and Fritz Leiber, a group referred to as the "Lovecraft Circle." For example, Robert E. Howard's character Friedrich Von Junzt reads Lovecraft's Necronomicon in the short story "The Children of the Night" (1931), and in turn Lovecraft mentions Howard's Unaussprechlichen Kulten in the stories "Out of the Aeons" (1935) and "The Shadow Out of Time" (1936). Many of Howard's original unedited Conan stories also involve parts of the Cthulhu Mythos. Second stage Price denotes the second stage's commencement with August Derleth, with the principal difference between Lovecraft and Derleth being Derleth's use of hope and development of the idea that the Cthulhu Mythos essentially represented a struggle between good and evil. Derleth is credited with creating the "Elder Gods". Price said the basis for Derleth's system is found in Lovecraft: "Was Derleth's use of the rubric 'Elder Gods' so alien to Lovecraft's in At the Mountains of Madness? Perhaps not. In fact, this very story, along with some hints from "The Shadow over Innsmouth", provides the key to the origin of the 'Derleth Mythos'. For in At the Mountains of Madness is shown the history of a conflict between interstellar races, first among them the Elder Ones and the Cthulhu-spawn." Derleth said Lovecraft wished for other authors to actively write about the Mythos as opposed to it being a discrete plot device within Lovecraft's own stories. Derleth expanded the boundaries of the Mythos by including any passing reference to another author's story elements by Lovecraft as part of the genre. Just as Lovecraft made passing reference to Clark Ashton Smith's Book of Eibon, Derleth in turn added Smith's Ubbo-Sathla to the Mythos. Derleth also attempted to connect the deities of the Mythos to the four elements (air, earth, fire, and water), creating new beings representative of certain elements in order to legitimize his system of classification. He created "Cthugha" as a sort of fire elemental when a fan, Francis Towner Laney, complained that he had neglected to include the element in his schema. Laney, the editor of The Acolyte, had categorized the Mythos in an essay that first appeared in the Winter 1942 issue of the magazine. Impressed by the glossary, Derleth asked Laney to rewrite it for publication in the Arkham House collection Beyond the Wall of Sleep (1943). Laney's essay ("The Cthulhu Mythos") was later republished in Crypt of Cthulhu #32 (1985). In applying the elemental theory to beings that function on a cosmic scale (e.g., Yog-Sothoth), some authors created a fifth element that they termed aethyr.
Fictional cults A number of fictional cults appear in the Cthulhu Mythos, the loosely connected series of horror stories written by Lovecraft and other writers inspired by his creations. Many of these cults serve the Outer God Nyarlathotep, the Crawling Chaos, a protean creature that appears in myriad guises. Other cults are dedicated to the cause of the Great Old Ones, a group of powerful alien beings currently imprisoned or otherwise resting in a deathlike sleep. These fictional cults have in some ways taken on a life of their own beyond the pages of Lovecraft's works. According to author John Engle, "The very real world of esoteric magical and occult practices has adopted Lovecraft and his works into its canon, which have informed the ritual practices, or even formed the bedrock, of certain cabals and magical circles". Significance The Cthulhu Mythos of H. P. Lovecraft is considered to have been highly influential on the speculative fiction genre. It has been called "the official fictional religion of fantasy, science fiction, and horror, a grab bag for writers in need of unthinkably vast, and unthinkably indifferent, eldritch entities". See also References Further reading Dziemianowicz, Stefan. "The Cthulhu Mythos: Chronicle of a Controversy". In The Lovecraft Society of New England (ed.) Necronomicon: The Cthulhu Mythos Convention 1993 (convention book). Boston: NecronomiCon, 1993, pp. 25–31 External links Lovecraft Archive The Virtual World of H. P. Lovecraft, a mapping of Lovecraft's imaginary version of New England Lovecraft: Fear of the Unknown – full documentary at the Snagfilms company YouTube channel Schema on Lovecraft's »The Call of Cthulhu« and the Cthulhu Mythos American novels adapted into films American novels adapted into plays Fictional universes Horror genres Mythopoeia Novels adapted into video games Shared universes
5726
https://en.wikipedia.org/wiki/Crane%20shot
Crane shot
In filmmaking and video production, a crane shot is a shot taken by a camera on a moving crane or jib. Filmmaker D. W. Griffith created the first crane for his 1916 epic film Intolerance, with famed special effects pioneer Eiji Tsuburaya later constructing the first iron camera crane, which is still adapted worldwide today. Most cranes accommodate both the camera and an operator, but some can be moved by remote control. Crane shots are often found in what are supposed to be emotional or suspenseful scenes. Examples of this technique include the shots taken by remote cranes in the car-chase sequence of the 1985 film To Live and Die in L.A. Some filmmakers place the camera on a boom arm simply to make it easier to move around between ordinary set-ups. History D. W. Griffith designed the first camera crane for his 1916 epic film Intolerance. His crane stood 140 feet tall and rode on six four-wheeled railroad trucks. In 1929, future special effects pioneer Eiji Tsuburaya constructed a smaller replica of Griffith's wooden camera crane without blueprints or manuals. Although his wooden crane collapsed shortly after its completion, Tsuburaya created the first-ever iron shooting crane in October 1934, and an adaptation of this crane is still used worldwide today. Camera crane types Camera cranes may be small, medium, or large, depending on the load capacity and length of the loading arm. Historically, the first camera cranes lifted the camera together with the operator, and sometimes an assistant. The range of motion of the boom was restricted because of the high load capacity and the need to ensure operator safety. In recent years, remotely operated camera cranes have become popular. These carry only a film or television camera on the boom, with no operator riding the crane, and allow shooting from difficult positions; the small load capacity makes it possible to achieve a long boom reach and relative freedom of movement. The operator controls the camera from the ground through a motorized panning head, using a remote control and watching the image on a video monitor. A separate category consists of telescopic camera cranes. These devices allow setting an arbitrary trajectory of the camera, eliminating the characteristic radial displacement of a jib crane that comes with traditional spanning shots. Large camera cranes are almost indistinguishable from the usual boom-type cranes, with the exception of special equipment for smoothly moving the boom and controlling noise. Small camera cranes and crane-trucks have a lightweight construction, often without a mechanical drive. Movement is controlled manually, with the load balanced against a counterweight to make handling easier. To improve usability and the repeatability of crane moves across takes, the axes of rotation are fitted with graduated scales and pointers. In some cases, the camera crane is mounted on a dolly for even greater camera mobility. Such devices are called crane trolleys. In modern filmmaking, robotic cranes use multiple actuators to repeat camera movements with high accuracy for trick photography. Such devices are commonly referred to as motion control rigs. Manufacturers The major supplier of cranes in the cinema of the United States throughout the 1940s, 1950s, and 1960s was the Chapman Company (later Chapman-Leonard of North Hollywood), supplanted by dozens of similar manufacturers around the world.
The traditional design provided seats for both the director and the camera operator, and sometimes a third seat for the cinematographer as well. Large weights on the back of the crane compensate for the weight of the people riding the crane and must be adjusted carefully to avoid the possibility of accidents. During the 1960s, the tallest crane was the Chapman Titan crane, a massive design over 20 feet high that won an Academy Scientific & Engineering award. During the last few years, camera cranes have been miniaturized and costs have dropped so dramatically that most aspiring filmmakers have access to these tools. What was once a "Hollywood" effect is now available for under $400. Manufacturers of camera cranes include ABC-Products, Cambo, Filmotechnic, Polecam, Panther and Matthews Studio Equipment, Sevenoak, and Newton Nordic. Camera crane technique Most such cranes were manually operated, requiring an experienced boom operator who knew how to vertically raise, lower, and "crab" the camera alongside actors while the crane platform rolled on separate tracks. The crane operator and camera operator had to precisely coordinate their moves so that focus, pan, and camera position all started and stopped at the same time, requiring great skill and rehearsal. On the back of the crane is a counterweight, which allows the crane to move smoothly with minimal effort while in motion. Notable usage D. W. Griffith's Intolerance (1916) featured the first-ever crane shot in a film. Atsuo Tomioka's 1935 film The Chorus of a Million featured the first iron camera crane, which was created and employed in the film in 1934 by Eiji Tsuburaya. Leni Riefenstahl had a cameraman shoot a half-circle pan shot from a crane for the 1935 Nazi propaganda film Triumph of the Will. A crane shot was used in Orson Welles' 1941 film Citizen Kane. Welles also used a crane camera during the iconic opening of Touch of Evil (1958). The camera, perched on a Chapman crane, begins on a close-up of a ticking time bomb and ends three-plus minutes later with a blinding explosion. The Western High Noon (1952) had a famous crane shot. The shot backs up and rises, in order to show Marshal Will Kane totally alone and isolated on the street. Mikhail Kalatozov's 1964 film I Am Cuba contains two of the most astonishing tracking shots ever attempted. In his film Sympathy for the Devil, Jean-Luc Godard used a crane for almost every shot in the movie, giving each scene a 360-degree tour of the tableau Godard presented to the viewer. In the final scene, he even shows the crane he was able to rent on his limited budget by including it in the scene. This was one of his traits as a filmmaker — showing off his budget — as he did with Brigitte Bardot in Le Mépris (Contempt). The closing take of Richard Attenborough's film version of Oh! What a Lovely War begins with a single war grave, gradually pulling back to reveal hundreds of identical crosses. The 1980 comedy-drama film The Stunt Man featured a crane throughout the production of the fictitious film-within-a-film (with the director played by Peter O'Toole). The television comedy Second City Television (SCTV) uses the concept of the crane shot as comedic material. After using a crane shot in one of the first NBC-produced episodes, the network complained about the exorbitant cost of renting the crane.
SCTV writers responded by making the "crane shot" a ubiquitous symbol of production excess while also lampooning network executives who care nothing about artistic vision and everything about the bottom line. At the end of the second season, an inebriated Johnny LaRue (John Candy) is given his very own crane by Santa Claus, implying he would be able to have a crane shot whenever he wanted it. Director Dario Argento included an extensive scene in Tenebrae in which the camera seemingly crawls over and up the walls of a house, all in one seamless take. Due to its length, the tracking shot ended up being the production's most difficult and complex part to complete. The 2004 Johnnie To film Breaking News opens with an elaborate seven-minute single-take crane shot. Director Dennis Dugan frequently uses top-to-bottom crane shots in his comedy films. A live panoramic interior crane shot opens The Late Late Show with James Corden, following a pre-recorded exterior aerial shot. Jeopardy! uses a crane to pan the camera over the audience. See also Technocrane, a telescopic camera crane U-crane, a gyro-stabilized car-mounted telescopic camera crane References Articles containing video clips Cinematic techniques Film and video technology Cranes (machines)
5729
https://en.wikipedia.org/wiki/Chariots%20of%20Fire
Chariots of Fire
Chariots of Fire is a 1981 British historical sports drama film directed by Hugh Hudson, written by Colin Welland and produced by David Puttnam. It is based on the true story of two British athletes in the 1924 Olympics: Eric Liddell, a devout Scottish Christian who runs for the glory of God, and Harold Abrahams, an English Jew who runs to overcome prejudice. Ben Cross and Ian Charleson star as Abrahams and Liddell, alongside Nigel Havers, Ian Holm, John Gielgud, Lindsay Anderson, Cheryl Campbell, Alice Krige, Brad Davis and Dennis Christopher in supporting roles. Kenneth Branagh makes his debut in a minor role. Chariots of Fire was nominated for seven Academy Awards and won four, including Best Picture, Best Original Screenplay and Best Original Score for Vangelis' electronic theme tune. At the 35th British Academy Film Awards, the film was nominated in 11 categories and won in three, including Best Film. It is ranked 19th in the British Film Institute's list of Top 100 British films. The film's title was inspired by the line "Bring me my Chariot of fire!" from the William Blake poem adapted into the British hymn and unofficial English anthem "Jerusalem"; the hymn is heard at the end of the film. The original phrase "chariot(s) of fire" is from 2 Kings 2:11 and 6:17 in the Bible. Plot During a 1978 funeral service in London in honour of the life of Harold Abrahams, headed by his former colleague Lord Andrew Lindsay, there is a flashback to when he was young and in a group of athletes running along a beach. In 1919, Harold Abrahams enters the University of Cambridge, where he experiences antisemitism from the staff but enjoys participating in the Gilbert and Sullivan club. He becomes the first person ever to complete the Trinity Great Court Run, running around the college courtyard in the time it takes for the clock to strike 12, and achieves an undefeated string of victories in various national running competitions. Although focused on his running, he falls in love with Sybil Gordon, a leading Gilbert and Sullivan soprano. Eric Liddell, born in China to Scottish missionary parents, is in Scotland. His devout sister Jennie disapproves of Liddell's plans to pursue competitive running. Still, Liddell sees running as a way of glorifying God before returning to China to work as a missionary. When they first race against each other, Liddell beats Abrahams. Abrahams takes it poorly, but Sam Mussabini, a professional trainer he had approached earlier, offers to take him on to improve his technique. This attracts criticism from the Cambridge college masters, who allege it is not gentlemanly for an amateur to "play the tradesman" by employing a professional coach. Abrahams dismisses this concern, interpreting it as cover for antisemitic and class-based prejudice. When Liddell accidentally misses a church prayer meeting because of his running, Jennie upbraids him and accuses him of no longer caring about God. Eric tells her that though he intends to return eventually to the China mission, he feels divinely inspired when running and that not to run would be to dishonour God. After years of training and racing, the two athletes are accepted to represent Great Britain in the 1924 Olympics in Paris. Also accepted are Abrahams' Cambridge friends, Andrew Lindsay, Aubrey Montague, and Henry Stallard. While boarding the boat to France for the Olympics, Liddell discovers the heats for his 100-metre race will be on a Sunday. 
Despite intense pressure from the Prince of Wales and the British Olympic Committee, he refuses to run the race because his Christian convictions prevent him from running on the Lord's Day. A solution is found thanks to Liddell's teammate Lindsay, who, having already won a silver medal in the 400 metres hurdles, offers to give his place in the 400-metre race on the following Thursday to Liddell, who gratefully accepts. Liddell's religious convictions in the face of national athletic pride make headlines around the world; he delivers a sermon at the Paris Church of Scotland that Sunday, and quotes from Isaiah 40. Abrahams is badly beaten by the heavily favoured United States runners in the 200 metre race. He knows his last chance for a medal will be the 100 metres. He competes in the race and wins. His coach Mussabini, who was barred from the stadium, is overcome that the years of dedication and training have paid off with an Olympic gold medal. Now Abrahams can get on with his life and reunite with his girlfriend Sybil, whom he had neglected for the sake of running. Before Liddell's race, the American coach remarks dismissively to his runners that Liddell has little chance of doing well in his now far longer 400 metre race. But one of the American runners, Jackson Scholz, hands Liddell a note of support quoting scripture. Liddell defeats the American favourites and wins the gold medal. The British team returns home triumphant. A textual epilogue reveals that Abrahams married Sybil and became the elder statesman of British athletics while Liddell went on to do missionary work and was mourned by all of Scotland following his death in Japanese-occupied China. Cast Other actors in smaller roles include John Young as Eric and Jennie's father Reverend J.D. Liddell, Yvonne Gilan as their mother Mary, Benny Young as their older brother Rob, Yves Beneyton as French runner Géo André, Philip O'Brien as American coach George Collins, Patrick Doyle as Jimmie, and Ruby Wax as Bunty. Kenneth Branagh, who worked as a set gofer, appears as an extra in the Cambridge Society Day sequence. Stephen Fry has a likewise uncredited role as a Gilbert-and-Sullivan Club singer. Production Screenplay Producer David Puttnam was looking for a story in the mould of A Man for All Seasons (1966), regarding someone who follows his conscience, and felt that sport provided clear situations in this sense. He discovered Eric Liddell's story by accident in 1977, when he happened upon An Approved History of the Olympic Games, a reference book on the Olympics, while housebound from the flu, in a rented house in Malibu. Screenwriter Colin Welland, commissioned by Puttnam, did an enormous amount of research for his Academy Award-winning script. Among other things, he took out advertisements in London newspapers seeking memories of the 1924 Olympics, went to the National Film Archives for pictures and footage of the 1924 Olympics, and interviewed everyone involved who was still alive. Welland just missed Abrahams, who died on 14 January 1978, but he did attend Abrahams' February 1978 memorial service, which inspired the present-day framing device of the film. Aubrey Montague's son saw Welland's newspaper ad and sent him copies of the letters his father had sent home – which gave Welland something to use as a narrative bridge in the film. Except for changes in the greetings of the letters from "Darling Mummy" to "Dear Mum" and the change from Oxford to Cambridge, all of the readings from Montague's letters are from the originals.
Welland's original script also featured, in addition to Eric Liddell and Harold Abrahams, a third protagonist, 1924 Olympic gold medallist Douglas Lowe, who was presented as a privileged aristocratic athlete. However, Lowe refused to have anything to do with the film, and his character was written out and replaced by the fictional character of Lord Andrew Lindsay. Initial financing towards development costs was provided by Goldcrest Films, who then sold the project to Mohamed Al-Fayed's Allied Stars, but kept a percentage of the profits. Ian Charleson wrote Eric Liddell's speech to the post-race workingmen's crowd at the Scotland v. Ireland races. Charleson, who had studied the Bible intensively in preparation for the role, told director Hugh Hudson that he didn't feel the portentous and sanctimonious scripted speech was either authentic or inspiring. Hudson and Welland allowed him to write words he personally found inspirational instead. Puttnam chose Hugh Hudson, a multiple award-winning advertising and documentary filmmaker who had never helmed a feature film, to direct Chariots of Fire. Hudson and Puttnam had known each other since the 1960s when Puttnam was an advertising executive and Hudson was making films for ad agencies. In 1977, Hudson had also been second-unit director on the Puttnam-produced film Midnight Express. Casting Director Hugh Hudson was determined to cast young, unknown actors in all the major roles of the film, and to back them up by using veterans like John Gielgud, Lindsay Anderson, and Ian Holm as their supporting cast. Hudson and producer David Puttnam did months of fruitless searching for the perfect actor to play Eric Liddell. They then saw Scottish stage actor Ian Charleson performing the role of Pierre in the Royal Shakespeare Company's production of Piaf, and knew immediately they had found their man. Unbeknownst to them, Charleson had heard about the film from his father, and desperately wanted to play the part, feeling it would "fit like a kid glove". Ben Cross, who plays Harold Abrahams, was discovered while playing Billy Flynn in Chicago. In addition to having a natural pugnaciousness, he had the desired ability to sing and play the piano. Cross was thrilled to be cast, and said he was moved to tears by the film's script. 20th Century-Fox, which put up half of the production budget in exchange for distribution rights outside of North America, insisted on having a couple of notable American names in the cast. Thus the small parts of the two American champion runners, Jackson Scholz and Charley Paddock, were cast with recent headliners: Brad Davis had recently starred in Midnight Express (also produced by Puttnam), and Dennis Christopher had recently starred, as a young bicycle racer, in the popular indie film Breaking Away. All of the actors portraying runners underwent an intensive three-month training regimen with renowned running coach Tom McNab. This training and isolation of the actors also created a strong bond and sense of camaraderie among them. Filming The beach scenes showing the athletes running towards the Carlton Hotel at Broadstairs, Kent, were shot in Scotland on West Sands, St Andrews next to the 18th hole of the Old Course at St Andrews Links. A plaque now commemorates the filming. The impact of these scenes (as the athletes run in slow motion to Vangelis's music) prompted Broadstairs town council to commemorate them with a seafront plaque. 
All of the Cambridge scenes were actually filmed at Hugh Hudson's alma mater Eton College, because Cambridge refused filming rights, fearing depictions of anti-Semitism. The Cambridge administration greatly regretted the decision after the film's enormous success. Liverpool Town Hall was the setting for the scenes depicting the British Embassy in Paris. The Colombes Olympic Stadium in Paris was represented by the Oval Sports Centre, Bebington, Merseyside. The nearby Woodside ferry terminal was used to represent the embarkation scenes set in Dover. The railway station scenes were filmed in York, using locomotives from the National Railway Museum. The filming of the Scotland–France international athletic meeting took place at Goldenacre Sports Ground, owned by George Heriot's School, while the Scotland–Ireland meeting was at the nearby Inverleith Sports Ground. The scene depicting a performance of The Mikado was filmed in the Royal Court Theatre, Liverpool, with members of the D'Oyly Carte Opera Company who were on tour. Editing The film was slightly altered for the U.S. audience. A brief scene depicting a pre-Olympics cricket game between Abrahams, Liddell, Montague, and the rest of the British track team appears shortly after the beginning of the original film. For the American audience, this brief scene was deleted. In the U.S., to avoid the initial G rating, which had been strongly associated with children's films and might have hindered box office sales, a different scene was used – one depicting Abrahams and Montague arriving at a Cambridge railway station and encountering two First World War veterans who use an obscenity – in order to be given a PG rating. An off-camera retort of "Win It For Israel", heard among the exhortations of Abrahams' fellow students before he takes on the challenge of the Great Court Run, was absent from the final cut distributed theatrically in the U.S., but can be heard in versions broadcast on such cable outlets as TCM. Soundtrack Although the film is a period piece, set in the 1920s, the Academy Award-winning original soundtrack composed by Vangelis uses a modern 1980s electronic sound, with a strong use of synthesizer and piano among other instruments. This was a departure from earlier period films, which employed sweeping orchestral instrumentals. The title theme of the film has been used in subsequent films and television shows during slow-motion segments. Vangelis, a Greek-born electronic composer who moved to Paris in the late 1960s, had been living in London since 1974. Director Hugh Hudson had collaborated with him on documentaries and commercials, and was also particularly impressed with his 1979 albums Opera Sauvage and China. David Puttnam also greatly admired Vangelis's body of work, having originally selected his compositions for his previous film Midnight Express. Hudson made the choice for Vangelis and for a modern score: "I knew we needed a piece which was anachronistic to the period to give it a feel of modernity. It was a risky idea but we went with it rather than have a period symphonic score." The soundtrack had personal significance for Vangelis: after composing the theme he told Puttnam, "My father is a runner, and this is an anthem to him." Hudson originally wanted Vangelis's 1977 tune "L'Enfant", from his Opera Sauvage album, to be the title theme of the film, and the beach running sequence was actually filmed with "L'Enfant" playing on loudspeakers for the runners to pace to.
Vangelis finally convinced Hudson he could create a new and better piece for the film's main theme – and when he played the "Chariots of Fire" theme for Hudson, it was agreed the new tune was unquestionably better. The "L'Enfant" melody still made it into the film: when the athletes reach Paris and enter the stadium, a brass band marches through the field and first plays a modified, acoustic performance of the piece. Vangelis's electronic "L'Enfant" track was eventually used prominently in the 1982 film The Year of Living Dangerously. Some pieces of Vangelis's music in the film did not end up on the film's soundtrack album. One of them is the background music to the race Eric Liddell runs in the Scottish highlands. This piece is a version of "Hymne", the original version of which appears on Vangelis's 1979 album, Opéra sauvage. Various versions are also included on Vangelis's compilation albums Themes, Portraits, and Odyssey: The Definitive Collection, though none of these include the version used in the film. Five lively Gilbert and Sullivan tunes also appear in the soundtrack, and serve as jaunty period music which counterpoints Vangelis's modern electronic score. These are: "He is an Englishman" from H.M.S. Pinafore, "Three Little Maids From School Are We" from The Mikado, "With Catlike Tread" from The Pirates of Penzance, "The Soldiers of Our Queen" from Patience, and "There Lived a King" from The Gondoliers. The film also incorporates a major traditional work: "Jerusalem", sung by a British choir at the 1978 funeral of Harold Abrahams. The words, written by William Blake in 1804–08, were set to music by Hubert Parry in 1916 as a celebration of England. The hymn, which has been described as "England's unofficial national anthem", concludes the film and inspired its title. A handful of other traditional anthems and hymns and period-appropriate instrumental ballroom-dance music round out the film's soundtrack. Release The film was distributed by 20th Century-Fox and selected for the 1981 Royal Film Performance, with its premiere on 30 March 1981 at the Odeon Haymarket before opening to the public the following day. It opened in Edinburgh on 4 April and in Oxford and Cambridge on 5 April, with other openings in Manchester and Liverpool, before expanding further in May into 20 additional London cinemas and 11 others nationally. It was shown in competition at the 1981 Cannes Film Festival on 20 May. The film was distributed by The Ladd Company through Warner Bros. in North America and released on 25 September 1981 in Los Angeles, California, and at the New York Film Festival, on 26 September 1981 in New York, and on 9 April 1982 in the United States. Reception Since its release, Chariots of Fire has received generally positive reviews from critics. The film holds an 83% "Certified Fresh" rating on the review aggregator website Rotten Tomatoes, based on 111 reviews, with a weighted average of 7.7/10. The site's consensus reads: "Decidedly slower and less limber than the Olympic runners at the center of its story, Chariots of Fire nevertheless manages to make effectively stirring use of its spiritual and patriotic themes." On Metacritic, the film has a score of 78 out of 100 based on 19 critics' reviews, indicating "generally favorable reviews".
For its 2012 re-release, Kate Muir of The Times gave the film five stars, writing: "In a time when drug tests and synthetic fibres have replaced gumption and moral fibre, the tale of two runners competing against each other in the 1924 Olympics has a simple, undiminished power. From the opening scene of pale young men racing barefoot along the beach, full of hope and elation, backed by Vangelis's now famous anthem, the film is utterly compelling." In its first four weeks at the Odeon Haymarket it grossed £106,484. The film was the highest-grossing British film for the year, with theatrical rentals of £1,859,480. Its gross of almost $59 million in the United States and Canada made it the highest-grossing film import into the US (i.e. a film without any US input) at the time, surpassing Meatballs' $43 million. Accolades The film was nominated for seven Academy Awards, winning four (including Best Picture). When accepting his Oscar for Best Original Screenplay, Colin Welland famously announced "The British are coming". It was the first film released by Warner Bros. to win Best Picture since My Fair Lady in 1964. American Film Institute recognition 1998: AFI's 100 Years...100 Movies - Nominated 2005: AFI's 100 Years of Film Scores - Nominated 2006: AFI's 100 Years...100 Cheers - No. 100 2007: AFI's 100 Years...100 Movies (10th Anniversary Edition) - Nominated 2008: AFI's 10 Top 10 - Nominated Sports Movie Other honours BFI Top 100 British films (1999) – rank 19 Hot 100 No. 1 Hits of 1982 (USA) (8 May) – Vangelis, Chariots of Fire theme Historical accuracy Chariots of Fire is a film about achieving victory through self sacrifice and moral courage. While the producers' intent was to make a cinematic work that was historically authentic, the film was not intended to be historically accurate. Numerous liberties were taken with the actual historical chronology, the inclusion and exclusion of notable people, and the creation of fictional scenes for dramatic purpose, plot pacing and exposition. Characters The film depicts Abrahams as attending Gonville and Caius College, Cambridge, with three other Olympic athletes: Henry Stallard, Aubrey Montague, and Lord Andrew Lindsay. Abrahams and Stallard were, in fact, students there and competed in the 1924 Olympics. Montague also competed in the Olympics as depicted, but he attended Oxford, not Cambridge. Aubrey Montague sent daily letters to his mother about his time at Oxford and the Olympics; these letters were the basis of Montague's narration in the film. The character of Lindsay was based partially on Lord Burghley, a significant figure in the history of British athletics. Although Burghley did attend Cambridge, he was not a contemporary of Harold Abrahams, as Abrahams was an undergraduate from 1919 to 1923 and Burghley was at Cambridge from 1923 to 1927. One scene in the film depicts the Burghley-based "Lindsay" as practising hurdles on his estate with full champagne glasses placed on each hurdle – this was something the wealthy Burghley did, although he used matchboxes instead of champagne glasses. The fictional character of Lindsay was created when Douglas Lowe, who was Britain's third athletics gold medallist in the 1924 Olympics, was not willing to be involved with the film. Another scene in the film recreates the Great Court Run, in which the runners attempt to run around the perimeter of the Great Court at Trinity College, Cambridge in the time it takes the clock to strike 12 at midday.
The film shows Abrahams performing the feat for the first time in history. In fact, Abrahams never attempted this race, and at the time of filming the only person on record known to have succeeded was Lord Burghley, in 1927. In Chariots of Fire, Lindsay, who is based on Lord Burghley, runs the Great Court Run with Abrahams in order to spur him on, and crosses the finish line just a moment too late. Since the film's release, the Great Court Run has also been successfully run by Trinity undergraduate Sam Dobin, in October 2007. In the film, Eric Liddell is tripped up by a Frenchman in the 400-metre event of a Scotland–France international athletic meeting. He recovers, makes up a 20-metre deficit, and wins. This was based on fact; the actual race was the 440 yards at a Triangular Contest meet between Scotland, England, and Ireland at Stoke-on-Trent in England in July 1923. His achievement was remarkable as he had already won the 100- and 220-yard events that day. Also unmentioned with regard to Liddell is that it was he who introduced Abrahams to Sam Mussabini. This is alluded to: in the film, Abrahams first encounters Mussabini while he is watching Liddell race. Abrahams and Liddell did race against each other twice, but not as depicted in the film, which shows Liddell winning the final of the 100 yards against a shattered Abrahams at the 1923 AAA Championship at Stamford Bridge. In fact, they raced only in a heat of the 220 yards, which Liddell won, five yards ahead of Abrahams, who did not progress to the final. In the 100 yards, Abrahams was eliminated in the heats and did not race against Liddell, who won the finals of both races the next day. They also raced against each other in the 200 m final at the 1924 Olympics, and this was also not shown in the film. Abrahams' fiancée is misidentified as Sybil Gordon, a soprano with the D'Oyly Carte Opera Company. In fact, in 1936, Abrahams married Sybil Evers, who also performed with D'Oyly Carte, but they did not meet until 1934. Also, in the film, Sybil is depicted as singing the role of Yum-Yum in The Mikado, but neither Gordon nor Evers ever sang that role with D'Oyly Carte, although Evers was known for her charm in singing Peep-Bo, one of the two other "little maids from school". Harold Abrahams' love of and heavy involvement with Gilbert and Sullivan, as depicted in the film, is factual. Liddell's sister was several years younger than she was portrayed in the film. Her disapproval of Liddell's track career was creative licence; she actually fully supported his sporting work. Jenny Liddell Somerville cooperated fully with the making of the film and has a brief cameo in the Paris Church of Scotland during Liddell's sermon. At the memorial service for Harold Abrahams, which opens the film, Lord Lindsay mentions that he and Aubrey Montague are the only members of the 1924 Olympic team still alive. However, Montague died in 1948, 30 years before Abrahams' death. Paris Olympics 1924 In the film, the 100m bronze medallist is a character called "Tom Watson"; the real medallist was Arthur Porritt of New Zealand, who refused permission for his name to be used in the film, allegedly out of modesty, and his wish was accepted by the film's producers, even though his permission was not necessary. However, the brief back-story given for Watson, who is called up to the New Zealand team from the University of Oxford, substantially matches Porritt's history. 
With the exception of Porritt, all the runners in the 100m final are identified correctly when they line up for inspection by the Prince of Wales. Jackson Scholz is depicted as handing Liddell an inspirational Bible-quotation message before the 400 metres final: "It says in the Old Book, 'He that honors me, I will honor.' Good luck." In reality, the note was from members of the British team, and was handed to Liddell before the race by his attending masseur at the team's Paris hotel. For dramatic purposes, screenwriter Welland asked Scholz if he could be depicted handing the note, and Scholz readily agreed, saying "Yes, great, as long as it makes me look good." The events surrounding Liddell's refusal to race on a Sunday are fictional. In the film, he does not learn that the 100-metre heat is to be held on the Christian Sabbath until he is boarding the boat to Paris. In fact, the schedule was made public several months in advance; Liddell did however face immense pressure to run on that Sunday and to compete in the 100 metres, getting called before a grilling by the British Olympic Committee, the Prince of Wales, and other grandees, and his refusal to run made headlines around the world. The decision to change races was, even so, made well before embarking to Paris, and Liddell spent the intervening months training for the 400 metres, an event in which he had previously excelled. It is true, nonetheless, that Liddell's success in the Olympic 400m was largely unexpected. The film depicts Lindsay, having already won a medal in the 400-metre hurdles, giving up his place in the 400-metre race for Liddell. In fact Burghley, on whom Lindsay is loosely based, was eliminated in the heats of the 110 hurdles (he would go on to win a gold medal in the 400 hurdles at the 1928 Olympics), and was not entered for the 400 metres. The film reverses the order of Abrahams' 100m and 200m races at the Olympics. In reality, after winning the 100 metres race, Abrahams ran the 200 metres but finished last, Jackson Scholz taking the gold medal. In the film, before his triumph in the 100m, Abrahams is shown losing the 200m and being scolded by Mussabini. And during the following scene in which Abrahams speaks with his friend Montague while receiving a massage from Mussabini, there is a French newspaper clipping showing Scholz and Charley Paddock with a headline which states that the 200 metres was a triumph for the United States. In the same conversation, Abrahams laments getting "beaten out of sight" in the 200. The film thus has Abrahams overcoming the disappointment of losing the 200 by going on to win the 100, a reversal of the real order. Eric Liddell actually also ran in the 200m race, and finished third, behind Paddock and Scholz. This was the only time in reality that Liddell and Abrahams competed in the same finals race. While their meeting in the 1923 AAA Championship in the film was fictitious, Liddell's record win in that race did spur Abrahams to train even harder. Abrahams also won a silver medal as an opening runner for the 4 x 100 metres relay team, not shown in the film, and Aubrey Montague placed sixth in the steeplechase, as depicted. London Olympics' 2012 revival Chariots of Fire became a recurring theme in promotions for the 2012 Summer Olympics in London. The film's theme was featured at the opening of the 2012 London New Year's fireworks celebrating the Olympics. 
The runners who first tested the new Olympic Park were spurred on by the Chariots of Fire theme, and the music was also used to fanfare the carriers of the Olympic flame on parts of its route through the UK. The beach-running sequence was also recreated at St. Andrews and filmed as part of the Olympic torch relay. The film's theme was also performed by the London Symphony Orchestra, conducted by Simon Rattle, during the Opening Ceremony of the games; the performance was accompanied by a comedy skit by Rowan Atkinson (as Mr. Bean) which included the opening beach-running footage from the film. The film's theme was again played during each medal ceremony of the 2012 Olympics. As an official part of the London 2012 Festival celebrations, a new digitally re-mastered version of the film screened in 150 cinemas throughout the UK. The re-release began 13 July 2012, two weeks before the opening ceremony of the London Olympics. A Blu-ray of the film was released on 10 July 2012 in North America, and was released 16 July 2012 in the UK. The release includes nearly an hour of special features, a CD sampler, and a 32-page "digibook". Stage adaptation A stage adaptation of Chariots of Fire was mounted in honour of the 2012 Olympics. The play, Chariots of Fire, which was adapted by playwright Mike Bartlett and included the Vangelis score, ran from 9 May to 16 June 2012 at London's Hampstead Theatre, and transferred to the Gielgud Theatre in the West End on 23 June, where it ran until 5 January 2013. It starred Jack Lowden as Eric Liddell and James McArdle as Harold Abrahams, and Edward Hall directed. Stage designer Miriam Buether transformed each theatre into an Olympic stadium, and composer Jason Carr wrote additional music. Vangelis also created several new pieces of music for the production. The stage version for the London Olympic year was the idea of the film's director, Hugh Hudson, who co-produced the play; he stated, "Issues of faith, of refusal to compromise, standing up for one's beliefs, achieving something for the sake of it, with passion, and not just for fame or financial gain, are even more vital today." Another play, Running for Glory, written by Philip Dart, based on the 1924 Olympics, and focusing on Abrahams and Liddell, toured parts of Britain from 25 February to 1 April 2012. It starred Nicholas Jacobs as Harold Abrahams, and Tom Micklem as Eric Liddell. See also List of films about the sport of athletics Chariots of Fire, a race, inspired by the film, held in Cambridge since 1991 Great Britain at the 1924 Summer Olympics Sabbath breaking References Notes External links Critics' Picks: Chariots of Fire retrospective video by A. O. 
Scott, The New York Times (2008) Four speeches from the movie in text and audio from AmericanRhetoric.com Chariots of Fire review by Roger Ebert Chariots of Fire review in Variety Chariots of Fire at the Arts & Faith Top 100 Spiritually Significant Films Chariots of Fire Filming locations Chariots of Fire screenplay, second draft, February 1980 Great Court Run Chariots of Fire play – Hampstead Theatre 1981 films 1980s English-language films 1980s French-language films 20th Century Fox films 1980s biographical drama films Best Foreign Language Film Golden Globe winners Best Picture Academy Award winners British biographical drama films British sports drama films University of Cambridge in fiction Films about Christianity Films about competitions Films about religion Films directed by Hugh Hudson Films set in 1919 Films set in 1920 Films set in 1923 Films set in 1924 Films set in 1978 Films set in Cambridge Films set in Kent Films set in London Films set in Paris Films set in England Films set in France Films set in Scotland Films set in the University of Cambridge Films whose writer won the Best Original Screenplay Academy Award Films that won the Best Original Score Academy Award Goldcrest Films films Biographical films about Jewish people Films about the 1924 Summer Olympics Films about Olympic track and field Running films Sport at the University of Cambridge Sports films based on actual events Warner Bros. films Films scored by Vangelis Films that won the Best Costume Design Academy Award Films set on beaches Religion and sports Films shot in Edinburgh Best Film BAFTA Award winners Films produced by David Puttnam The Ladd Company films Biographical films about sportspeople Cultural depictions of track and field athletes 1981 directorial debut films 1981 drama films Films shot in York Films shot in North Yorkshire Films shot in Yorkshire Films shot in Liverpool Films shot in Merseyside Films shot in Kent Films about antisemitism 1980s British films Toronto International Film Festival People's Choice Award winners Cultural depictions of Edward VIII and Wallis Simpson
5734
https://en.wikipedia.org/wiki/Consequentialism
Consequentialism
In ethical philosophy, consequentialism is a class of normative, teleological ethical theories that holds that the consequences of one's conduct are the ultimate basis for judgement about the rightness or wrongness of that conduct. Thus, from a consequentialist standpoint, a morally right act (or omission from acting) is one that will produce a good outcome. Consequentialism, along with eudaimonism, falls under the broader category of teleological ethics, a group of views which claim that the moral value of any act consists in its tendency to produce things of intrinsic value. Consequentialists hold in general that an act is right if and only if the act (or in some views, the rule under which it falls) will produce, will probably produce, or is intended to produce, a greater balance of good over evil than any available alternative. Different consequentialist theories differ in how they define moral goods, with chief candidates including pleasure, the absence of pain, the satisfaction of one's preferences, and broader notions of the "general good". Consequentialism is usually contrasted with deontological ethics (or deontology): deontology, in which rules and moral duty are central, derives the rightness or wrongness of one's conduct from the character of the behaviour itself, rather than the outcomes of the conduct. It is also contrasted with both virtue ethics, which focuses on the character of the agent rather than on the nature or consequences of the act (or omission) itself, and pragmatic ethics, which treats morality like science: advancing collectively as a society over the course of many lifetimes, such that any moral criterion is subject to revision. Some argue that consequentialist theories (such as utilitarianism) and deontological theories (such as Kantian ethics) are not necessarily mutually exclusive. For example, T. M. Scanlon advances the idea that human rights, which are commonly considered a "deontological" concept, can only be justified with reference to the consequences of having those rights. Similarly, Robert Nozick argued for a theory that is mostly consequentialist, but incorporates inviolable "side-constraints" which restrict the sort of actions agents are permitted to do. Derek Parfit argued that in practice, when understood properly, rule consequentialism, Kantian deontology, and contractualism would all end up prescribing the same behavior. Forms of consequentialism Utilitarianism Jeremy Bentham held that people are driven by their interests and their fears, but their interests take precedence over their fears; people pursue their interests in accordance with how they view the consequences those interests might involve. Happiness, in this account, is defined as the maximization of pleasure and the minimization of pain. It can be argued that the existence of phenomenal consciousness and "qualia" is required for the experience of pleasure or pain to have ethical significance. Historically, hedonistic utilitarianism is the paradigmatic example of a consequentialist moral theory. This form of utilitarianism holds that what matters is the aggregate happiness; the happiness of everyone, and not the happiness of any particular person. John Stuart Mill, in his exposition of hedonistic utilitarianism, proposed a hierarchy of pleasures, meaning that the pursuit of certain kinds of pleasure is more highly valued than the pursuit of other pleasures.
However, some contemporary utilitarians, such as Peter Singer, are concerned with maximizing the satisfaction of preferences, hence preference utilitarianism. Other contemporary forms of utilitarianism mirror the forms of consequentialism outlined below. Rule consequentialism In general, consequentialist theories focus on actions. However, this need not be the case. Rule consequentialism is a theory that is sometimes seen as an attempt to reconcile consequentialism with deontology, or rules-based ethics—and in some cases, this is stated as a criticism of rule consequentialism. Like deontology, rule consequentialism holds that moral behavior involves following certain rules. However, rule consequentialism chooses rules based on the consequences that the selection of those rules has. Rule consequentialism exists in the forms of rule utilitarianism and rule egoism. Various theorists are split as to whether the rules are the only determinant of moral behavior or not. For example, Robert Nozick held that a certain set of minimal rules, which he calls "side-constraints," are necessary to ensure appropriate actions. There are also differences as to how absolute these moral rules are. Thus, while Nozick's side-constraints are absolute restrictions on behavior, Amartya Sen proposes a theory that recognizes the importance of certain rules, but these rules are not absolute. That is, they may be violated if strict adherence to the rule would lead to much more undesirable consequences. One of the most common objections to rule-consequentialism is that it is incoherent, because it is based on the consequentialist principle that what we should be concerned with is maximizing the good, but then it tells us not to act to maximize the good, but to follow rules (even in cases where we know that breaking the rule could produce better results). In Ideal Code, Real World, Brad Hooker avoids this objection by not basing his form of rule-consequentialism on the ideal of maximizing the good. He writes: [T]he best argument for rule-consequentialism is not that it derives from an overarching commitment to maximise the good. The best argument for rule-consequentialism is that it does a better job than its rivals of matching and tying together our moral convictions, as well as offering us help with our moral disagreements and uncertainties. Derek Parfit described Hooker's book as the "best statement and defence, so far, of one of the most important moral theories." State consequentialism State consequentialism, also known as Mohist consequentialism, is an ethical theory that evaluates the moral worth of an action based on how much it contributes to the welfare of a state. According to the Stanford Encyclopedia of Philosophy, Mohist consequentialism, dating back to the 5th century BCE, is the "world's earliest form of consequentialism, a remarkably sophisticated version based on a plurality of intrinsic goods taken as constitutive of human welfare." Unlike utilitarianism, which views utility as the sole moral good, "the basic goods in Mohist consequentialist thinking are...order, material wealth, and increase in population." During the time of Mozi, war and famine were common, and population growth was seen as a moral necessity for a harmonious society. The "material wealth" of Mohist consequentialism refers to basic needs, like shelter and clothing; and "order" refers to Mozi's stance against warfare and violence, which he viewed as pointless and a threat to social stability. 
In The Cambridge History of Ancient China, Stanford sinologist David Shepherd Nivison writes that the moral goods of Mohism "are interrelated: more basic wealth, then more reproduction; more people, then more production and wealth...if people have plenty, they would be good, filial, kind, and so on unproblematically." The Mohists believed that morality is based on "promoting the benefit of all under heaven and eliminating harm to all under heaven." In contrast to Jeremy Bentham's views, state consequentialism is not utilitarian because it is not hedonistic or individualistic. The importance of outcomes that are good for the community outweighs the importance of individual pleasure and pain. The term state consequentialism has also been applied to the political philosophy of the Confucian philosopher Xunzi. On the other hand, "legalist" Han Fei "is motivated almost totally from the ruler's point of view." Ethical egoism Ethical egoism can be understood as a consequentialist theory according to which the consequences for the individual agent are taken to matter more than any other result. Thus, egoism will prescribe actions that may be beneficial, detrimental, or neutral to the welfare of others. Some, like Henry Sidgwick, argue that a certain degree of egoism promotes the general welfare of society for two reasons: because individuals know how to please themselves best, and because if everyone were an austere altruist then general welfare would inevitably decrease. Ethical altruism Ethical altruism can be seen as a consequentialist theory which prescribes that an individual take actions that have the best consequences for everyone except for himself. This was advocated by Auguste Comte, who coined the term altruism, and whose ethics can be summed up in the phrase "Live for others." Two-level consequentialism The two-level approach involves engaging in critical reasoning and considering all the possible ramifications of one's actions before making an ethical decision, but reverting to generally reliable moral rules when one is not in a position to stand back and examine the dilemma as a whole. In practice, this equates to adhering to rule consequentialism when one can only reason on an intuitive level, and to act consequentialism when in a position to stand back and reason on a more critical level. This position can be described as a reconciliation between act consequentialism—in which the morality of an action is determined by that action's effects—and rule consequentialism—in which moral behavior is derived from following rules that lead to positive outcomes. The two-level approach to consequentialism is most often associated with R. M. Hare and Peter Singer. Motive consequentialism Another consequentialist version is motive consequentialism, which looks at whether the state of affairs that results from the motive to choose an action is better or at least as good as each alternative state of affairs that would have resulted from alternative actions. This version gives relevance to the motive of an act and links it to its consequences. An act can therefore not be wrong if the decision to act was based on a right motive. A possible inference is that one cannot be blamed for mistaken judgments if the motivation was to do good. Negative consequentialism Most consequentialist theories focus on promoting some sort of good consequences. However, negative utilitarianism lays out a consequentialist theory that focuses solely on minimizing bad consequences.
One major difference between these two approaches is the agent's responsibility. Positive consequentialism demands that we bring about good states of affairs, whereas negative consequentialism requires that we avoid bad ones. Stronger versions of negative consequentialism will require active intervention to prevent bad and ameliorate existing harm. In weaker versions, simple forbearance from acts tending to harm others is sufficient. An example of this is the slippery-slope argument, which encourages others to avoid a specified act on the grounds that it may ultimately lead to undesirable consequences. Often "negative" consequentialist theories assert that reducing suffering is more important than increasing pleasure. Karl Popper, for example, claimed that "from the moral point of view, pain cannot be outweighed by pleasure." (While Popper is not a consequentialist per se, this is taken as a classic statement of negative utilitarianism.) When considering a theory of justice, negative consequentialists may use a statewide or global-reaching principle: the reduction of suffering (for the disadvantaged) is more valuable than increased pleasure (for the affluent or luxurious). Acts and omissions Since pure consequentialism holds that an action is to be judged solely by its result, most consequentialist theories hold that a deliberate action is no different from a deliberate decision not to act. This contrasts with the "acts and omissions doctrine", which is upheld by some medical ethicists and some religions: it asserts there is a significant moral distinction between acts and deliberate non-actions which lead to the same outcome. This contrast is brought out in issues such as voluntary euthanasia. Actualism and possibilism The normative status of an action depends on its consequences according to consequentialism. The consequences of the actions of an agent may include other actions by this agent. Actualism and possibilism disagree on how later possible actions impact the normative status of the current action by the same agent. Actualists assert that it is only relevant what the agent would actually do later for assessing the value of an alternative. Possibilists, on the other hand, hold that we should also take into account what the agent could do, even if she would not do it. For example, assume that Gifre has the choice between two alternatives, eating a cookie or not eating anything. Having eaten the first cookie, Gifre could stop eating cookies, which is the best alternative. But after having tasted one cookie, Gifre would freely decide to continue eating cookies until the whole bag is finished, which would result in a terrible stomach ache and would be the worst alternative. Not eating any cookies at all, on the other hand, would be the second-best alternative. Now the question is: should Gifre eat the first cookie or not? Actualists are only concerned with the actual consequences. According to them, Gifre should not eat any cookies at all since it is better than the alternative leading to a stomach ache. Possibilists, however, contend that the best possible course of action involves eating the first cookie and this is therefore what Gifre should do. One counterintuitive consequence of actualism is that agents can avoid moral obligations simply by having an imperfect moral character. For example, a lazy person might justify rejecting a request to help a friend by arguing that, due to her lazy character, she would not have done the work anyway, even if she had accepted the request. 
By rejecting the offer right away, she managed at least not to waste anyone's time. Actualists might even consider her behavior praiseworthy, since she did what, according to actualism, she ought to have done. This seems to be a very easy way to "get off the hook" that is avoided by possibilism. But possibilism has to face the objection that in some cases it sanctions and even recommends what actually leads to the worst outcome. Douglas W. Portmore has suggested that these and other problems of actualism and possibilism can be avoided by constraining what counts as a genuine alternative for the agent. On his view, it is a requirement that the agent has rational control over the event in question. For example, eating only one cookie and stopping afterward is an option for Gifre only if she has the rational capacity to repress her temptation to continue eating. If the temptation is irrepressible, then this course of action is not considered to be an option and is therefore not relevant when assessing what the best alternative is. Portmore suggests that, given this adjustment, we should prefer a view very closely associated with possibilism called maximalism.

Issues

Action guidance

One important characteristic of many normative moral theories such as consequentialism is the ability to produce practical moral judgements. At the very least, any moral theory needs to define the standpoint from which the goodness of the consequences is to be determined. What is primarily at stake here is the responsibility of the agent.

The ideal observer

One common tactic among consequentialists, particularly those committed to an altruistic (selfless) account of consequentialism, is to employ an ideal, neutral observer from whose standpoint moral judgements can be made. John Rawls, a critic of utilitarianism, argues that utilitarianism, in common with other forms of consequentialism, relies on the perspective of such an ideal observer. The particular characteristics of this ideal observer can vary from an omniscient observer, who would grasp all the consequences of any action, to an ideally informed observer, who knows as much as could reasonably be expected, but not necessarily all the circumstances or all the possible consequences. Consequentialist theories that adopt this paradigm hold that right action is the action that will bring about the best consequences from this ideal observer's perspective.

The real observer

In practice, it is very difficult, and at times arguably impossible, to adopt the point of view of an ideal observer. Individual moral agents do not know everything about their particular situations, and thus do not know all the possible consequences of their potential actions. For this reason, some theorists have argued that consequentialist theories can only require agents to choose the best action in line with what they know about the situation. However, if this approach is naïvely adopted, then moral agents who, for example, recklessly fail to reflect on their situation, and act in a way that brings about terrible results, could be said to be acting in a morally justifiable way. Acting in a situation without first informing oneself of its circumstances can lead to even the most well-intended actions yielding miserable consequences. As a result, it could be argued that there is a moral imperative for agents to inform themselves as much as possible about a situation before judging the appropriate course of action.
This imperative, of course, is derived from consequential thinking: a better-informed agent is able to bring about better consequences. Consequences for whom Moral action always has consequences for certain people or things. Varieties of consequentialism can be differentiated by the beneficiary of the good consequences. That is, one might ask "Consequences for whom?" Agent-focused or agent-neutral A fundamental distinction can be drawn between theories which require that agents act for ends perhaps disconnected from their own interests and drives, and theories which permit that agents act for ends in which they have some personal interest or motivation. These are called "agent-neutral" and "agent-focused" theories respectively. Agent-neutral consequentialism ignores the specific value a state of affairs has for any particular agent. Thus, in an agent-neutral theory, an actor's personal goals do not count any more than anyone else's goals in evaluating what action the actor should take. Agent-focused consequentialism, on the other hand, focuses on the particular needs of the moral agent. Thus, in an agent-focused account, such as one that Peter Railton outlines, the agent might be concerned with the general welfare, but the agent is more concerned with the immediate welfare of herself and her friends and family. These two approaches could be reconciled by acknowledging the tension between an agent's interests as an individual and as a member of various groups, and seeking to somehow optimize among all of these interests. For example, it may be meaningful to speak of an action as being good for someone as an individual, but bad for them as a citizen of their town. Human-centered? Many consequentialist theories may seem primarily concerned with human beings and their relationships with other human beings. However, some philosophers argue that we should not limit our ethical consideration to the interests of human beings alone. Jeremy Bentham, who is regarded as the founder of utilitarianism, argues that animals can experience pleasure and pain, thus demanding that 'non-human animals' should be a serious object of moral concern. More recently, Peter Singer has argued that it is unreasonable that we do not give equal consideration to the interests of animals as to those of human beings when we choose the way we are to treat them. Such equal consideration does not necessarily imply identical treatment of humans and non-humans, any more than it necessarily implies identical treatment of all humans. Value of consequences One way to divide various consequentialisms is by the types of consequences that are taken to matter most, that is, which consequences count as good states of affairs. According to utilitarianism, a good action is one that results in an increase in pleasure, and the best action is one that results in the most pleasure for the greatest number. Closely related is eudaimonic consequentialism, according to which a full, flourishing life, which may or may not be the same as enjoying a great deal of pleasure, is the ultimate aim. Similarly, one might adopt an aesthetic consequentialism, in which the ultimate aim is to produce beauty. However, one might fix on non-psychological goods as the relevant effect. Thus, one might pursue an increase in material equality or political liberty instead of something like the more ephemeral "pleasure". Other theories adopt a package of several goods, all to be promoted equally. 
As the consequentialist approach contains an inherent assumption that the outcomes of a moral decision can be quantified in terms of "goodness" or "badness," or at least put in order of increasing preference, it is an especially suited moral theory for a probabilistic and decision theoretical approach. Virtue ethics Consequentialism can also be contrasted with aretaic moral theories such as virtue ethics. Whereas consequentialist theories posit that consequences of action should be the primary focus of our thinking about ethics, virtue ethics insists that it is the character rather than the consequences of actions that should be the focal point. Some virtue ethicists hold that consequentialist theories totally disregard the development and importance of moral character. For example, Philippa Foot argues that consequences in themselves have no ethical content, unless it has been provided by a virtue such as benevolence. However, consequentialism and virtue ethics need not be entirely antagonistic. Iain King has developed an approach that reconciles the two schools. Other consequentialists consider effects on the character of people involved in an action when assessing consequence. Similarly, a consequentialist theory may aim at the maximization of a particular virtue or set of virtues. Finally, following Foot's lead, one might adopt a sort of consequentialism that argues that virtuous activity ultimately produces the best consequences. Ultimate end The ultimate end is a concept in the moral philosophy of Max Weber, in which individuals act in a faithful, rather than rational, manner. Teleological ethics Teleological ethics (Greek: telos, 'end, purpose' + logos, 'science') is a broader class of views in moral philosophy which consequentialism falls under. In general, proponents of teleological ethics argue that the moral value of any act consists in its tendency to produce things of intrinsic value, meaning that an act is right if and only if it, or the rule under which it falls, produces, will probably produce, or is intended to produce, a greater balance of good over evil than any alternative act. This concept is exemplified by the famous aphorism, "the end justifies the means," variously attributed to Machiavelli or Ovid i.e. if a goal is morally important enough, any method of achieving it is acceptable. Teleological theories differ among themselves on the nature of the particular end that actions ought to promote. The two major families of views in teleological ethics are virtue ethics and consequentialism. Teleological ethical theories are often discussed in opposition to deontological ethical theories, which hold that acts themselves are inherently good or bad, rather than good or bad because of extrinsic factors (such as the act's consequences or the moral character of the person who acts). Etymology The term consequentialism was coined by G. E. M. Anscombe in her essay "Modern Moral Philosophy" in 1958, to describe what she saw as the central error of certain moral theories, such as those propounded by Mill and Sidgwick. The phrase and concept of "the end justifies the means" are at least as old as the first century BC. Ovid wrote in his Heroides that Exitus acta probat ("The result justifies the deed"). Criticisms G. E. M. 
Anscombe objects to the consequentialism of Sidgwick on the grounds that the moral worth of an action is premised on the predictive capabilities of the individual, relieving them of responsibility for the "badness" of an act should they "make out a case for not having foreseen" negative consequences. The future amplification of the effects of small decisions is an important factor that makes it more difficult to predict the ethical value of consequences, even though most would agree that only predictable consequences carry moral responsibility.

Bernard Williams has argued that consequentialism is alienating because it requires moral agents to put too much distance between themselves and their own projects and commitments. Williams argues that consequentialism requires moral agents to take a strictly impersonal view of all actions, since it is only the consequences, and not who produces them, that are said to matter. Williams argues that this demands too much of moral agents—since (he claims) consequentialism demands that they be willing to sacrifice any and all personal projects and commitments in any given circumstance in order to pursue the most beneficent course of action possible. He argues further that consequentialism fails to make sense of intuitions that it can matter whether or not someone is personally the author of a particular consequence. For example, it can matter that a person participates in a crime, even if the crime would have been committed anyway, or would even have been worse, without the agent's participation.

Some consequentialists—most notably Peter Railton—have attempted to develop a form of consequentialism that acknowledges and avoids the objections raised by Williams. Railton argues that Williams's criticisms can be avoided by adopting a form of consequentialism in which moral decisions are to be determined by the sort of life that they express. On his account, the agent should choose the sort of life that will, on the whole, produce the best overall effects.

Notable consequentialists

R. M. Adams (born 1937)
Jonathan Baron (born 1944)
Jeremy Bentham (1748–1832)
Richard B. Brandt (1910–1997)
John Dewey (1857–1952)
Julia Driver (born 1961)
Milton Friedman (1912–2006)
David Friedman (born 1945)
William Godwin (1756–1836)
R. M. Hare (1919–2002)
John Harsanyi (1920–2000)
Brad Hooker (born 1957)
Francis Hutcheson (1694–1746)
Shelly Kagan (born 1963)
Niccolò Machiavelli (1469–1527)
James Mill (1773–1836)
John Stuart Mill (1806–1873)
G. E. Moore (1873–1958)
Mozi (470–391 BCE)
Philip Pettit (born 1945)
Peter Railton (born 1950)
Henry Sidgwick (1838–1900)
Peter Singer (born 1946)
J. J. C. Smart (1920–2012)

See also

Charvaka
Demandingness objection
Dharma-yuddha
Effective altruism
Instrumental and intrinsic value
Lesser of two evils principle
Mental reservation
Mohism
Omission bias
Principle of double effect
Situational ethics
Utilitarianism
Welfarism

External links

University of Texas, Ethics Unwrapped – Consequentialism
5735
https://en.wikipedia.org/wiki/Conscription
Conscription
Conscription (also called the draft in the United States) is the state-mandated enlistment of people in a national service, mainly a military service. Conscription dates back to antiquity and it continues in some countries to the present day under various names. The modern system of near-universal national conscription for young men dates to the French Revolution in the 1790s, when it became the basis of a very large and powerful military. Most European nations later copied the system in peacetime, so that men at a certain age would serve 1–8 years on active duty and then transfer to the reserve force.

Conscription is controversial for a range of reasons, including conscientious objection to military engagements on religious or philosophical grounds; political objection, for example to service for a disliked government or unpopular war; sexism, in that historically men have been subject to the draft in most cases; and ideological objection, for example, to a perceived violation of individual rights. Those conscripted may evade service, sometimes by leaving the country and seeking asylum in another country. Some selection systems accommodate these attitudes by providing alternative service outside combat-operations roles or even outside the military, such as siviilipalvelus (alternative civil service) in Finland and Zivildienst (compulsory community service) in Austria, Germany and Switzerland. Several countries conscript male soldiers not only for the armed forces, but also for paramilitary agencies dedicated to police-like, domestic-only service, such as internal troops and border guards, or to non-combat rescue duties such as civil defence.

As of 2023, many states no longer conscript their citizens, relying instead upon professional militaries with volunteers. The ability to rely on such an arrangement, however, presupposes some degree of predictability with regard to both war-fighting requirements and the scope of hostilities. Many states that have abolished conscription still, therefore, reserve the power to resume conscription during wartime or times of crisis. States involved in wars or interstate rivalries are most likely to implement conscription, and democracies are less likely than autocracies to implement it. With a few exceptions, such as Singapore and Egypt, former British colonies are less likely to have conscription, as they are influenced by British anti-conscription norms that can be traced back to the English Civil War; the United Kingdom abolished conscription in 1960.

History

In pre-modern times

Ilkum

Around the reign of Hammurabi (1791–1750 BC), the Babylonian Empire used a system of conscription called Ilkum. Under that system those eligible were required to serve in the royal army in time of war. During times of peace they were instead required to provide labour for other activities of the state. In return for this service, people subject to it gained the right to hold land. It is possible that this right was not to hold land per se but to hold specific land supplied by the state.

Various forms of avoiding military service are recorded. While it was outlawed by the Code of Hammurabi, the hiring of substitutes appears to have been practiced both before and after the creation of the code. Later records show that Ilkum commitments could be regularly traded. In other places, people simply left their towns to avoid their Ilkum service. Another option was to sell Ilkum lands and the commitments along with them.
With the exception of a few exempted classes, this was forbidden by the Code of Hammurabi. Medieval levies Under the feudal laws on the European continent, landowners in the medieval period enforced a system whereby all peasants, freemen commoners and noblemen aged 15 to 60 living in the countryside or in urban centers, were summoned for military duty when required by either the king or the local lord, bringing along the weapons and armor according to their wealth. These levies fought as footmen, sergeants, and men at arms under local superiors appointed by the king or the local lord such as the arrière-ban in France. Arrière-ban denoted a general levy, where all able-bodied males age 15 to 60 living in the Kingdom of France were summoned to go to war by the King (or the constable and the marshals). Men were summoned by the bailiff (or the sénéchal in the south). Bailiffs were military and political administrators installed by the King to steward and govern a specific area of a province following the king's commands and orders. The men summoned in this way were then summoned by the lieutenant who was the King's representative and military governor over an entire province comprising many bailiwicks, seneschalties and castellanies. All men from the richest noble to the poorest commoner were summoned under the arrière-ban and they were supposed to present themselves to the King or his officials. In medieval Scandinavia the leiðangr (Old Norse), leidang (Norwegian), leding, (Danish), ledung (Swedish), lichting (Dutch), expeditio (Latin) or sometimes leþing (Old English), was a levy of free farmers conscripted into coastal fleets for seasonal excursions and in defence of the realm. The bulk of the Anglo-Saxon English army, called the fyrd, was composed of part-time English soldiers drawn from the freemen of each county. In the 690s laws of Ine of Wessex, three levels of fines are imposed on different social classes for neglecting military service. Some modern writers claim military service in Europe was restricted to the landowning minor nobility. These thegns were the land-holding aristocracy of the time and were required to serve with their own armour and weapons for a certain number of days each year. The historian David Sturdy has cautioned about regarding the fyrd as a precursor to a modern national army composed of all ranks of society, describing it as a "ridiculous fantasy": The persistent old belief that peasants and small farmers gathered to form a national army or fyrd is a strange delusion dreamt up by antiquarians in the late eighteenth or early nineteenth centuries to justify universal military conscription. In feudal Japan the shogun decree of 1393 exempted money lenders from religious or military levies, in return for a yearly tax. The Ōnin War weakened the shogun and levies were imposed again on money lenders. This overlordism was arbitrary and unpredictable for commoners. While the money lenders were not poor, several overlords tapped them for income. Levies became necessary for the survival of the overlord, allowing the lord to impose taxes at will. These levies included tansen tax on agricultural land for ceremonial expenses. Yakubu takumai tax was raised on all land to rebuild the Ise Grand Shrine, and munabechisen tax was imposed on all houses. At the time, land in Kyoto was acquired by commoners through usury and in 1422 the shogun threatened to repossess the land of those commoners who failed to pay their levies. 
Military slavery

The system of military slaves was widely used in the Middle East, beginning with the creation of the corps of Turkic slave-soldiers (ghulams or mamluks) by the Abbasid caliph al-Mu'tasim in the 820s and 830s. The Turkish troops soon came to dominate the government, establishing a pattern throughout the Islamic world of a ruling military class, often separated by ethnicity, culture and even religion from the mass of the population, a paradigm that found its apogee in the Mamluks of Egypt and the Janissary corps of the Ottoman Empire, institutions that survived until the early 19th century.

In the middle of the 14th century, Ottoman Sultan Murad I developed personal troops to be loyal to him, with a slave army called the Kapıkulu. The new force was built by taking Christian children from newly conquered lands, especially from the far areas of his empire, in a system known as the devşirme (translated "gathering" or "converting"). The captive children were forced to convert to Islam. The Sultans had the young boys trained over several years. Those who showed special promise in fighting skills were trained in advanced warrior skills, put into the sultan's personal service, and turned into the Janissaries, the elite branch of the Kapıkulu. A number of distinguished military commanders of the Ottomans, and most of the imperial administrators and upper-level officials of the Empire, such as Pargalı İbrahim Pasha and Sokollu Mehmet Paşa, were recruited in this way. By 1609, the Sultan's Kapıkulu forces had increased to about 100,000.

In later years, Sultans turned to the Barbary pirates to supply their Janissary corps. The pirates' attacks on ships off the coast of Africa or in the Mediterranean, and their subsequent capture of able-bodied men for ransom or sale, provided some captives for the Sultan's system. Starting in the 17th century, Christian families living under Ottoman rule began to submit their sons into the Kapıkulu system willingly, as they saw this as a potentially invaluable career opportunity for their children. Eventually the Sultan turned to foreign volunteers from the warrior clans of Circassians in southern Russia to fill his Janissary armies. As the system as a whole began to break down, the loyalty of the Janissaries became increasingly suspect, and Mahmud II forcibly disbanded the Janissary corps in 1826.

Similar to the Janissaries in origin and means of development were the Mamluks of Egypt in the Middle Ages. The Mamluks were usually captive non-Muslim Iranian and Turkish children who had been kidnapped or bought as slaves from the Barbary coasts. The Egyptians assimilated and trained the boys and young men to become Islamic soldiers who served the Muslim caliphs and the Ayyubid sultans during the Middle Ages. The first mamluks served the Abbasid caliphs in 9th-century Baghdad. Over time they became a powerful military caste and, on more than one occasion, seized power, for example ruling Egypt from 1250 to 1517. From 1250 Egypt had been ruled by the Bahri dynasty of Kipchak origin. Slaves from the Caucasus served in the army and formed an elite corps of troops; they eventually revolted in Egypt to form the Burgi dynasty. The Mamluks' excellent fighting abilities, massed Islamic armies, and overwhelming numbers succeeded in overcoming the Christian Crusader fortresses in the Holy Land. The Mamluks also mounted the most successful defence against the Mongol Ilkhanate of Persia and Iraq, preventing it from entering Egypt.
On the western coast of Africa, Berber Muslims captured non-Muslims to put them to work as laborers. They generally converted the younger people to Islam, and many became quite assimilated. In Morocco, the Berbers looked south rather than north. The Moroccan Sultan Moulay Ismail, called "the Bloodthirsty" (1672–1727), employed a corps of 150,000 black slaves, called his Black Guard, whom he used to coerce the country into submission.

In modern times

Modern conscription, the massed military enlistment of national citizens (levée en masse), was devised during the French Revolution to enable the Republic to defend itself from the attacks of European monarchies. Deputy Jean-Baptiste Jourdan gave his name to the 5 September 1798 Act, whose first article stated: "Any Frenchman is a soldier and owes himself to the defense of the nation." It enabled the creation of the Grande Armée, what Napoleon Bonaparte called "the nation in arms", which overwhelmed European professional armies that often numbered only into the low tens of thousands. More than 2.6 million men were inducted into the French military in this way between the years 1800 and 1813.

The defeat of the Prussian Army in particular shocked the Prussian establishment, which had believed it was invincible after the victories of Frederick the Great. The Prussians were used to relying on superior organization and tactical factors such as order of battle to focus superior troops against inferior ones. Given approximately equivalent forces, as was generally the case with professional armies, these factors showed considerable importance. However, they became considerably less important when the Prussian armies faced Napoleon's forces, which outnumbered their own in some cases by more than ten to one. Scharnhorst advocated adopting the levée en masse, the military conscription used by France. The Krümpersystem was the beginning of short-term compulsory service in Prussia, as opposed to the long-term conscription previously used.

In the Russian Empire, the military service time "owed" by serfs was 25 years at the beginning of the 19th century. In 1834 it was decreased to 20 years. The recruits were to be not younger than 17 and not older than 35. In 1874 Russia introduced universal conscription in the modern pattern, an innovation made possible only by the abolition of serfdom in 1861. The new military law decreed that all male Russian subjects, when they reached the age of 20, were eligible to serve in the military for six years.

In the decades prior to World War I, universal conscription along broadly Prussian lines became the norm for European armies, and for those modeled on them. By 1914 the only substantial armies still completely dependent on voluntary enlistment were those of Britain and the United States. Some colonial powers such as France reserved their conscript armies for home service while maintaining professional units for overseas duties.

World Wars

The range of eligible ages for conscription was expanded to meet national demand during the World Wars. In the United States, the Selective Service System drafted men for World War I initially in an age range from 21 to 30, but expanded its eligibility in 1918 to an age range of 18 to 45. In the case of a widespread mobilization of forces where service includes homefront defense, the ages of conscripts may range much higher, with the oldest conscripts serving in roles requiring lesser mobility. Expanded-age conscription was common during the Second World War: in Britain, it was commonly known as "call-up" and extended to age 51.
Nazi Germany termed it the Volkssturm ("People's Storm") and included children as young as 16 and men as old as 60. During the Second World War, both Britain and the Soviet Union conscripted women. The United States was on the verge of drafting women into the Nurse Corps because it anticipated it would need the extra personnel for its planned invasion of Japan. However, the Japanese surrendered and the idea was abandoned. During the Great Patriotic War, the Red Army conscripted nearly 30 million men.

Arguments against conscription

Sexism

Men's rights activists, feminists, and opponents of discrimination against men have criticized military conscription, or compulsory military service, as sexist. The National Coalition for Men, a men's rights group, sued the US Selective Service System in 2019, leading to male-only draft registration being declared unconstitutional by a US federal judge. The federal district judge's opinion was unanimously overturned on appeal to the U.S. Court of Appeals for the 5th Circuit. In September 2021, the House of Representatives passed the annual Defense Authorization Act, which included an amendment stating that "all Americans between the ages of 18 and 25 must register for selective service." This amendment omitted the word "male," which would have extended a potential draft to women; however, the amendment was removed before the National Defense Authorization Act was passed.

Feminists have argued, first, that military conscription is sexist because wars serve the interests of what they view as the patriarchy; second, that the military is a sexist institution and that conscripts are therefore indoctrinated into sexism; and third, that conscription of men normalizes violence by men as socially acceptable. Feminists have been organizers and participants in resistance to conscription in several countries.

Conscription has also been criticized on the ground that, historically, only men have been subjected to it. Men who opt out or are deemed unfit for military service must often perform alternative service, such as Zivildienst in Austria, Germany and Switzerland, or pay extra taxes, whereas women do not have these obligations. In the US, men who do not register with the Selective Service cannot apply for citizenship, receive federal financial aid, grants or loans, be employed by the federal government, be admitted to public colleges or universities, or, in some states, obtain a driver's license.

Involuntary servitude

Many American libertarians oppose conscription and call for the abolition of the Selective Service System, arguing that impressment of individuals into the armed forces amounts to involuntary servitude. For example, Ron Paul, a former U.S. Libertarian Party presidential nominee, has said that conscription "is wrongly associated with patriotism, when it really represents slavery and involuntary servitude". The philosopher Ayn Rand opposed conscription, opining that "of all the statist violations of individual rights in a mixed economy, the military draft is the worst. It is an abrogation of rights. It negates man's fundamental right—the right to life—and establishes the fundamental principle of statism: that a man's life belongs to the state, and the state may claim it by compelling him to sacrifice it in battle."

In 1917, a number of radicals and anarchists, including Emma Goldman, challenged the new draft law in federal court, arguing that it was a violation of the Thirteenth Amendment's prohibition against slavery and involuntary servitude.
However, the Supreme Court unanimously upheld the constitutionality of the draft act in the case of Arver v. United States on 7 January 1918, on the ground that the Constitution gives Congress the power to declare war and to raise and support armies. The Court also relied on the principle of the reciprocal rights and duties of citizens. "It may not be doubted that the very conception of a just government in its duty to the citizen includes the reciprocal obligation of the citizen to render military service in case of need and the right to compel." Economic It can be argued that in a cost-to-benefit ratio, conscription during peacetime is not worthwhile. Months or years of service performed by the most fit and capable subtract from the productivity of the economy; add to this the cost of training them, and in some countries paying them. Compared to these extensive costs, some would argue there is very little benefit; if there ever was a war then conscription and basic training could be completed quickly, and in any case there is little threat of a war in most countries with conscription. In the United States, every male resident is required by law to register with the Selective Service System within 30 days following his 18th birthday and be available for a draft; this is often accomplished automatically by a motor vehicle department during licensing or by voter registration. According to Milton Friedman the cost of conscription can be related to the parable of the broken window in anti-draft arguments. The cost of the work, military service, does not disappear even if no salary is paid. The work effort of the conscripts is effectively wasted, as an unwilling workforce is extremely inefficient. The impact is especially severe in wartime, when civilian professionals are forced to fight as amateur soldiers. Not only is the work effort of the conscripts wasted and productivity lost, but professionally skilled conscripts are also difficult to replace in the civilian workforce. Every soldier conscripted in the army is taken away from his civilian work, and away from contributing to the economy which funds the military. This may be less a problem in an agrarian or pre-industrialized state where the level of education is generally low, and where a worker is easily replaced by another. However, this is potentially more costly in a post-industrial society where educational levels are high and where the workforce is sophisticated and a replacement for a conscripted specialist is difficult to find. Even more dire economic consequences result if the professional conscripted as an amateur soldier is killed or maimed for life; his work effort and productivity are lost. Arguments for conscription Political and moral motives Jean Jacques Rousseau argued vehemently against professional armies since he believed that it was the right and privilege of every citizen to participate to the defense of the whole society and that it was a mark of moral decline to leave the business to professionals. He based his belief upon the development of the Roman Republic, which came to an end at the same time as the Roman Army changed from a conscript to a professional force. Similarly, Aristotle linked the division of armed service among the populace intimately with the political order of the state. Niccolò Machiavelli argued strongly for conscription and saw the professional armies, made up of mercenary units, as the cause of the failure of societal unity in Italy. 
Other proponents, such as William James, consider both mandatory military and national service as ways of instilling maturity in young adults. Some proponents, such as Jonathan Alter and Mickey Kaus, support a draft in order to reinforce social equality, create social consciousness, break down class divisions and allow young adults to immerse themselves in public enterprise. Charles Rangel called for the reinstatement of the draft during the Iraq War not because he seriously expected it to be adopted but to stress how the socioeconomic restratification meant that very few children of upper-class Americans served in the all-volunteer American armed forces. Economic and resource efficiency It is estimated by the British military that in a professional military, a company deployed for active duty in peacekeeping corresponds to three inactive companies at home. Salaries for each are paid from the military budget. In contrast, volunteers from a trained reserve are in their civilian jobs when they are not deployed. It was more financially beneficial for less-educated young Portuguese men born in 1967 to participate in conscription than to participate in the highly competitive job market with men of the same age who continued to higher education. Drafting of women Throughout history, women have only been conscripted to join armed forces in a few countries, in contrast to the universal practice of conscription from among the male population. The traditional view has been that military service is a test of manhood and a rite of passage from boyhood into manhood. In recent years, this position has been challenged on the basis that it violates gender equality, and some countries, especially in Europe, have extended conscription obligations to women. Nations that in present-day actively draft women into military service are Bolivia, Chad, Eritrea, Israel, Mozambique, Norway, North Korea and Sweden. Norway introduced female conscription in 2015, making it the first NATO member to have a legally compulsory national service for both men and women. In practice only motivated volunteers are selected to join the army in Norway. Sweden introduced female conscription in 2010, but it was not activated until 2017. This made Sweden the second nation in Europe to draft women, and the second in the world to draft women on the same formal terms as men. Israel has universal female conscription, although it is possible to avoid service by claiming a religious exemption and over a third of Israeli women do so. Finland introduced voluntary female conscription in 1995, giving women between the ages of 18 and 29 an option to complete their military service alongside men. Sudanese law allows for conscription of women, but this is not implemented in practice. In the United Kingdom during World War II, beginning in 1941, women were brought into the scope of conscription but, as all women with dependent children were exempt and many women were informally left in occupations such as nursing or teaching, the number conscripted was relatively few. In the Soviet Union, there was never conscription of women for the armed forces, but the severe disruption of normal life and the high proportion of civilians affected by World War II after the German invasion attracted many volunteers for "The Great Patriotic War". Medical doctors of both sexes could and would be conscripted (as officers). 
Also, the Soviet university education system required Department of Chemistry students of both sexes to complete an ROTC course in NBC defense, and such female reservist officers could be conscripted in times of war. The United States came close to drafting women into the Nurse Corps in preparation for a planned invasion of Japan.

In 1981 in the United States, several men filed a lawsuit in the case Rostker v. Goldberg, alleging that the Selective Service Act of 1948 violated the Due Process Clause of the Fifth Amendment by requiring that only men register with the Selective Service System (SSS). The Supreme Court eventually upheld the Act, stating that "the argument for registering women was based on considerations of equity, but Congress was entitled, in the exercise of its constitutional powers, to focus on the question of military need, rather than 'equity.'" In 2019, Judge Gray H. Miller of the United States District Court for the Southern District of Texas ruled that the Selective Service's men-only registration requirement was unconstitutional: while women were banned from serving in combat at the time Rostker was decided, the situation had since changed with the removal of those restrictions in 2013 and 2015. Miller's opinion was reversed by the Fifth Circuit, which stated that only the Supreme Court could overturn its precedent in Rostker. The Supreme Court considered but declined to review the Fifth Circuit's ruling in June 2021. In an opinion authored by Justice Sonia Sotomayor and joined by Justices Stephen Breyer and Brett Kavanaugh, the three justices agreed that the male-only draft was likely unconstitutional given the changes in the military's stance on the roles women may serve in, but because Congress had been reviewing and evaluating legislation to eliminate its male-only draft requirement via the National Commission on Military, National, and Public Service (NCMNPS) since 2016, it would have been inappropriate for the Court to act at that time.

On 1 October 1999, in Taiwan, the Judicial Yuan of the Republic of China in its Interpretation 490 held that the physical differences between males and females, and the resulting differentiation of their respective social functions and roles, would not make drafting only males a violation of the Constitution of the Republic of China. Though women are not conscripted in Taiwan, transsexual persons are exempt.

In 2018, the Netherlands started including women in its draft registration system, although conscription is not currently enforced for either sex.

Conscientious objection

A conscientious objector is an individual whose personal beliefs are incompatible with military service, or, more often, with any role in the armed forces. In some countries, conscientious objectors have special legal status, which alters their conscription duties. For example, Sweden allows conscientious objectors to choose service in the weapons-free civil defense.

The reasons for refusing to serve in the military are varied. Some people are conscientious objectors for religious reasons. In particular, the members of the historic peace churches are pacifist by doctrine, and Jehovah's Witnesses, while not strictly pacifists, refuse to participate in the armed forces on the ground that they believe Christians should be neutral in international conflicts.

By country

Austria

Every male citizen of the Republic of Austria from the age of 17 up to 50 (specialists up to 65) is liable to military service.
However, apart from mobilization, call-up for the six-month basic military training in the Bundesheer can take place only up to the age of 35. For men who refuse to undergo this training, a nine-month community service is mandatory.

Belgium

Belgium abolished conscription in 1994. The last conscripts left active service in February 1995. To this day (2019), a small minority of Belgian citizens supports the idea of reintroducing military conscription, for both men and women.

Bulgaria

Bulgaria had mandatory military service for males above 18 until conscription was ended in 2008. Due to a shortfall of some 5,500 soldiers in the army, parts of the former ruling coalition have expressed their support for the return of mandatory military service, most notably Krasimir Karakachanov. Opposition to this idea from the main coalition partner, GERB, led to a compromise in 2018 under which, instead of mandatory military service, Bulgaria could have introduced voluntary military service by 2019, with young citizens volunteering for a period of 6 to 9 months and receiving a basic wage. However, this has not gone forward.

Cambodia

Since the signing of the Peace Accord in 1993, there had been no official conscription in Cambodia, and the National Assembly repeatedly rejected proposals to reintroduce it owing to popular resentment. However, in November 2006, conscription was reintroduced. Although it is mandatory for all males between the ages of 18 and 30 (with some sources stating up to age 35), fewer than 20% of those in the age group are recruited, amid a downsizing of the armed forces.

China

Universal conscription in China dates back to the State of Qin, which eventually became the Qin Empire of 221 BC. Following unification, historical records show that a total of 300,000 conscript soldiers and 500,000 conscript labourers constructed the Great Wall of China. In the following dynasties, universal conscription was abolished and reintroduced on numerous occasions. Universal military conscription remains theoretically mandatory in China and is reinforced by law. However, due to the large population of China and the large pool of candidates available for recruitment, the People's Liberation Army has always had sufficient volunteers, so conscription has not been required in practice.

Cuba

Cyprus

Military service in Cyprus has a deep-rooted history entangled with the Cyprus problem. Military service in the Cypriot National Guard is mandatory for all male citizens of the Republic of Cyprus, as well as any male non-citizens born of a parent of Greek Cypriot descent, lasting from 1 January of the year in which they turn 18 to 31 December of the year in which they turn 50. All male residents of Cyprus who are of military age (16 and over) are required to obtain an exit visa from the Ministry of Defense. Currently, military conscription in Cyprus lasts up to 14 months.

Denmark

Conscription has been known in Denmark since the Viking Age, when one man out of every ten had to serve the king. Frederick IV of Denmark changed the law in 1710 so that every fourth man was liable. The men were chosen by the landowner, and being chosen was seen as a penalty. Since 12 February 1849, every physically fit man must do military service. According to § 81 in the Constitution of Denmark, which was promulgated in 1849:

Every male person able to carry arms shall be liable with his person to contribute to the defence of his country under such rules as are laid down by Statute.
— Constitution of Denmark

The legislation about compulsory military service is articulated in the Danish Law of Conscription. National service takes 4–12 months. It is possible to postpone the duty when one is still in full-time education. Every male turning 18 will be drafted to the "Day of Defence", where they will be introduced to the Danish military and their health will be tested. Physically unfit persons are not required to do military service. It is compulsory only for men; women are free to choose to join the Danish army. Almost all of the men have been volunteers in recent years, with 96.9% of the total number of recruits having been volunteers in the 2015 draft. After the lottery, one can become a conscientious objector. Total objection (refusal of alternative civilian service as well) results in up to four months of imprisonment according to the law. However, in 2014 a Danish man who signed up for the service and objected later received only 14 days of house arrest. In many countries the act of desertion (objection after signing up) is punished more harshly than objecting to the compulsory service itself.

Finland

Conscription in Finland is part of a general compulsion for national military service for all adult males, defined in § 127 of the Constitution of Finland. Conscription can take the form of military or of civilian service. According to Finnish Defence Forces data from 2011, slightly under 80% of Finnish males had entered and completed military service by the age of 30. The number of female volunteers entering armed service annually had stabilised at approximately 300. The service period is 165, 255 or 347 days for rank-and-file conscripts and 347 days for conscripts trained as NCOs or reserve officers. The length of civilian service is always twelve months. Those electing to serve unarmed, in duties where unarmed service is possible, serve either nine or twelve months, depending on their training.

Any Finnish male citizen who refuses to perform both military and civilian service faces a penalty of 173 days in prison, minus any served days. Such sentences are usually served fully in prison, with no parole. Jehovah's Witnesses are no longer exempted from service as of 27 February 2019. The inhabitants of demilitarized Åland are exempt from military service. By the Conscription Act of 1951, they are, however, required to serve for a time at a local institution, such as the coast guard. However, until such service has been arranged, they are freed from the service obligation. The non-military service of Åland has not been arranged since the introduction of the act, and there are no plans to institute it. The inhabitants of Åland can also volunteer for military service on the mainland. Since 1995, women have been permitted to serve on a voluntary basis and to pursue careers in the military after their initial voluntary military service.

The military service takes place in the Finnish Defence Forces or in the Finnish Border Guard. All services of the Finnish Defence Forces train conscripts. However, the Border Guard trains conscripts only in land-based units, not in coast guard detachments or in the Border Guard Air Wing. Civilian service may take place in the Civilian Service Center in Lapinjärvi or in an accepted non-profit organization of an educational, social or medical nature.

Germany

Between 1956 and 2011 conscription was mandatory for all male citizens in the German federal armed forces (Bundeswehr), as well as for the Federal Border Guard (Bundesgrenzschutz) in the 1970s (see Border Guard Service).
With the end of the Cold War the German government drastically reduced the size of its armed forces. The low demand for conscripts led to the suspension of compulsory conscription in 2011. Since then, only volunteer professionals serve in the Bundeswehr.

Greece

Since 1914 Greece has been enforcing mandatory military service, currently lasting 12 months (but historically up to 36 months) for all adult men. Citizens discharged from active service are normally placed in the reserve and are subject to periodic recalls of 1–10 days at irregular intervals. Universal conscription was introduced in Greece during the military reforms of 1909, although various forms of selective conscription had been in place earlier. In more recent years, conscription was associated with the state of general mobilisation declared on 20 July 1974 due to the crisis in Cyprus (the mobilisation was formally ended on 18 December 2002). The duration of military service has historically ranged between 9 and 36 months, depending on various factors either particular to the conscript or related to the political situation in the Eastern Mediterranean. Although women are employed by the Greek army as officers and soldiers, they are not obliged to enlist. Soldiers receive no health insurance, but they are provided with medical support during their army service, including hospitalization costs.

Greece enforces conscription for all male citizens aged between 19 and 45. In August 2009, the duration of mandatory service was reduced from 12 months to 9 months for the army, but remained at 12 months for the navy and the air force. The number of conscripts allocated to the latter two has been greatly reduced, with the aim of full professionalization. Nevertheless, mandatory military service in the army was once again raised to 12 months in March 2021, unless served in units in Evros or on the North Aegean islands, where the duration was kept at 9 months. Although full professionalization is under consideration, severe financial difficulties and mismanagement, including delays and reduced rates in the hiring of professional soldiers, as well as widespread abuse of the deferment process, have resulted in the postponement of such a plan.

Iran

In Iran, all men who reach the age of 18 must do about two years of compulsory military service in the Iranian army or the Islamic Revolutionary Guard Corps. Before the 1979 revolution, women could serve in the military. However, after the establishment of the Islamic Republic, some ayatollahs considered the Pahlavi government's use of women in military service to be disrespectful to women, and women's military service was banned in Iran. Iranian women and girls were therefore completely exempted from military service, a policy that many Iranian men and boys have opposed. In Iran, men who refuse to perform military service are deprived of citizenship rights, such as employment, health insurance, continuing their education at university, going abroad, and opening a bank account. Iranian men have so far opposed mandatory military service and demanded that military service in Iran become a paid profession, as in other countries, but the Islamic Republic is opposed to this demand. Some Iranian military commanders consider the elimination of conscription, or improving the conditions of soldiers, to be a security issue and a matter within Ali Khamenei's powers as commander-in-chief of the armed forces, so they treat it with caution. In Iran, wealthy people are usually exempted from conscription.
Some other men can be exempted from conscription because their fathers served in the Iran–Iraq War.

Israel

Military service is mandatory for all fit men and women in Israel who are 18 years old. Men must serve 32 months while women serve 24 months, with the vast majority of conscripts being Jewish. Some Israeli citizens are exempt from mandatory service:

Non-Jewish Arab citizens
Permanent residents (non-citizens), such as the Druze of the Golan Heights
Male ultra-Orthodox Jews, who can apply for a deferment to study in a yeshiva; the deferment tends to become an exemption, although some do opt to serve in the military
Female religious Jews, as long as they declare that they are unable to serve on religious grounds; most of them opt for the alternative of volunteering in the national service, Sherut Leumi

All of those exempted above are eligible to volunteer for the Israel Defense Forces (IDF), provided they declare their wish to do so. Male Druze and male Circassian Israeli citizens are liable for conscription, in accordance with an agreement set by their community leaders (the community leaders, however, signed a clause under which all female Druze and female Circassians are exempt from service). A few male Bedouin Israeli citizens choose to enlist in the Israeli military in every draft (despite their Muslim-Arab background, which exempts them from conscription).

South Africa

There was mandatory military conscription for all white men in South Africa from 1968 until the end of apartheid in 1994. Under South African defense law, young white men had to undergo two years' continuous military training after they left school, after which they had to serve 720 days of occasional military duty over the next 12 years. The End Conscription Campaign began in 1983 in opposition to the requirement. In the same year the National Party government announced plans to extend conscription to white immigrants in the country.

South Korea

Lithuania

Lithuania abolished conscription in 2008. In May 2015, the Lithuanian parliament voted to reintroduce conscription, and the conscripts started their training in August 2015. From 2015 to 2017 there were enough volunteers to avoid drafting civilians.

Luxembourg

Luxembourg practiced military conscription from 1948 until 1967.

Moldova

Moldova, which currently has male conscription, has announced plans to abolish the practice. Moldova's Defense Ministry announced that a plan stipulating the gradual elimination of military conscription would be implemented starting from the autumn of 2018.

Netherlands

Conscription, which was called "Service Duty" (dienstplicht) in the Netherlands, was first employed in 1810 by the French occupying forces. Napoleon's brother Louis Bonaparte, who was King of Holland from 1806 to 1810, had tried to introduce conscription a few years earlier, unsuccessfully. Every man aged 20 years or older had to enlist. By means of drawing lots it was decided who had to undertake service in the French army. It was possible to arrange a substitute in return for payment. Later on, conscription was used for all men over the age of 18. Postponement was possible, due to study, for example. Conscientious objectors could perform an alternative civilian service instead of military service. For various reasons, this forced military service was criticized at the end of the twentieth century. Since the Cold War was over, so was the direct threat of war. Instead, the Dutch army was employed in more and more peacekeeping operations.
The complexity and danger of these missions made the use of conscripts controversial. Furthermore, the conscription system was thought to be unfair as only men were drafted. In the European part of the Netherlands, compulsory attendance has been officially suspended since 1 May 1997. Between 1991 and 1996, the Dutch armed forces phased out their conscript personnel and converted to an all-professional force. The last conscript troops were inducted in 1995 and demobilized in 1996. The suspension means that citizens are no longer forced to serve in the armed forces, as long as it is not required for the safety of the country. Since then, the Dutch army has become an all-professional force. However, to this day, every male and – from January 2020 onward – female citizen aged 17 gets a letter in which they are told that they have been registered but do not have to present themselves for service. Norway Conscription was constitutionally established on 12 April 1907 by Kongeriket Norges Grunnlov § 119. Norway currently employs a weak form of mandatory military service for men and women. In practice, recruits are not forced to serve; instead, only those who are motivated are selected. About 60,000 Norwegians are available for conscription every year, but only 8,000 to 10,000 are conscripted. Since 1985, women have been able to enlist for voluntary service as regular recruits. On 14 June 2013 the Norwegian Parliament voted to extend conscription to women, making Norway the first NATO member and first European country to make national service compulsory for both sexes. In earlier times, up until at least the early 2000s, all men aged 19–44 were subject to mandatory service, with good reasons required to avoid being drafted. There is a right of conscientious objection. In addition to the military service, the Norwegian government drafts a total of 8,000 men and women between 18 and 55 to non-military Civil defence duty (not to be confused with alternative civilian service). Former service in the military does not exclude anyone from later being drafted to the Civil defence, but an upper limit of 19 months of total service applies. Failure to respond to mobilisation orders for training exercises or actual incidents may result in fines. Serbia Serbia no longer practises mandatory military service. Prior to this, mandatory military service lasted 6 months for men. Conscientious objectors could, however, opt for 9 months of civil service instead. On 15 December 2010, the Parliament of Serbia voted to suspend mandatory military service. The decision fully came into force on 1 January 2011. Sweden Sweden had conscription for men between 1901 and 2010. During the last few decades it was selective. Since 1980, women have been allowed to sign up by choice and, if they pass the tests, do military training together with male conscripts. Since 1989 women have been allowed to serve in all military positions and units, including combat. In 2010, conscription was made gender-neutral, meaning both women and men would be conscripted on equal terms. The conscription system was simultaneously deactivated in peacetime. Seven years later, citing an increased military threat, the Swedish Government reactivated military conscription. Beginning in 2018, both men and women are conscripted. Taiwan Taiwan, officially the Republic of China (ROC), maintains an active conscription system. All qualified male citizens of military age are now obligated to receive 4 months of military training. 
In December 2022, President Tsai Ing-wen led the government to announce the reinstatement of the mandatory 1-year active duty military service from January 2024. United Kingdom The United Kingdom introduced conscription to full-time military service for the first time in January 1916 (the eighteenth month of World War I) and abolished it in 1920. Ireland, then part of the United Kingdom, was exempted from the original 1916 military service legislation, and although further legislation in 1918 gave power for an extension of conscription to Ireland, the power was never put into effect. Conscription was reintroduced in 1939, in the lead up to World War II, and continued in force until 1963. Northern Ireland was exempted from conscription legislation throughout the whole period. In all, eight million men were conscripted during both World Wars, as well as several hundred thousand younger single women. The introduction of conscription in May 1939, before the war began, was partly due to pressure from the French, who emphasized the need for a large British army to oppose the Germans. From early 1942 unmarried women age 19–30 were conscripted. Most were sent to the factories, but they could volunteer for the Auxiliary Territorial Service (ATS) and other women's services. Some women served in the Women's Land Army: initially volunteers but later conscription was introduced. However, women who were already working in a skilled job considered helpful to the war effort, such as a General Post Office telephonist, were told to continue working as before. None was assigned to combat roles unless she volunteered. By 1943 women were liable to some form of directed labour up to age 51. During the Second World War, 1.4 million British men volunteered for service and 3.2 million were conscripted. Conscripts comprised 50% of the Royal Air Force, 60% of the Royal Navy and 80% of the British Army. The abolition of conscription in Britain was announced on 4 April 1957, by new prime minister Harold Macmillan, with the last conscripts being recruited three years later. United States Conscription in the United States ended in 1973, but males aged between 18 and 25 are required to register with the Selective Service System to enable a reintroduction of conscription if necessary. President Gerald Ford had suspended mandatory draft registration in 1975, but President Jimmy Carter reinstated that requirement when the Soviet Union intervened in Afghanistan five years later. Consequently, Selective Service registration is still required of almost all young men. There have been no prosecutions for violations of the draft registration law since 1986. Males between the ages of 17 and 45, and female members of the US National Guard may be conscripted for federal militia service pursuant to 10 U.S. Code § 246 and the Militia Clauses of the United States Constitution. In February 2019, the United States District Court for the Southern District of Texas ruled that male-only conscription registration breached the Fourteenth Amendment's equal protection clause. In National Coalition for Men v. Selective Service System, a case brought by non-profit men's rights organisation the National Coalition for Men against the U.S. Selective Service System, judge Gray H. Miller issued a declaratory judgement that the male-only registration requirement is unconstitutional, though did not specify what action the government should take. That ruling was reversed by the Fifth Circuit. In June 2021, the U.S. 
Supreme Court declined to review the decision by the Court of Appeals. Other countries Conscription in Australia Conscription in Canada Conscription in Egypt Conscription in France Conscription in Gibraltar Conscription in Malaysia Conscription in Mexico Conscription in New Zealand Conscription in Russia Conscription in Singapore Conscription in South Korea Conscription in Switzerland Conscription in Turkey Conscription in Ukraine Conscription in the Ottoman Empire Conscription in the Russian Empire See also Civil conscription Civilian Public Service Corvée Economic conscription Quota System Male expendability Pospolite ruszenie, mass mobilization in Poland Bevin Boys Counter-recruitment Draft evasion Ephebic Oath Home front during World War I Home front during World War II List of countries by number of military and paramilitary personnel Military recruitment Timeline of women's participation in warfare War resister References Further reading Burk, James (April 1989). "Debating the Draft in America", Armed Forces and Society, vol. 15, pp. 431–48. Challener, Richard D. The French Theory of the Nation in Arms, 1866–1939 (1955). Chambers, John Whiteclay. To Raise an Army: The Draft Comes to Modern America (1987). Flynn, George Q. (1998). "Conscription and Equity in Western Democracies, 1940–75", Journal of Contemporary History 33(1): 5–20. Looks at citizens' responses to military conscription in several democracies since the French Revolution. Krueger, Christine, and Sonja Levsen, eds. War Volunteering in Modern Times: From the French Revolution to the Second World War (Palgrave Macmillan, 2011). Littlewood, David. "Conscription in Britain, New Zealand, Australia and Canada during the Second World War", History Compass 18(4) (2020).
5736
https://en.wikipedia.org/wiki/Catherine%20Coleman
Catherine Coleman
Catherine Grace "Cady" Coleman (born December 14, 1960) is an American chemist, engineer, former United States Air Force colonel, and retired NASA astronaut. She is a veteran of two Space Shuttle missions, and departed the International Space Station on May 23, 2011, as a crew member of Expedition 27 after logging 159 days in space. Education Coleman graduated from Wilbert Tucker Woodson High School, Fairfax, Virginia, in 1978. In 1978–1979, she was an exchange student at Røyken Upper Secondary School in Norway with the AFS Intercultural Programs. She received a B.S. degree in chemistry from the Massachusetts Institute of Technology (MIT) in 1983 and was commissioned as a graduate of the Air Force Reserve Officer Training Corps (Air Force ROTC). She then received a Ph.D. degree in polymer science and engineering from the University of Massachusetts Amherst in 1991, advised by Professor Thomas J. McCarthy. As an undergraduate, she was a member of the intercollegiate rowing crew and was a resident of Baker House. Military career Coleman continued to pursue her PhD at the University of Massachusetts Amherst as a second lieutenant. In 1988, she entered active duty at Wright-Patterson Air Force Base as a research chemist. During her work, she participated as a surface analysis consultant on the NASA Long Duration Exposure Facility experiment. In 1991, she received her doctorate in polymer science and engineering. She retired from the Air Force in November 2009 as a colonel. NASA career Coleman was selected by NASA in 1992 to join the NASA Astronaut Corps. In 1995, she was a member of the STS-73 crew on the scientific mission USML-2, with experiments including biotechnology, combustion science, and the physics of fluids. During the flight, she reported to Houston Mission Control that she had spotted an unidentified flying object (UFO). She also trained for the mission STS-83 as the backup for Donald A. Thomas; however, as he recovered on time, she did not fly that mission. STS-93, flown in 1999, was Coleman's second space flight. She was the mission specialist in charge of deploying the Chandra X-ray Observatory and its Inertial Upper Stage out of the shuttle's cargo bay. Coleman served as Chief of Robotics for the Astronaut Office, a role that included robotic arm operations and training for all Space Shuttle and International Space Station missions. In October 2004, Coleman served as an aquanaut during the NEEMO 7 mission aboard the Aquarius underwater laboratory, living and working underwater for eleven days. Coleman was assigned as a backup U.S. crew member for Expeditions 19, 20 and 21 and served as a backup crew member for Expeditions 24 and 25 as part of her training for Expedition 26. Coleman launched on December 15, 2010 (December 16, 2010 Baikonur time), aboard Soyuz TMA-20 to join the Expedition 26 mission aboard the International Space Station. She retired from NASA on December 1, 2016. Spaceflight experience STS-73 on Space Shuttle Columbia (October 20 to November 5, 1995) was the second United States Microgravity Laboratory (USML-2) mission. The mission focused on materials science, biotechnology, combustion science, and the physics of fluids, with numerous scientific experiments housed in the pressurized Spacelab module. In completing her first space flight, Coleman orbited the Earth 256 times, traveled over 6 million miles, and logged a total of 15 days, 21 hours, 52 minutes and 21 seconds in space. 
STS-93 on Columbia (July 22 to 27, 1999) was a five-day mission during which Coleman was the lead mission specialist for the deployment of the Chandra X-ray Observatory. Designed to conduct comprehensive studies of the universe, the telescope will enable scientists to study exotic phenomena such as exploding stars, quasars, and black holes. Mission duration was 118 hours and 50 minutes. Soyuz TMA-20 / Expedition 26/27 (December 15, 2010, to May 23, 2011) was an extended duration mission to the International Space Station. Personal Coleman is married to glass artist Josh Simpson who lives in Massachusetts. They have one son. She is part of the band Bandella, which also includes fellow NASA astronaut Stephen Robinson, Canadian astronaut Chris Hadfield, and Micki Pettit (wife of the astronaut Donald Pettit). Coleman is a flute player and has taken several flutes with her to the ISS, including a pennywhistle from Paddy Moloney of The Chieftains, an old Irish flute from Matt Molloy of The Chieftains, and a flute from Ian Anderson of Jethro Tull (band). On February 15, 2011, she played one of the instruments live from orbit on National Public Radio. On April 12, 2011, she played live via video link for the audience of Jethro Tull's show in Russia in honour of the 50th anniversary of Yuri Gagarin's flight, playing in orbit while Anderson played on the ground. On May 13 of that year, Coleman delivered a taped commencement address to the class of 2011 at the University of Massachusetts Amherst. As do many other astronauts, Coleman holds an amateur radio license (callsign: KC5ZTH). As of 2015, she is also known to be working as a guest speaker at the Baylor College of Medicine, for the children's program 'Saturday Morning Science'. In 2018, she gave a graduation address to Carter Lynch, the sole graduate of Cuttyhunk Elementary School, on Cuttyhunk Island, Massachusetts. In 2019 the Irish postal service An Post issued a set of commemorative stamps for the 50th anniversary of the Apollo Moon landings, Catherine Coleman is featured alongside fellow astronauts Neil Armstrong, Michael Collins, and Eileen Collins. References External links Cady Coleman Video produced by Makers: Women Who Make America 1960 births Living people Aquanauts Women astronauts United States Air Force astronauts NASA civilian astronauts Military personnel from Charleston, South Carolina People from Fairfax, Virginia People from Franklin County, Massachusetts Massachusetts Institute of Technology School of Science alumni University of Massachusetts Amherst College of Engineering alumni Female officers of the United States Air Force 21st-century American chemists American chemical engineers American women engineers Amateur radio people Amateur radio women Female explorers Scientists from Virginia United States Air Force colonels Space Shuttle program astronauts Wilbert Tucker Woodson High School alumni MIT Engineers athletes 21st-century American women Military personnel from Massachusetts
5738
https://en.wikipedia.org/wiki/Cervix
Cervix
The cervix (: cervices) or cervix uteri (Latin, "neck of the uterus") is the lower part of the uterus (womb) in the human female reproductive system. The cervix is usually 2 to 3 cm long (~1 inch) and roughly cylindrical in shape, which changes during pregnancy. The narrow, central cervical canal runs along its entire length, connecting the uterine cavity and the lumen of the vagina. The opening into the uterus is called the internal os, and the opening into the vagina is called the external os. The lower part of the cervix, known as the vaginal portion of the cervix (or ectocervix), bulges into the top of the vagina. The cervix has been documented anatomically since at least the time of Hippocrates, over 2,000 years ago. The cervical canal is a passage through which sperm must travel to fertilize an egg cell after sexual intercourse. Several methods of contraception, including cervical caps and cervical diaphragms, aim to block or prevent the passage of sperm through the cervical canal. Cervical mucus is used in several methods of fertility awareness, such as the Creighton model and Billings method, due to its changes in consistency throughout the menstrual period. During vaginal childbirth, the cervix must flatten and dilate to allow the fetus to progress along the birth canal. Midwives and doctors use the extent of the dilation of the cervix to assist decision-making during childbirth. The cervical canal is lined with a single layer of column-shaped cells, while the ectocervix is covered with multiple layers of cells topped with flat cells. The two types of epithelia meet at the squamocolumnar junction. Infection with the human papillomavirus (HPV) can cause changes in the epithelium, which can lead to cancer of the cervix. Cervical cytology tests can often detect cervical cancer and its precursors, and enable early successful treatment. Ways to avoid HPV include avoiding sex, using condoms, and HPV vaccination. HPV vaccines, developed in the early 21st century, reduce the risk of cervical cancer by preventing infections from the main cancer-causing strains of HPV. Structure The cervix is part of the female reproductive system. Around in length, it is the lower narrower part of the uterus continuous above with the broader upper part—or body—of the uterus. The lower end of the cervix bulges through the anterior wall of the vagina, and is referred to as the vaginal portion of cervix (or ectocervix) while the rest of the cervix above the vagina is called the supravaginal portion of cervix. A central canal, known as the cervical canal, runs along its length and connects the cavity of the body of the uterus with the lumen of the vagina. The openings are known as the internal os and external orifice of the uterus (or external os), respectively. The mucosa lining the cervical canal is known as the endocervix, and the mucosa covering the ectocervix is known as the exocervix. The cervix has an inner mucosal layer, a thick layer of smooth muscle, and posteriorly the supravaginal portion has a serosal covering consisting of connective tissue and overlying peritoneum. In front of the upper part of the cervix lies the bladder, separated from it by cellular connective tissue known as parametrium, which also extends over the sides of the cervix. To the rear, the supravaginal cervix is covered by peritoneum, which runs onto the back of the vaginal wall and then turns upwards and onto the rectum, forming the recto-uterine pouch. 
The cervix is more tightly connected to surrounding structures than the rest of the uterus. The cervical canal varies greatly in length and width between women or over the course of a woman's life, and it can measure 8 mm (0.3 inch) at its widest diameter in premenopausal adults. It is wider in the middle and narrower at each end. The anterior and posterior walls of the canal each have a vertical fold, from which ridges run diagonally upwards and laterally. These are known as palmate folds, due to their resemblance to a palm leaf. The anterior and posterior ridges are arranged in such a way that they interlock with each other and close the canal. They are often effaced after pregnancy. The ectocervix (also known as the vaginal portion of the cervix) has a convex, elliptical shape and projects into the cervix between the anterior and posterior vaginal fornices. On the rounded part of the ectocervix is a small, depressed external opening, connecting the cervix with the vagina. The size and shape of the ectocervix and the external opening (external os) can vary according to age, hormonal state, and whether childbirth has taken place. In women who have not had a vaginal delivery, the external opening is small and circular, and in women who have had a vaginal delivery, it is slit-like. On average, the ectocervix is long and wide. Blood is supplied to the cervix by the descending branch of the uterine artery and drains into the uterine vein. The pelvic splanchnic nerves, emerging as S2–S3, transmit the sensation of pain from the cervix to the brain. These nerves travel along the uterosacral ligaments, which pass from the uterus to the anterior sacrum. Three channels facilitate lymphatic drainage from the cervix. The anterior and lateral cervix drains to nodes along the uterine arteries, travelling along the cardinal ligaments at the base of the broad ligament to the external iliac lymph nodes and ultimately the paraaortic lymph nodes. The posterior and lateral cervix drains along the uterine arteries to the internal iliac lymph nodes and ultimately the paraaortic lymph nodes, and the posterior section of the cervix drains to the obturator and presacral lymph nodes. However, there are variations as lymphatic drainage from the cervix travels to different sets of pelvic nodes in some people. This has implications in scanning nodes for involvement in cervical cancer. After menstruation and directly under the influence of estrogen, the cervix undergoes a series of changes in position and texture. During most of the menstrual cycle, the cervix remains firm, and is positioned low and closed. However, as ovulation approaches, the cervix becomes softer and rises to open in response to the higher levels of estrogen present. These changes are also accompanied by changes in cervical mucus, described below. Development As a component of the female reproductive system, the cervix is derived from the two paramesonephric ducts (also called Müllerian ducts), which develop around the sixth week of embryogenesis. During development, the outer parts of the two ducts fuse, forming a single urogenital canal that will become the vagina, cervix and uterus. The cervix grows in size at a smaller rate than the body of the uterus, so the relative size of the cervix over time decreases, decreasing from being much larger than the body of the uterus in fetal life, twice as large during childhood, and decreasing to its adult size, smaller than the uterus, after puberty. 
Previously it was thought that during fetal development, the original squamous epithelium of the cervix is derived from the urogenital sinus and the original columnar epithelium is derived from the paramesonephric duct. The point at which these two original epithelia meet is called the original squamocolumnar junction. New studies show, however, that all the cervical as well as large part of the vaginal epithelium are derived from Müllerian duct tissue and that phenotypic differences might be due to other causes. Histology The endocervical mucosa is about thick and lined with a single layer of columnar mucous cells. It contains numerous tubular mucous glands, which empty viscous alkaline mucus into the lumen. In contrast, the ectocervix is covered with nonkeratinized stratified squamous epithelium, which resembles the squamous epithelium lining the vagina. The junction between these two types of epithelia is called the squamocolumnar junction. Underlying both types of epithelium is a tough layer of collagen. The mucosa of the endocervix is not shed during menstruation. The cervix has more fibrous tissue, including collagen and elastin, than the rest of the uterus. In prepubertal girls, the functional squamocolumnar junction is present just within the cervical canal. Upon entering puberty, due to hormonal influence, and during pregnancy, the columnar epithelium extends outward over the ectocervix as the cervix everts. Hence, this also causes the squamocolumnar junction to move outwards onto the vaginal portion of the cervix, where it is exposed to the acidic vaginal environment. The exposed columnar epithelium can undergo physiological metaplasia and change to tougher metaplastic squamous epithelium in days or weeks, which is very similar to the original squamous epithelium when mature. The new squamocolumnar junction is therefore internal to the original squamocolumnar junction, and the zone of unstable epithelium between the two junctions is called the transformation zone of the cervix. Histologically, the transformation zone is generally defined as surface squamous epithelium with surface columnar epithelium or stromal glands/crypts, or both. After menopause, the uterine structures involute and the functional squamocolumnar junction moves into the cervical canal. Nabothian cysts (or Nabothian follicles) form in the transformation zone where the lining of metaplastic epithelium has replaced mucous epithelium and caused a strangulation of the outlet of some of the mucous glands. A buildup of mucus in the glands forms Nabothian cysts, usually less than about in diameter, which are considered physiological rather than pathological. Both gland openings and Nabothian cysts are helpful to identify the transformation zone. Function Fertility The cervical canal is a pathway through which sperm enter the uterus after being induced by estradiol after sexual intercourse, and some forms of artificial insemination. Some sperm remains in cervical crypts, infoldings of the endocervix, which act as a reservoir, releasing sperm over several hours and maximising the chances of fertilisation. A theory states the cervical and uterine contractions during orgasm draw semen into the uterus. Although the "upsuck theory" has been generally accepted for some years, it has been disputed due to lack of evidence, small sample size, and methodological errors. 
Some methods of fertility awareness, such as the Creighton model and the Billings method involve estimating a woman's periods of fertility and infertility by observing physiological changes in her body. Among these changes are several involving the quality of her cervical mucus: the sensation it causes at the vulva, its elasticity (Spinnbarkeit), its transparency, and the presence of ferning. Cervical mucus Several hundred glands in the endocervix produce 20–60 mg of cervical mucus a day, increasing to 600 mg around the time of ovulation. It is viscous because it contains large proteins known as mucins. The viscosity and water content varies during the menstrual cycle; mucus is composed of around 93% water, reaching 98% at midcycle. These changes allow it to function either as a barrier or a transport medium to spermatozoa. It contains electrolytes such as calcium, sodium, and potassium; organic components such as glucose, amino acids, and soluble proteins; trace elements including zinc, copper, iron, manganese, and selenium; free fatty acids; enzymes such as amylase; and prostaglandins. Its consistency is determined by the influence of the hormones estrogen and progesterone. At midcycle around the time of ovulation—a period of high estrogen levels— the mucus is thin and serous to allow sperm to enter the uterus and is more alkaline and hence more hospitable to sperm. It is also higher in electrolytes, which results in the "ferning" pattern that can be observed in drying mucus under low magnification; as the mucus dries, the salts crystallize, resembling the leaves of a fern. The mucus has a stretchy character described as Spinnbarkeit most prominent around the time of ovulation. At other times in the cycle, the mucus is thick and more acidic due to the effects of progesterone. This "infertile" mucus acts as a barrier to keep sperm from entering the uterus. Women taking an oral contraceptive pill also have thick mucus from the effects of progesterone. Thick mucus also prevents pathogens from interfering with a nascent pregnancy. A cervical mucus plug, called the operculum, forms inside the cervical canal during pregnancy. This provides a protective seal for the uterus against the entry of pathogens and against leakage of uterine fluids. The mucus plug is also known to have antibacterial properties. This plug is released as the cervix dilates, either during the first stage of childbirth or shortly before. It is visible as a blood-tinged mucous discharge. Childbirth The cervix plays a major role in childbirth. As the fetus descends within the uterus in preparation for birth, the presenting part, usually the head, rests on and is supported by the cervix. As labour progresses, the cervix becomes softer and shorter, begins to dilate, and withdraws to face the anterior of the body. The support the cervix provides to the fetal head starts to give way when the uterus begins its contractions. During childbirth, the cervix must dilate to a diameter of more than to accommodate the head of the fetus as it descends from the uterus to the vagina. In becoming wider, the cervix also becomes shorter, a phenomenon known as effacement. Along with other factors, midwives and doctors use the extent of cervical dilation to assist decision making during childbirth. Generally, the active first stage of labour, when the uterine contractions become strong and regular, begins when the cervical dilation is more than . 
The second phase of labor begins when the cervix has dilated to , which is regarded as its fullest dilation, and is when active pushing and contractions push the baby along the birth canal leading to the birth of the baby. The number of past vaginal deliveries is a strong factor in influencing how rapidly the cervix is able to dilate in labour. The time taken for the cervix to dilate and efface is one factor used in reporting systems such as the Bishop score, used to recommend whether interventions such as a forceps delivery, induction, or Caesarean section should be used in childbirth. Cervical incompetence is a condition in which shortening of the cervix due to dilation and thinning occurs, before term pregnancy. Short cervical length is the strongest predictor of preterm birth. Contraception Several methods of contraception involve the cervix. Cervical diaphragms are reusable, firm-rimmed plastic devices inserted by a woman prior to intercourse that cover the cervix. Pressure against the walls of the vagina maintain the position of the diaphragm, and it acts as a physical barrier to prevent the entry of sperm into the uterus, preventing fertilisation. Cervical caps are a similar method, although they are smaller and adhere to the cervix by suction. Diaphragms and caps are often used in conjunction with spermicides. In one year, 12% of women using the diaphragm will undergo an unintended pregnancy, and with optimal use this falls to 6%. Efficacy rates are lower for the cap, with 18% of women undergoing an unintended pregnancy, and 10–13% with optimal use. Most types of progestogen-only pills are effective as a contraceptive because they thicken cervical mucus, making it difficult for sperm to pass along the cervical canal. In addition, they may also sometimes prevent ovulation. In contrast, contraceptive pills that contain both oestrogen and progesterone, the combined oral contraceptive pills, work mainly by preventing ovulation. They also thicken cervical mucus and thin the lining of the uterus, enhancing their effectiveness. Clinical significance Cancer In 2008, cervical cancer was the third-most common cancer in women worldwide, with rates varying geographically from less than one to more than 50 cases per 100,000 women. It is a leading cause of cancer-related death in poor countries, where delayed diagnosis leading to poor outcomes is common. The introduction of routine screening has resulted in fewer cases of (and deaths from) cervical cancer, however this has mainly taken place in developed countries. Most developing countries have limited or no screening, and 85% of the global burden occurring there. Cervical cancer nearly always involves human papillomavirus (HPV) infection. HPV is a virus with numerous strains, several of which predispose to precancerous changes in the cervical epithelium, particularly in the transformation zone, which is the most common area for cervical cancer to start. HPV vaccines, such as Gardasil and Cervarix, reduce the incidence of cervical cancer, by inoculating against the viral strains involved in cancer development. Potentially precancerous changes in the cervix can be detected by cervical screening, using methods including a Pap smear (also called a cervical smear), in which epithelial cells are scraped from the surface of the cervix and examined under a microscope. The colposcope, an instrument used to see a magnified view of the cervix, was invented in 1925. The Pap smear was developed by Georgios Papanikolaou in 1928. 
A LEEP procedure using a heated loop of platinum to excise a patch of cervical tissue was developed by Aurel Babes in 1927. In some parts of the developed world including the UK, the Pap test has been superseded with liquid-based cytology. A cheap, cost-effective and practical alternative in poorer countries is visual inspection with acetic acid (VIA). Instituting and sustaining cytology-based programs in these regions can be difficult, due to the need for trained personnel, equipment and facilities and difficulties in follow-up. With VIA, results and treatment can be available on the same day. As a screening test, VIA is comparable to cervical cytology in accurately identifying precancerous lesions. A result of dysplasia is usually further investigated, such as by taking a cone biopsy, which may also remove the cancerous lesion. Cervical intraepithelial neoplasia is a possible result of the biopsy and represents dysplastic changes that may eventually progress to invasive cancer. Most cases of cervical cancer are detected in this way, without having caused any symptoms. When symptoms occur, they may include vaginal bleeding, discharge, or discomfort. Inflammation Inflammation of the cervix is referred to as cervicitis. This inflammation may be of the endocervix or ectocervix. When associated with the endocervix, it is associated with a mucous vaginal discharge and sexually transmitted infections such as chlamydia and gonorrhoea. As many as half of pregnant women having a gonorrheal infection of the cervix are asymptomatic. Other causes include overgrowth of the commensal flora of the vagina. When associated with the ectocervix, inflammation may be caused by the herpes simplex virus. Inflammation is often investigated through directly visualising the cervix using a speculum, which may appear whiteish due to exudate, and by taking a Pap smear and examining for causal bacteria. Special tests may be used to identify particular bacteria. If the inflammation is due to a bacterium, then antibiotics may be given as treatment. Anatomical abnormalities Cervical stenosis is an abnormally narrow cervical canal, typically associated with trauma caused by removal of tissue for investigation or treatment of cancer, or cervical cancer itself. Diethylstilbestrol, used from 1938 to 1971 to prevent preterm labour and miscarriage, is also strongly associated with the development of cervical stenosis and other abnormalities in the daughters of the exposed women. Other abnormalities include: vaginal adenosis, in which the squamous epithelium of the ectocervix becomes columnar; cancers such as clear cell adenocarcinomas; cervical ridges and hoods; and development of a cockscomb cervix appearance, which is the condition wherein, as the name suggests, the cervix of the uterus is shaped like a cockscomb. About one third of women born to diethylstilbestrol-treated mothers (i.e. in-utero exposure) develop a cockscomb cervix. Enlarged folds or ridges of cervical stroma (fibrous tissues) and epithelium constitute a cockscomb cervix. Similarly, cockscomb polyps lining the cervix are usually considered or grouped into the same overarching description. It is in and of itself considered a benign abnormality; its presence, however is usually indicative of DES exposure, and as such women who experience these abnormalities should be aware of their increased risk of associated pathologies. 
Cervical agenesis is a rare congenital condition in which the cervix completely fails to develop, often associated with the concurrent failure of the vagina to develop. Other congenital cervical abnormalities exist, often associated with abnormalities of the vagina and uterus. The cervix may be duplicated in situations such as bicornuate uterus and uterine didelphys. Cervical polyps, which are benign overgrowths of endocervical tissue, if present, may cause bleeding, or a benign overgrowth may be present in the cervical canal. Cervical ectropion refers to the horizontal overgrowth of the endocervical columnar lining in a one-cell-thick layer over the ectocervix. In mammals Female marsupials have paired uteri and cervices. Most eutherian (placental) mammal species have a single cervix and single, bipartite or bicornuate uterus. Lagomorphs, rodents, aardvarks and hyraxes have a duplex uterus and two cervices. Lagomorphs and rodents share many morphological characteristics and are grouped together in the clade Glires. Anteaters of the family myrmecophagidae are unusual in that they lack a defined cervix; they are thought to have lost the characteristic rather than other mammals developing a cervix on more than one lineage. In domestic pigs, the cervix contains a series of five interdigitating pads that hold the boar's corkscrew-shaped penis during copulation. Etymology and pronunciation The word cervix () came to English from Latin, where it means "neck", and like its Germanic counterpart, it can refer not only to the neck [of the body] but also to an analogous narrowed part of an object. The cervix uteri (neck of the uterus) is thus the uterine cervix, but in English the word cervix used alone usually refers to it. Thus the adjective cervical may refer either to the neck (as in cervical vertebrae or cervical lymph nodes) or to the uterine cervix (as in cervical cap or cervical cancer). Latin cervix came from the Proto-Indo-European root ker-, referring to a "structure that projects". Thus, the word cervix is linguistically related to the English word "horn", the Persian word for "head" ( sar), the Greek word for "head" ( koruphe), and the Welsh and Romanian words for "deer" (, Romanian: cerb). The cervix was documented in anatomical literature in at least the time of Hippocrates; cervical cancer was first described more than 2,000 years ago, with descriptions provided by both Hippocrates and Aretaeus. However, there was some variation in word sense among early writers, who used the term to refer to both the cervix and the internal uterine orifice. The first attested use of the word to refer to the cervix of the uterus was in 1702. References Citations Cited texts External links Human female reproductive system Women's health
5739
https://en.wikipedia.org/wiki/Compiler
Compiler
In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program. There are many different types of compilers, which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language. Related software includes decompilers, which translate from a low-level language to a higher-level one; source-to-source compilers, or transpilers, which translate between high-level languages; and language rewriters, which usually translate the form of expressions without a change of language. A compiler-compiler is a compiler that produces a compiler (or part of one), often in a generic and reusable way so as to be able to produce many differing compilers. A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs to an intermediate representation, code optimization and machine-specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness. Compilers are not the only language processors used to transform source programs. An interpreter is computer software that transforms and then directly executes the indicated operations. The translation process influences the design of computer languages, which leads to a preference for compilation or interpretation. In theory, a programming language can have both a compiler and an interpreter. In practice, programming languages tend to be associated with just one (a compiler or an interpreter). History Theoretical computing concepts developed by scientists, mathematicians, and engineers formed the basis of modern digital computing development during World War II. Primitive binary languages evolved because digital devices only understand ones and zeros and the circuit patterns in the underlying machine architecture. In the late 1940s, assembly languages were created to offer a more workable abstraction of the computer architectures. The limited memory capacity of early computers led to substantial technical challenges when the first compilers were designed; the compilation process therefore needed to be divided into several small programs. The front end programs produce the analysis products used by the back end programs to generate target code. As computer technology provided more resources, compiler designs could align better with the compilation process. It is usually more productive for a programmer to use a high-level language, so the development of high-level languages followed naturally from the capabilities offered by digital computers. 
High-level languages are formal languages that are strictly defined by their syntax and semantics which form the high-level language architecture. Elements of these formal languages include: Alphabet, any finite set of symbols; String, a finite sequence of symbols; Language, any set of strings on an alphabet. The sentences in a language may be defined by a set of rules called a grammar. Backus–Naur form (BNF) describes the syntax of "sentences" of a language and was used for the syntax of Algol 60 by John Backus. The ideas derive from the context-free grammar concepts by Noam Chomsky, a linguist. "BNF and its extensions have become standard tools for describing the syntax of programming notations, and in many cases parts of compilers are generated automatically from a BNF description." In the 1940s, Konrad Zuse designed an algorithmic programming language called Plankalkül ("Plan Calculus"). While no actual implementation occurred until the 1970s, it presented concepts later seen in APL designed by Ken Iverson in the late 1950s. APL is a language for mathematical computations. High-level language design during the formative years of digital computing provided useful programming tools for a variety of applications: FORTRAN (Formula Translation) for engineering and science applications is considered to be the first high-level language. COBOL (Common Business-Oriented Language) evolved from A-0 and FLOW-MATIC to become the dominant high-level language for business applications. LISP (List Processor) for symbolic computation. Compiler technology evolved from the need for a strictly defined transformation of the high-level source program into a low-level target program for the digital computer. The compiler could be viewed as a front end to deal with the analysis of the source code and a back end to synthesize the analysis into the target code. Optimization between the front end and back end could produce more efficient target code. Some early milestones in the development of compiler technology: 1952: An Autocode compiler developed by Alick Glennie for the Manchester Mark I computer at the University of Manchester is considered by some to be the first compiled programming language. 1952: Grace Hopper's team at Remington Rand wrote the compiler for the A-0 programming language (and coined the term compiler to describe it), although the A-0 compiler functioned more as a loader or linker than the modern notion of a full compiler. 1954–1957: A team led by John Backus at IBM developed FORTRAN which is usually considered the first high-level language. In 1957, they completed a FORTRAN compiler that is generally credited as having introduced the first unambiguously complete compiler. 1959: The Conference on Data Systems Language (CODASYL) initiated development of COBOL. The COBOL design drew on A-0 and FLOW-MATIC. By the early 1960s COBOL was compiled on multiple architectures. 1958–1960: Algol 58 was the precursor to ALGOL 60. Algol 58 introduced code blocks, a key advance in the rise of structured programming. ALGOL 60 was the first language to implement nested function definitions with lexical scope. It included recursion. Its syntax was defined using BNF. ALGOL 60 inspired many languages that followed it. Tony Hoare remarked: "... it was not only an improvement on its predecessors but also on nearly all its successors." 1958–1962: John McCarthy at MIT designed LISP. The symbol processing capabilities provided useful features for artificial intelligence research. 
In 1962, LISP 1.5 release noted some tools: an interpreter written by Stephen Russell and Daniel J. Edwards, a compiler and assembler written by Tim Hart and Mike Levin. Early operating systems and software were written in assembly language. In the 1960s and early 1970s, the use of high-level languages for system programming was still controversial due to resource limitations. However, several research and industry efforts began the shift toward high-level systems programming languages, for example, BCPL, BLISS, B, and C. BCPL (Basic Combined Programming Language) designed in 1966 by Martin Richards at the University of Cambridge was originally developed as a compiler writing tool. Several compilers have been implemented, Richards' book provides insights to the language and its compiler. BCPL was not only an influential systems programming language that is still used in research but also provided a basis for the design of B and C languages. BLISS (Basic Language for Implementation of System Software) was developed for a Digital Equipment Corporation (DEC) PDP-10 computer by W. A. Wulf's Carnegie Mellon University (CMU) research team. The CMU team went on to develop BLISS-11 compiler one year later in 1970. Multics (Multiplexed Information and Computing Service), a time-sharing operating system project, involved MIT, Bell Labs, General Electric (later Honeywell) and was led by Fernando Corbató from MIT. Multics was written in the PL/I language developed by IBM and IBM User Group. IBM's goal was to satisfy business, scientific, and systems programming requirements. There were other languages that could have been considered but PL/I offered the most complete solution even though it had not been implemented. For the first few years of the Multics project, a subset of the language could be compiled to assembly language with the Early PL/I (EPL) compiler by Doug McIlory and Bob Morris from Bell Labs. EPL supported the project until a boot-strapping compiler for the full PL/I could be developed. Bell Labs left the Multics project in 1969, and developed a system programming language B based on BCPL concepts, written by Dennis Ritchie and Ken Thompson. Ritchie created a boot-strapping compiler for B and wrote Unics (Uniplexed Information and Computing Service) operating system for a PDP-7 in B. Unics eventually became spelled Unix. Bell Labs started the development and expansion of C based on B and BCPL. The BCPL compiler had been transported to Multics by Bell Labs and BCPL was a preferred language at Bell Labs. Initially, a front-end program to Bell Labs' B compiler was used while a C compiler was developed. In 1971, a new PDP-11 provided the resource to define extensions to B and rewrite the compiler. By 1973 the design of C language was essentially complete and the Unix kernel for a PDP-11 was rewritten in C. Steve Johnson started development of Portable C Compiler (PCC) to support retargeting of C compilers to new machines. Object-oriented programming (OOP) offered some interesting possibilities for application development and maintenance. OOP concepts go further back but were part of LISP and Simula language science. Bell Labs became interested in OOP with the development of C++. C++ was first used in 1980 for systems programming. The initial design leveraged C language systems programming capabilities with Simula concepts. Object-oriented facilities were added in 1983. The Cfront program implemented a C++ front-end for C84 language compiler. 
In subsequent years several C++ compilers were developed as C++ popularity grew. In many application domains, the idea of using a higher-level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers became more complex. DARPA (Defense Advanced Research Projects Agency) sponsored a compiler project with Wulf's CMU research team in 1970. The Production Quality Compiler-Compiler PQCC design would produce a Production Quality Compiler (PQC) from formal definitions of source language and the target. PQCC tried to extend the term compiler-compiler beyond the traditional meaning as a parser generator (e.g., Yacc) without much success. PQCC might more properly be referred to as a compiler generator. PQCC research into code generation process sought to build a truly automatic compiler-writing system. The effort discovered and designed the phase structure of the PQC. The BLISS-11 compiler provided the initial structure. The phases included analyses (front end), intermediate translation to virtual machine (middle end), and translation to the target (back end). TCOL was developed for the PQCC research to handle language specific constructs in the intermediate representation. Variations of TCOL supported various languages. The PQCC project investigated techniques of automated compiler construction. The design concepts proved useful in optimizing compilers and compilers for the (since 1995, object-oriented) programming language Ada. The Ada STONEMAN document formalized the program support environment (APSE) along with the kernel (KAPSE) and minimal (MAPSE). An Ada interpreter NYU/ED supported development and standardization efforts with the American National Standards Institute (ANSI) and the International Standards Organization (ISO). Initial Ada compiler development by the U.S. Military Services included the compilers in a complete integrated design environment along the lines of the STONEMAN document. Army and Navy worked on the Ada Language System (ALS) project targeted to DEC/VAX architecture while the Air Force started on the Ada Integrated Environment (AIE) targeted to IBM 370 series. While the projects did not provide the desired results, they did contribute to the overall effort on Ada development. Other Ada compiler efforts got underway in Britain at the University of York and in Germany at the University of Karlsruhe. In the U. S., Verdix (later acquired by Rational) delivered the Verdix Ada Development System (VADS) to the Army. VADS provided a set of development tools including a compiler. Unix/VADS could be hosted on a variety of Unix platforms such as DEC Ultrix and the Sun 3/60 Solaris targeted to Motorola 68020 in an Army CECOM evaluation. There were soon many Ada compilers available that passed the Ada Validation tests. The Free Software Foundation GNU project developed the GNU Compiler Collection (GCC) which provides a core capability to support multiple languages and targets. The Ada version GNAT is one of the most widely used Ada compilers. GNAT is free but there is also commercial support, for example, AdaCore, was founded in 1994 to provide commercial software solutions for Ada. GNAT Pro includes the GNU GCC based GNAT with a tool suite to provide an integrated development environment. High-level languages continued to drive compiler research and development. Focus areas included optimization and automatic code generation. 
Trends in programming languages and development environments influenced compiler technology. More compilers became included in language distributions (PERL, Java Development Kit) and as a component of an IDE (VADS, Eclipse, Ada Pro). The interrelationship and interdependence of technologies grew. The advent of web services promoted growth of web languages and scripting languages. Scripts trace back to the early days of Command Line Interfaces (CLI) where the user could enter commands to be executed by the system. User Shell concepts developed with languages to write shell programs. Early Windows designs offered a simple batch programming capability. The conventional transformation of these language used an interpreter. While not widely used, Bash and Batch compilers have been written. More recently sophisticated interpreted languages became part of the developers tool kit. Modern scripting languages include PHP, Python, Ruby and Lua. (Lua is widely used in game development.) All of these have interpreter and compiler support. "When the field of compiling began in the late 50s, its focus was limited to the translation of high-level language programs into machine code ... The compiler field is increasingly intertwined with other disciplines including computer architecture, programming languages, formal methods, software engineering, and computer security." The "Compiler Research: The Next 50 Years" article noted the importance of object-oriented languages and Java. Security and parallel computing were cited among the future research targets. Compiler construction A compiler implements a formal transformation from a high-level source program to a low-level target program. Compiler design can define an end-to-end solution or tackle a defined subset that interfaces with other compilation tools e.g. preprocessors, assemblers, linkers. Design requirements include rigorously defined interfaces both internally between compiler components and externally between supporting toolsets. In the early days, the approach taken to compiler design was directly affected by the complexity of the computer language to be processed, the experience of the person(s) designing it, and the resources available. Resource limitations led to the need to pass through the source code more than once. A compiler for a relatively simple language written by one person might be a single, monolithic piece of software. However, as the source language grows in complexity the design may be split into a number of interdependent phases. Separate phases provide design improvements that focus development on the functions in the compilation process. One-pass versus multi-pass compilers Classifying compilers by number of passes has its background in the hardware resource limitations of computers. Compiling involves performing much work and early computers did not have enough memory to contain one program that did all of this work. So compilers were split up into smaller programs which each made a pass over the source (or some representation of it) performing some of the required analysis and translations. The ability to compile in a single pass has classically been seen as a benefit because it simplifies the job of writing a compiler and one-pass compilers generally perform compilations faster than multi-pass compilers. Thus, partly driven by the resource limitations of early systems, many early languages were specifically designed so that they could be compiled in a single pass (e.g., Pascal). 
In some cases, the design of a language feature may require a compiler to perform more than one pass over the source. For instance, consider a declaration appearing on line 20 of the source which affects the translation of a statement appearing on line 10. In this case, the first pass needs to gather information about declarations appearing after statements that they affect, with the actual translation happening during a subsequent pass. The disadvantage of compiling in a single pass is that it is not possible to perform many of the sophisticated optimizations needed to generate high quality code. It can be difficult to count exactly how many passes an optimizing compiler makes. For instance, different phases of optimization may analyse one expression many times but only analyse another expression once. Splitting a compiler up into small programs is a technique used by researchers interested in producing provably correct compilers. Proving the correctness of a set of small programs often requires less effort than proving the correctness of a larger, single, equivalent program. Three-stage compiler structure Regardless of the exact number of phases in the compiler design, the phases can be assigned to one of three stages. The stages include a front end, a middle end, and a back end. The front end scans the input and verifies syntax and semantics according to a specific source language. For statically typed languages it performs type checking by collecting type information. If the input program is syntactically incorrect or has a type error, it generates error and/or warning messages, usually identifying the location in the source code where the problem was detected; in some cases the actual error may be (much) earlier in the program. Aspects of the front end include lexical analysis, syntax analysis, and semantic analysis. The front end transforms the input program into an intermediate representation (IR) for further processing by the middle end. This IR is usually a lower-level representation of the program with respect to the source code. The middle end performs optimizations on the IR that are independent of the CPU architecture being targeted. This source code/machine code independence is intended to enable generic optimizations to be shared between versions of the compiler supporting different languages and target processors. Examples of middle end optimizations are removal of useless (dead-code elimination) or unreachable code (reachability analysis), discovery and propagation of constant values (constant propagation), relocation of computation to a less frequently executed place (e.g., out of a loop), or specialization of computation based on the context, eventually producing the "optimized" IR that is used by the back end. The back end takes the optimized IR from the middle end. It may perform more analysis, transformations and optimizations that are specific for the target CPU architecture. The back end generates the target-dependent assembly code, performing register allocation in the process. The back end performs instruction scheduling, which re-orders instructions to keep parallel execution units busy by filling delay slots. Although most optimization problems are NP-hard, heuristic techniques for solving them are well-developed and implemented in production-quality compilers. Typically the output of a back end is machine code specialized for a particular processor and operating system. 
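As a concrete illustration of this three-stage division of labour, the following sketch chains a front end, a target-independent middle end, and two interchangeable back ends around one shared intermediate representation. It is written in Python for brevity; the toy source language (sums of integer literals), the IR instructions, and every name in it (front_end, middle_end, back_end_stack, back_end_accumulator) are invented for the example and do not describe the design of any real compiler.

# A toy compiler with the three-stage structure; all names and the IR are
# illustrative inventions, not taken from any real compiler.
from typing import List, Tuple

IR = List[Tuple]  # shared intermediate representation: ("push", n) and ("add",) instructions


def front_end(source: str) -> IR:
    """Lexes and parses a sum of integer literals, e.g. "1 + 0 + 41", into the IR."""
    literals = [int(part) for part in source.split("+")]  # crude lexing, parsing and checking
    ir: IR = [("push", literals[0])]
    for lit in literals[1:]:
        ir += [("push", lit), ("add",)]
    return ir


def middle_end(ir: IR) -> IR:
    """Target-independent peephole pass: "push 0; add" is a no-op, so drop it."""
    out: IR = []
    for op in ir:
        if op == ("add",) and out and out[-1] == ("push", 0):
            out.pop()
        else:
            out.append(op)
    return out


def back_end_stack(ir: IR) -> str:
    """Back end for a hypothetical stack machine: one instruction per IR operation."""
    return "\n".join(" ".join(str(field) for field in op) for op in ir)


def back_end_accumulator(ir: IR) -> str:
    """Back end for a hypothetical single-accumulator machine.

    Because this IR is only ever a chain of additions, each "push" after the
    first can be lowered directly to an add-immediate instruction.
    """
    lines: List[str] = []
    for op in ir:
        if op[0] == "push":
            lines.append(f"addi #{op[1]}" if lines else f"load #{op[1]}")
    return "\n".join(lines)


def compile_source(source: str, back_end) -> str:
    return back_end(middle_end(front_end(source)))  # front end -> middle end -> back end


print(compile_source("1 + 0 + 41", back_end_stack))        # push 1 / push 41 / add
print(compile_source("1 + 0 + 41", back_end_accumulator))  # load #1 / addi #41

The point of the sketch is only that the two back ends are interchangeable: both consume the same IR produced by the shared front end and middle end, so the target-independent optimization is written once.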
This front/middle/back-end approach makes it possible to combine front ends for different languages with back ends for different CPUs while sharing the optimizations of the middle end. Practical examples of this approach are the GNU Compiler Collection, Clang (LLVM-based C/C++ compiler), and the Amsterdam Compiler Kit, which have multiple front-ends, shared optimizations and multiple back-ends. Front end The front end analyzes the source code to build an internal representation of the program, called the intermediate representation (IR). It also manages the symbol table, a data structure mapping each symbol in the source code to associated information such as location, type and scope. While the frontend can be a single monolithic function or program, as in a scannerless parser, it was traditionally implemented and analyzed as several phases, which may execute sequentially or concurrently. This method is favored due to its modularity and separation of concerns. Most commonly, the frontend is broken into three phases: lexical analysis (also known as lexing or scanning), syntax analysis (also known as parsing), and semantic analysis. Lexing and parsing comprise the syntactic analysis (word syntax and phrase syntax, respectively), and in simple cases, these modules (the lexer and parser) can be automatically generated from a grammar for the language, though in more complex cases these require manual modification. The lexical grammar and phrase grammar are usually context-free grammars, which simplifies analysis significantly, with context-sensitivity handled at the semantic analysis phase. The semantic analysis phase is generally more complex and written by hand, but can be partially or fully automated using attribute grammars. These phases themselves can be further broken down: lexing as scanning and evaluating, and parsing as building a concrete syntax tree (CST, parse tree) and then transforming it into an abstract syntax tree (AST, syntax tree). In some cases additional phases are used, notably line reconstruction and preprocessing, but these are rare. The main phases of the front end include the following: Line reconstruction converts the input character sequence to a canonical form ready for the parser. Languages which strop their keywords or allow arbitrary spaces within identifiers require this phase. The top-down, recursive-descent, table-driven parsers used in the 1960s typically read the source one character at a time and did not require a separate tokenizing phase. Atlas Autocode and Imp (and some implementations of ALGOL and Coral 66) are examples of stropped languages whose compilers would have a Line Reconstruction phase. Preprocessing supports macro substitution and conditional compilation. Typically the preprocessing phase occurs before syntactic or semantic analysis; e.g. in the case of C, the preprocessor manipulates lexical tokens rather than syntactic forms. However, some languages such as Scheme support macro substitutions based on syntactic forms. Lexical analysis (also known as lexing or tokenization) breaks the source code text into a sequence of small pieces called lexical tokens. This phase can be divided into two stages: the scanning, which segments the input text into syntactic units called lexemes and assigns them a category; and the evaluating, which converts lexemes into a processed value. A token is a pair consisting of a token name and an optional token value. 
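As a concrete illustration of the scanning and evaluating stages, the following minimal lexer is a Python sketch; the token categories and regular expressions are invented for this toy language rather than taken from any real compiler.

```python
import re

# A toy lexical grammar: each token category is described by a regular expression.
TOKEN_SPEC = [
    ("NUMBER",     r"\d+"),
    ("IDENTIFIER", r"[A-Za-z_]\w*"),
    ("OPERATOR",   r"[+\-*/=]"),
    ("SEPARATOR",  r"[();]"),
    ("WHITESPACE", r"\s+"),
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def lex(source: str):
    """Scan the source into lexemes, then evaluate each lexeme into a token value."""
    for match in MASTER_RE.finditer(source):
        name = match.lastgroup
        lexeme = match.group()
        if name == "WHITESPACE":
            continue                        # whitespace carries no token in this toy language
        value = int(lexeme) if name == "NUMBER" else lexeme   # the "evaluating" stage
        yield (name, value)                 # a token: (token name, token value)

print(list(lex("count = count + 42;")))
# [('IDENTIFIER', 'count'), ('OPERATOR', '='), ('IDENTIFIER', 'count'),
#  ('OPERATOR', '+'), ('NUMBER', 42), ('SEPARATOR', ';')]
```

A real lexer would also report an error for any character that matches none of the categories; this sketch silently skips such input.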
Common token categories may include identifiers, keywords, separators, operators, literals and comments, although the set of token categories varies in different programming languages. The lexeme syntax is typically a regular language, so a finite state automaton constructed from a regular expression can be used to recognize it. The software doing lexical analysis is called a lexical analyzer. This may not be a separate step—it can be combined with the parsing step in scannerless parsing, in which case parsing is done at the character level, not the token level. Syntax analysis (also known as parsing) involves parsing the token sequence to identify the syntactic structure of the program. This phase typically builds a parse tree, which replaces the linear sequence of tokens with a tree structure built according to the rules of a formal grammar which define the language's syntax. The parse tree is often analyzed, augmented, and transformed by later phases in the compiler. Semantic analysis adds semantic information to the parse tree and builds the symbol table. This phase performs semantic checks such as type checking (checking for type errors), or object binding (associating variable and function references with their definitions), or definite assignment (requiring all local variables to be initialized before use), rejecting incorrect programs or issuing warnings. Semantic analysis usually requires a complete parse tree, meaning that this phase logically follows the parsing phase, and logically precedes the code generation phase, though it is often possible to fold multiple phases into one pass over the code in a compiler implementation. Middle end The middle end, also known as optimizer, performs optimizations on the intermediate representation in order to improve the performance and the quality of the produced machine code. The middle end contains those optimizations that are independent of the CPU architecture being targeted. The main phases of the middle end include the following: Analysis: This is the gathering of program information from the intermediate representation derived from the input; data-flow analysis is used to build use-define chains, together with dependence analysis, alias analysis, pointer analysis, escape analysis, etc. Accurate analysis is the basis for any compiler optimization. The control-flow graph of every compiled function and the call graph of the program are usually also built during the analysis phase. Optimization: the intermediate language representation is transformed into functionally equivalent but faster (or smaller) forms. Popular optimizations are inline expansion, dead-code elimination, constant propagation, loop transformation and even automatic parallelization. Compiler analysis is the prerequisite for any compiler optimization, and they tightly work together. For example, dependence analysis is crucial for loop transformation. The scope of compiler analysis and optimizations vary greatly; their scope may range from operating within a basic block, to whole procedures, or even the whole program. There is a trade-off between the granularity of the optimizations and the cost of compilation. For example, peephole optimizations are fast to perform during compilation but only affect a small local fragment of the code, and can be performed independently of the context in which the code fragment appears. 
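A peephole pass of the kind just described can be sketched as a handful of local rewrite rules applied over a short window of instructions. The pseudo-instructions below assume a hypothetical accumulator machine (where STORE leaves the value in the accumulator) and are not tied to any real instruction set.

```python
def peephole(instructions):
    """Slide a two-instruction window over the code and apply local rewrite rules."""
    out = []
    i = 0
    while i < len(instructions):
        cur = instructions[i]
        nxt = instructions[i + 1] if i + 1 < len(instructions) else None

        # Rule 1: a STORE immediately followed by a LOAD of the same location is redundant,
        # because the accumulator already holds that value on this hypothetical machine.
        if (nxt is not None and cur[0] == "STORE"
                and nxt[0] == "LOAD" and nxt[1] == cur[1]):
            out.append(cur)      # keep the store, drop the useless reload
            i += 2
            continue
        # Rule 2: adding the constant 0 has no effect, so the instruction can be deleted.
        if cur[0] == "ADD_CONST" and cur[1] == 0:
            i += 1
            continue
        out.append(cur)
        i += 1
    return out

code = [("LOAD", "x"), ("ADD_CONST", 0), ("STORE", "y"), ("LOAD", "y"), ("RETURN",)]
print(peephole(code))   # [('LOAD', 'x'), ('STORE', 'y'), ('RETURN',)]
```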
In contrast, interprocedural optimization requires more compilation time and memory space, but enable optimizations that are only possible by considering the behavior of multiple functions simultaneously. Interprocedural analysis and optimizations are common in modern commercial compilers from HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The free software GCC was criticized for a long time for lacking powerful interprocedural optimizations, but it is changing in this respect. Another open source compiler with full analysis and optimization infrastructure is Open64, which is used by many organizations for research and commercial purposes. Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default. Users have to use compilation options to explicitly tell the compiler which optimizations should be enabled. Back end The back end is responsible for the CPU architecture specific optimizations and for code generation. The main phases of the back end include the following: Machine dependent optimizations: optimizations that depend on the details of the CPU architecture that the compiler targets. A prominent example is peephole optimizations, which rewrites short sequences of assembler instructions into more efficient instructions. Code generation: the transformed intermediate language is translated into the output language, usually the native machine language of the system. This involves resource and storage decisions, such as deciding which variables to fit into registers and memory and the selection and scheduling of appropriate machine instructions along with their associated addressing modes (see also Sethi–Ullman algorithm). Debug data may also need to be generated to facilitate debugging. Compiler correctness Compiler correctness is the branch of software engineering that deals with trying to show that a compiler behaves according to its language specification. Techniques include developing the compiler using formal methods and using rigorous testing (often called compiler validation) on an existing compiler. Compiled versus interpreted languages Higher-level programming languages usually appear with a type of translation in mind: either designed as compiled language or interpreted language. However, in practice there is rarely anything about a language that requires it to be exclusively compiled or exclusively interpreted, although it is possible to design languages that rely on re-interpretation at run time. The categorization usually reflects the most popular or widespread implementations of a language – for instance, BASIC is sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC compilers and C interpreters. Interpretation does not replace compilation completely. It only hides it from the user and makes it gradual. Even though an interpreter can itself be interpreted, a set of directly executed machine instructions is needed somewhere at the bottom of the execution stack (see machine language). Furthermore, for optimization compilers can contain interpreter functionality, and interpreters may include ahead of time compilation techniques. For example, where an expression can be executed during compilation and the results inserted into the output program, then it prevents it having to be recalculated each time the program runs, which can greatly speed up the final program. 
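To illustrate the point that compilation and interpretation are implementation strategies rather than properties of a language, the sketch below gives the same toy expression language both a direct interpreter and a minimal "compiler" whose target language is simply a Python function; the language and all names are hypothetical.

```python
# A toy prefix expression language: ("add", e1, e2), ("mul", e1, e2), numbers, or variable names.

def interpret(expr, env):
    """Evaluate the expression tree directly, looking up variables in env."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, a, b = expr
    x, y = interpret(a, env), interpret(b, env)
    return x + y if op == "add" else x * y

def compile_expr(expr):
    """Translate the expression tree once into Python source, then into a callable."""
    def emit(e):
        if isinstance(e, (int, float)):
            return repr(e)
        if isinstance(e, str):
            return f"env[{e!r}]"
        op, a, b = e
        sign = "+" if op == "add" else "*"
        return f"({emit(a)} {sign} {emit(b)})"
    return eval(f"lambda env: {emit(expr)}")   # the "target language" here is just Python

program = ("add", "x", ("mul", 3, "y"))
env = {"x": 1, "y": 2}
print(interpret(program, env))        # 7, evaluated on the fly
print(compile_expr(program)(env))     # 7, same semantics, but translated once ahead of time
```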
Modern trends toward just-in-time compilation and bytecode interpretation at times blur the traditional categorizations of compilers and interpreters even further. Some language specifications spell out that implementations must include a compilation facility; for example, Common Lisp. However, there is nothing inherent in the definition of Common Lisp that stops it from being interpreted. Other languages have features that are very easy to implement in an interpreter, but make writing a compiler much harder; for example, APL, SNOBOL4, and many scripting languages allow programs to construct arbitrary source code at runtime with regular string operations, and then execute that code by passing it to a special evaluation function. To implement these features in a compiled language, programs must usually be shipped with a runtime library that includes a version of the compiler itself. Types One classification of compilers is by the platform on which their generated code executes. This is known as the target platform. A native or hosted compiler is one whose output is intended to directly run on the same type of computer and operating system that the compiler itself runs on. The output of a cross compiler is designed to run on a different platform. Cross compilers are often used when developing software for embedded systems that are not intended to support a software development environment. The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same platform as the compiler that produced it. For this reason, such compilers are not usually classified as native or cross compilers. The lower level language that is the target of a compiler may itself be a high-level programming language. C, viewed by some as a sort of portable assembly language, is frequently the target language of such compilers. For example, Cfront, the original compiler for C++, used C as its target language. The C code generated by such a compiler is usually not intended to be readable and maintained by humans, so indent style and creating pretty C intermediate code are ignored. Some of the features of C that make it a good target language include the #line directive, which can be generated by the compiler to support debugging of the original source, and the wide platform support available with C compilers. While a common compiler type outputs machine code, there are many other types: Source-to-source compilers are a type of compiler that takes a high-level language as its input and outputs a high-level language. For example, an automatic parallelizing compiler will frequently take in a high-level language program as an input and then transform the code and annotate it with parallel code annotations (e.g. OpenMP) or language constructs (e.g. Fortran's DOALL statements). Other terms for a source-to-source compiler are transcompiler or transpiler. Bytecode compilers compile to assembly language of a theoretical machine, like some Prolog implementations This Prolog machine is also known as the Warren Abstract Machine (or WAM). Bytecode compilers for Java, Python are also examples of this category. Just-in-time compilers (JIT compiler) defer compilation until runtime. JIT compilers exist for many modern languages including Python, JavaScript, Smalltalk, Java, Microsoft .NET's Common Intermediate Language (CIL) and others. A JIT compiler generally runs inside an interpreter. 
When the interpreter detects that a code path is "hot", meaning it is executed frequently, the JIT compiler will be invoked and compile the "hot" code for increased performance. For some languages, such as Java, applications are first compiled using a bytecode compiler and delivered in a machine-independent intermediate representation. A bytecode interpreter executes the bytecode, but the JIT compiler will translate the bytecode to machine code when increased performance is necessary. Hardware compilers (also known as synthesis tools) are compilers whose input is a hardware description language and whose output is a description, in the form of a netlist or otherwise, of a hardware configuration. The output of these compilers target computer hardware at a very low level, for example a field-programmable gate array (FPGA) or structured application-specific integrated circuit (ASIC). Such compilers are said to be hardware compilers, because the source code they compile effectively controls the final configuration of the hardware and how it operates. The output of the compilation is only an interconnection of transistors or lookup tables. An example of hardware compiler is XST, the Xilinx Synthesis Tool used for configuring FPGAs. Similar tools are available from Altera, Synplicity, Synopsys and other hardware vendors. An assembler is a program that compiles human readable assembly language to machine code, the actual instructions executed by hardware. The inverse program that translates machine code to assembly language is called a disassembler. A program that translates from a low-level language to a higher level one is a decompiler. A program that translates into an object code format that is not supported on the compilation machine is called a cross compiler and is commonly used to prepare code for execution on embedded software applications. A program that rewrites object code back into the same type of object code while applying optimisations and transformations is a binary recompiler. See also Abstract interpretation Bottom-up parsing Compile and go system Compile farm List of compilers Metacompilation References Further reading (2+xiv+270+6 pages) Compiler textbook references A collection of references to mainstream Compiler Construction Textbooks External links Incremental Approach to Compiler Constructiona PDF tutorial explaining the key conceptual difference between compilers and interpreters Let's Build a Compiler, by Jack Crenshaw American inventions Compiler construction Computer libraries Programming language implementation Utility software types
5742
https://en.wikipedia.org/wiki/Castrato
Castrato
A castrato (Italian, plural: castrati) is a male singer who underwent castration before puberty in order to retain a singing voice equivalent to that of a soprano, mezzo-soprano, or contralto. The voice can also occur in one who, due to an endocrinological condition, never reaches sexual maturity. Castration before puberty (or in its early stages) prevents the larynx from being transformed by the normal physiological events of puberty. As a result, the vocal range of prepubescence (shared by both sexes) is largely retained, and the voice develops into adulthood in a unique way. Prepubescent castration for this purpose diminished greatly in the late 18th century. Methods of castration used to terminate the onset of puberty varied. They involved using opium to medically induce a coma, then submerging the boy in an ice or milk bath, where the procedure was performed: severing the vas deferens (similar to a vasectomy), twisting the testicles until they atrophied, or removing them completely by surgical cutting (although complete removal was not a popularly used technique). The procedure was usually performed on boys around the age of 8–10, and recovery took around two weeks. The means by which future singers were prepared could lead to premature death. To prevent the child from experiencing the intense pain of castration, many were inadvertently administered lethal doses of opium or some other narcotic, or were killed by overlong compression of the carotid artery in the neck (intended to render them unconscious during the castration procedure). It is not known specifically where these procedures took place. During the 18th century itself, the music historian Charles Burney was sent from pillar to post in search of places where the operation was carried out: I enquired throughout Italy at what place boys were chiefly qualified for singing by castration, but could get no certain intelligence. I was told at Milan that it was at Venice; at Venice that it was at Bologna; but at Bologna the fact was denied, and I was referred to Florence; from Florence to Rome, and from Rome I was sent to Naples ... it is said that there are shops in Naples with this inscription: 'QUI SI CASTRANO RAGAZZI' ("Here boys are castrated"); but I was utterly unable to see or hear of any such shops during my residence in that city. As the castrato's body grew, his lack of testosterone meant that his epiphyses (bone-joints) did not harden in the normal manner. Thus the limbs of the castrati often grew unusually long, as did their ribs. This, combined with intensive training, gave them unrivalled lung-power and breath capacity. Operating through small, child-sized vocal cords, their voices were also extraordinarily flexible, and quite different from the equivalent adult female voice. Their vocal range was higher than that of the uncastrated adult male. Listening to the only surviving recordings of a castrato (see below), one can hear that the lower part of the voice sounds like a "super-high" tenor, with a more falsetto-like upper register above that. Castrati were rarely referred to as such: in the 18th century, the euphemism musico (pl. musici) was much more generally used, although it usually carried derogatory implications; another synonym was evirato, literally meaning "emasculated". Eunuch is a more general term since, historically, many eunuchs were castrated after puberty and thus the castration had no impact on their voices. 
History Castration as a means of subjugation, enslavement or other punishment has a very long history, dating back to ancient Sumer. In a Western context, eunuch singers are known to have existed from the early Byzantine Empire. In Constantinople around 400 AD, the empress Aelia Eudoxia had a eunuch choir-master, Brison, who may have established the use of castrati in Byzantine choirs, though whether Brison himself was a singer and whether he had colleagues who were eunuch singers is not certain. By the 9th century, eunuch singers were well-known (not least in the choir of Hagia Sophia) and remained so until the sack of Constantinople by the Western forces of the Fourth Crusade in 1204. Their fate from then until their reappearance in Italy more than three hundred years later is not clear. It seems likely that the Spanish tradition of soprano falsettists may have hidden castrati. Much of Spain was under Muslim rulers during the Middle Ages, and castration had a history going back to the ancient Near East. Stereotypically, eunuchs served as harem guards, but they were also valued as high-level political appointees since they could not start a dynasty which would threaten the ruler. European classical tradition Castrati first appeared in Italy in the mid-16th century, though at first the terms describing them were not always clear. The phrase soprano maschio (male soprano), which could also mean falsettist, occurs in the Due Dialoghi della Musica (Two dialogues upon music) of Luigi Dentice, an Oratorian priest, published in Rome in 1553. On 9 November 1555 Cardinal Ippolito II d'Este (famed as the builder of the Villa d'Este at Tivoli), wrote to Guglielmo Gonzaga, Duke of Mantua (1538–1587), that he has heard that the Duke was interested in his cantoretti (little singers) and offered to send him two, so that he could choose one for his own service. This is a rare term but probably does equate to castrato. The Cardinal's nephew, Alfonso II d'Este, Duke of Ferrara, was another early enthusiast, inquiring about castrati in 1556. There were certainly castrati in the Sistine Chapel choir in 1558, although not described as such: on 27 April of that year, Hernando Bustamante, a Spaniard from Palencia, was admitted (the first castrati so termed who joined the Sistine choir were Pietro Paolo Folignato and Girolamo Rossini, admitted in 1599). Surprisingly, considering the later French distaste for castrati, they certainly existed in France at this time also, being known of in Paris, Orléans, Picardy and Normandy, though they were not abundant: the King of France himself had difficulty in obtaining them. By 1574, there were castrati in the Ducal court chapel at Munich, where the Kapellmeister (music director) was the famous Orlando di Lasso. In 1589, by the bull Cum pro nostro pastorali munere, Pope Sixtus V re-organised the choir of St Peter's, Rome specifically to include castrati. Thus the castrati came to supplant both boys (whose voices broke after only a few years) and falsettists (whose voices were weaker and less reliable) from the top line in such choirs. Women were banned by the Pauline dictum mulieres in ecclesiis taceant ("let women keep silent in the churches"; see I Corinthians, ch. 14, v. 34). The Italian castrati were often rumored to have unusually long lives, but a 1993 study found that their lifespans were average. Opera Although the castrato (or musico) predates opera, there is some evidence that castrati had parts in the earliest operas. 
In the first performance of Monteverdi's Orfeo (1607), for example, they played subsidiary roles, including Speranza and (possibly) that of Euridice. Although female roles were performed by castrati in some of the papal states, this was increasingly rare; by 1680, they had supplanted "normal" male voices in lead roles, and retained their position as primo uomo for about a hundred years; an Italian opera not featuring at least one renowned castrato in a lead part would be doomed to fail. Because of the popularity of Italian opera throughout 18th-century Europe (except France), singers such as Ferri, Farinelli, Senesino and Pacchierotti became the first operatic superstars, earning enormous fees and hysterical public adulation. The strictly hierarchical organisation of opera seria favoured their high voices as symbols of heroic virtue, though they were frequently mocked for their strange appearance and bad acting. In his 1755 Reflections upon theatrical expression in tragedy, Roger Pickering wrote: Farinelli drew every Body to the Haymarket. What a Pipe! What Modulation! What Extasy to the Ear! But, Heavens! What Clumsiness! What Stupidity! What Offence to the Eye! Reader, if of the City, thou mayest probably have seen in the Fields of Islington or Mile-End or, If thou art in the environs of St James', thou must have observed in the Park with what Ease and Agility a cow, heavy with calf, has rose up at the command of the Milk-woman's foot: thus from the mossy bank sprang the DIVINE FARINELLI.The training of the boys was rigorous. The regimen of one singing school in Rome (c. 1700) consisted of one hour of singing difficult and awkward pieces, one hour practising trills, one hour practising ornamented passaggi, one hour of singing exercises in their teacher's presence and in front of a mirror so as to avoid unnecessary movement of the body or facial grimaces, and one hour of literary study; all this, moreover, before lunch. After, half an hour would be devoted to musical theory, another to writing counterpoint, an hour copying down the same from dictation, and another hour of literary study. During the remainder of the day, the young castrati had to find time to practice their harpsichord playing, and to compose vocal music, either sacred or secular depending on their inclination. This demanding schedule meant that, if sufficiently talented, they were able to make a debut in their mid-teens with a perfect technique and a voice of a flexibility and power no woman or ordinary male singer could match. In the 1720s and 1730s, at the height of the craze for these voices, it has been estimated that upwards of 4,000 boys were castrated annually in the service of art. Many came from poor homes and were castrated by their parents in the hope that their child might be successful and lift them from poverty (this was the case with Senesino). There are, though, records of some young boys asking to be operated on to preserve their voices (e.g. Caffarelli, who was from a wealthy family: his grandmother gave him the income from two vineyards to pay for his studies). Caffarelli was also typical of many castrati in being famous for tantrums on and off-stage, and for amorous adventures with noble ladies. Some, as described by Casanova, preferred gentlemen (noble or otherwise). 
Only a small percentage of boys castrated to preserve their voices had successful careers on the operatic stage; the better "also-rans" sang in cathedral or church choirs, but because of their marked appearance and the ban on their marrying, there was little room for them in society outside a musical context. The castrati came in for a great amount of scurrilous and unkind abuse, and as their fame increased, so did the hatred of them. They were often castigated as malign creatures who lured men into homosexuality. There were homosexual castrati, as Casanova's accounts of 18th-century Italy bear witness. He mentions meeting an abbé whom he took for a girl in disguise, only later discovering that "she" was a famous castrato. In Rome in 1762 he attended a performance at which the prima donna was a castrato, "the favourite pathic" of Cardinal Borghese, who dined every evening with his protector. From his behaviour on stage "it was obvious that he hoped to inspire the love of those who liked him as a man, and probably would not have done so as a woman". Decline By the late 18th century, changes in operatic taste and social attitudes spelled the end for castrati. They lingered on past the end of the ancien régime (which their style of opera parallels), and two of their number, Pacchierotti and Crescentini, performed before Napoleon. The last great operatic castrato was Giovanni Battista Velluti (1781–1861), who performed the last operatic castrato role ever written: Armando in Il crociato in Egitto by Meyerbeer (Venice, 1824). Soon after this they were replaced definitively as the first men of the operatic stage by a new breed of heroic tenor, as first incarnated by the Frenchman Gilbert-Louis Duprez, the earliest so-called "king of the high Cs". His successors have included such singers as Enrico Tamberlik, Jean de Reszke, Francesco Tamagno, Enrico Caruso, Giovanni Martinelli, Beniamino Gigli, Jussi Björling, Franco Corelli and Luciano Pavarotti, among others. After the unification of Italy in 1861, "eviration" was officially made illegal (the new Italian state had adopted the previous penal code of the Kingdom of Sardinia which expressly forbade the practice). In 1878, Pope Leo XIII prohibited the hiring of new castrati by the church: only in the Sistine Chapel and in other papal basilicas in Rome did a few castrati linger. A group photo of the Sistine Choir taken in 1898 shows that by then only six remained (plus the Direttore Perpetuo, the fine soprano castrato Domenico Mustafà), and in 1902 a ruling was extracted from Pope Leo that no further castrati should be admitted. The official end to the castrati came on St. Cecilia's Day, 22 November 1903, when the new pope, Pius X, issued his motu proprio, Tra le Sollecitudini ('Amongst the Cares'), which contained this instruction: "Whenever ... it is desirable to employ the high voices of sopranos and contraltos, these parts must be taken by boys, according to the most ancient usage of the Church." The last Sistine castrato to survive was Alessandro Moreschi, the only castrato to have made solo recordings. While an interesting historical record, these discs of his give us only a glimpse of the castrato voice – although he had been renowned as "The Angel of Rome" at the beginning of his career, some would say he was past his prime when the recordings were made in 1902 and 1904 and he never attempted to sing opera. 
Domenico Salvatori, a castrato who was contemporary with Moreschi, made some ensemble recordings with him but has no surviving solo recordings. The recording technology of the day was not of modern high quality. Salvatori died in 1909; Moreschi retired officially in March 1913, and died in 1922. The Catholic Church's involvement in the castrato phenomenon has long been controversial, and there have recently been calls for it to issue an official apology for its role. As early as 1748, Pope Benedict XIV tried to ban castrati from churches, but such was their popularity at the time that he realised that doing so might result in a drastic decline in church attendance. The rumours of another castrato sequestered in the Vatican for the personal delectation of the Pontiff until as recently as 1959 have been proven false. The singer in question was a pupil of Moreschi's, Domenico Mancini, such a successful imitator of his teacher's voice that even Lorenzo Perosi, Direttore Perpetuo of the Sistine Choir from 1898 to 1956 and a strenuous opponent of the practice of castrato singers, thought he was a castrato. Mancini was in fact a moderately skilful falsettist and professional double bass player. Modern castrati and similar voices A male can retain his child voice if it never changes during puberty. The retained voice can be the treble voice shared by both sexes in childhood and is the same as a boy soprano voice. But as evidence shows, many castrati, such as Senesino and Caffarelli, were actually altos (mezzo-soprano) – not sopranos. So-called "natural" or "endocrinological castrati" are born with hormonal anomalies, such as Klinefelter's syndrome and Kallmann's syndrome, or have undergone unusual physical or medical events during their early lives that reproduce the vocal effects of castration without being castrated. Jimmy Scott, Radu Marian and Javier Medina are examples of this type of high male voice resulting from endocrinological conditions. Michael Maniaci is somewhat different, in that he has no hormonal or other anomalies, but claims that his voice did not "break" in the usual manner, leaving him still able to sing in the soprano register. Other uncastrated male adults sing soprano, generally using some form of falsetto but in a much higher range than most countertenors. Examples are Aris Christofellis, Jörg Waschinski, and Ghio Nannini. However, it is believed the castrati possessed more of a tenorial chest register (the aria "Navigante che non spera" in Leonardo Vinci's opera Il Medo, written for Farinelli, requires notes down to C3, 131 Hz). Similar low-voiced singing can be heard from the jazz vocalist Jimmy Scott, whose range matches approximately that used by female blues singers. High-pitched singer Jordan Smith has demonstrated having more of a tenorial chest register. Actor Chris Colfer has stated in interviews that when his voice began to change at puberty, he sang in a high voice "constantly" in an effort to retain his range. Actor and singer Alex Newell has a soprano range. Voice actor Walter Tetley may or may not have been a castrato; Bill Scott, a co-worker of Tetley's during their later work in television, once half-jokingly quipped that Tetley's mother "had him fixed" to protect the child star's voice-acting career. Tetley never personally divulged the exact reason for his condition, which left him with the voice of a preteen boy for his entire adult life. 
Botanist George Washington Carver was noted for his high voice, believed to be the result of pertussis and croup infections in his childhood that stunted his growth. Notable castrati Loreto Vittori (1604–1670) Baldassare Ferri (1610–1680) Atto Melani (1626–1714) Giovanni Grossi ("Siface") (1653–1697) Pier Francesco Tosi (1654–1732) Francesco Ceccarelli (1752–1814) Gaspare Pacchierotti (1740-1821) Nicolo Grimaldi ("Nicolini") (1673–1732) Gaetano Berenstadt (1687-1734) Carlo Mannelli (1640–1697) Antonio Bernacchi (1685–1756) Francesco Bernardi ("Senesino") (1686–1758) Valentino Urbani ("Valentini") (1690–1722) Francesco Paolo Masullo (1679-1733) Giacinto Fontana ("Farfallino") (1692–1739) Giuseppe Aprile (1731-1813) Giovanni Carestini ("Cusanino") (c. 1704–c. 1760) Carlo Broschi ("Farinelli") (1705–1782) Domenico Annibali ("Domenichino") (1705–1779) Gaetano Majorano ("Caffarelli") (1710–1783) Francesco Soto de Langa (1534-1619) Felice Salimbeni (1712–1752) Giaocchino Conti ("Gizziello") (1714–1761) Giovanni Battista Mancini (1714 –1800) Giovanni Manzuoli (1720–1782) Gaetano Guadagni (1725–1792) Giusto Fernando Tenducci (ca. 1736–1790) Giuseppe Millico ("Il Muscovita") (1737–1802) Angelo Maria Monticelli (1710–1764) Gasparo Pacchierotti (1740–1821) Venanzio Rauzzini (1746–1810) Luigi Marchesi ("Marchesini") (1754–1829) Vincenzo dal Prato (1756–1828) Girolamo Crescentini (1762–1848) Francesco Antonio Pistocchi (1659-1726) Giovanni Battista "Giambattista" Velluti (1781–1861) Domenico Mustafà (1829–1912) Giovanni Cesari (1843–1904) Domenico Salvatori (1855–1909) Alessandro Moreschi (1858–1922) See also Cry to Heaven The Alteration Farinelli (film) Sarrasine Eunuch Comprachicos References Bibliography External links All you would like to know about Castrati Castrados por amor al arte Recordings: Antonio Maria Bononcini's Vorrei pupille belle, sung by Radu Marian 1904 Recording of Alessandro Moreschi singing Bach/Gounod Ave Maria Javier Medina Avila, including an audio sample (Riccardo Broschi: Ombra fedele anch'io) Voice types Opera history Italian opera terminology Obsolete occupations Androgyny
5743
https://en.wikipedia.org/wiki/Counting-out%20game
Counting-out game
A counting-out game or counting-out rhyme is a simple method of 'randomly' selecting a person from a group, often used by children for the purpose of playing another game. It usually requires no materials, and is achieved with spoken words or hand gestures. The historian Henry Carrington Bolton suggested in his 1888 book Counting Out Rhymes of Children that the custom of counting out originated in the "superstitious practices of divination by lots." Many such methods involve one person pointing at each participant in a circle of players while reciting a rhyme. A new person is pointed at as each word is said. The player who is selected at the conclusion of the rhyme is "it" or "out". In an alternate version, the circle of players may each put two feet in and at the conclusion of the rhyme, that player removes one foot and the rhyme starts over with the next person. In this case, the first player that has both feet removed is "it" or "out". In theory a counting rhyme is determined entirely by the starting selection (and would result in a modulo operation), but in practice they are often accepted as random selections because the number of words has not been calculated beforehand, so the result is unknown until someone is selected. A variant of counting-out game, known as the Josephus problem, represents a famous theoretical problem in mathematics and computer science. Examples Several simple games can be played to select one person from a group, either as a straightforward winner, or as someone who is eliminated. Rock, Paper, Scissors, Odd or Even and Blue Shoe require no materials and are played using hand gestures, although with the former it is possible for a player to win or lose through skill rather than luck. Coin flipping and drawing straws are fair methods of randomly determining a player. Fizz Buzz is a spoken word game where if a player slips up and speaks a word out of sequence, they are eliminated. Common rhymes (These rhymes may have many local or regional variants.) Eeny, meeny, miny, moe 10 Little Indians Five Little Ducks Ip dip One, Two, Three, Four, Five Tinker, Tailor (traditionally played in England) Yan Tan Tethera Inky Pinky Ponky One potato, two potato Ink-a-dink En Den Dino Cultural references Marx Brothers A scene in the Marx Brothers movie Duck Soup plays on the fact that counting-out games are not really random. Faced with selecting someone to go on a dangerous mission, the character Chicolini (Chico Marx) chants: Rrringspot, vonza, twoza, zig-zag-zav, popti, vinaga, [tin-lie, tav,] harem, scarem, merchan, tarem, teir, tore... only to stop as he realizes he is about to select himself. He then says, "I did it wrong. Wait, wait, I start here", and repeats the chant—with the same result. After that, he says, "That's no good too. I got it!" and reduces the chant to Rrringspot, buck! And with this version he finally manages to "randomly" select someone else. Seinfeld A version of a counting game "ink-a-dink" features in the Seinfeld episode "The Statue." The relevant scene includes a discussion between the characters of Jerry and George if the person who is "it" is the "winner" or the "loser": JERRY: Alright, let's go. Hey, you know, you owe me one. GEORGE: What? JERRY: The Ink-a-dink.. you were It. GEORGE: Its bad? JERRY: Its very bad. See also Repetitive song References External links Videos of "choosing songs" a.k.a. Counting rhymes Selection Rhymes at the BBC's project h2g2 Counting rhymes and other songs for counting in traditional music from county of Nice, France. 
Nursery rhymes
5749
https://en.wikipedia.org/wiki/Key%20size
Key size
In cryptography, key size or key length refers to the number of bits in a key used by a cryptographic algorithm (such as a cipher). Key length defines the upper-bound on an algorithm's security (i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated by brute-force attacks. Ideally, the lower-bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length). Most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important for asymmetric-key algorithms, because no such algorithm is known to satisfy this property; elliptic curve cryptography comes the closest with an effective security of roughly half its key length. Significance Keys are used to control the operation of a cipher so that only the correct key can convert encrypted text (ciphertext) to plaintext. All commonly-used ciphers are based on publicly known algorithms or are open source and so it is only the difficulty of obtaining the key that determines security of the system, provided that there is no analytic attack (i.e. a "structural weakness" in the algorithms or protocols used), and assuming that the key is not otherwise available (such as via theft, extortion, or compromise of computer systems). The widely accepted notion that the security of the system should depend on the key alone has been explicitly formulated by Auguste Kerckhoffs (in the 1880s) and Claude Shannon (in the 1940s); the statements are known as Kerckhoffs' principle and Shannon's Maxim respectively. A key should, therefore, be large enough that a brute-force attack (possible against any encryption algorithm) is infeasible – i.e. would take too long and/or would take too much memory to execute. Shannon's work on information theory showed that to achieve so-called 'perfect secrecy', the key length must be at least as large as the message and only used once (this algorithm is called the one-time pad). In light of this, and the practical difficulty of managing such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as a requirement for encryption, and instead focuses on computational security, under which the computational requirements of breaking an encrypted text must be infeasible for an attacker. Key size and encryption system Encryption systems are often grouped into families. Common families include symmetric systems (e.g. AES) and asymmetric systems (e.g. RSA and elliptic-curve cryptography). They may be grouped according to the central algorithm used (e.g. elliptic curve cryptography and Feistel ciphers). Because each of these has a different level of cryptographic complexity, it is usual to have different key sizes for the same level of security, depending upon the algorithm used. 
For example, the security available with a 1024-bit key using asymmetric RSA is considered approximately equal in security to an 80-bit key in a symmetric algorithm. The actual degree of security achieved over time varies, as more computational power and more powerful mathematical analytic methods become available. For this reason, cryptologists tend to look at indicators that an algorithm or key length shows signs of potential vulnerability, to move to longer key sizes or more difficult algorithms. For example, a 1039-bit integer was factored with the special number field sieve using 400 computers over 11 months. The factored number was of a special form; the special number field sieve cannot be used on RSA keys. The computation is roughly equivalent to breaking a 700-bit RSA key. However, this might be an advance warning that 1024-bit RSA keys used in secure online commerce should be deprecated, since they may become breakable in the foreseeable future. Cryptography professor Arjen Lenstra observed that "Last time, it took nine years for us to generalize from a special to a nonspecial, hard-to-factor number" and when asked whether 1024-bit RSA keys are dead, said: "The answer to that question is an unqualified yes." The 2015 Logjam attack revealed additional dangers in using Diffie-Hellman key exchange when only one or a few common 1024-bit or smaller prime moduli are in use. This practice, somewhat common at the time, allows large amounts of communications to be compromised at the expense of attacking a small number of primes. Brute-force attack Even if a symmetric cipher is currently unbreakable by exploiting structural weaknesses in its algorithm, it may be possible to run through the entire space of keys in what is known as a brute-force attack. Because longer symmetric keys require exponentially more work to brute-force search, a sufficiently long symmetric key makes this line of attack impractical. With a key of length n bits, there are 2^n possible keys. This number grows very rapidly as n increases. The large number of operations (2^128) required to try all possible 128-bit keys is widely considered out of reach for conventional digital computing techniques for the foreseeable future. However, a quantum computer capable of running Grover's algorithm would be able to search the possible keys more efficiently: a suitably sized quantum computer would reduce the effective security of a 128-bit key to roughly 64 bits, about the equivalent of DES. This is one of the reasons why AES supports a 256-bit key length. Symmetric algorithm key lengths IBM's Lucifer cipher was selected in 1974 as the base for what would become the Data Encryption Standard. Lucifer's key length was reduced from 128 bits to 56 bits, which the NSA and NIST argued was sufficient for non-governmental protection at the time. The NSA has major computing resources and a large budget; some cryptographers including Whitfield Diffie and Martin Hellman complained that this made the cipher so weak that NSA computers would be able to break a DES key in a day through brute force parallel computing. The NSA disputed this, claiming that brute-forcing DES would take them "something like 91 years". However, by the late 90s, it became clear that DES could be cracked in a few days' time-frame with custom-built hardware such as could be purchased by a large corporation or government. 
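The scale of the gap between a 56-bit and a 128-bit keyspace is easy to see with back-of-the-envelope arithmetic. The figures below assume a purely hypothetical rate of 10^12 key trials per second and ignore cost, memory and parallelism, so they indicate orders of magnitude only.

```python
# Rough keyspace arithmetic: how long exhaustive search takes at an assumed trial rate.
# The rate of 1e12 keys per second is an arbitrary assumption for illustration.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def brute_force_years(key_bits: int, keys_per_second: float = 1e12) -> float:
    """Worst-case time, in years, to try every key of the given length at the assumed rate."""
    return (2 ** key_bits) / keys_per_second / SECONDS_PER_YEAR

for bits in (56, 80, 112, 128, 256):
    print(f"{bits:3d}-bit key: about {brute_force_years(bits):.3g} years")
# At this assumed rate: a 56-bit key falls in roughly 20 hours, an 80-bit key in about
# 4e4 years, and a 128-bit key in about 1e19 years.
```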
The book Cracking DES (O'Reilly and Associates) tells of the successful ability in 1998 to break 56-bit DES by a brute-force attack mounted by a cyber civil rights group with limited resources; see EFF DES cracker. Even before that demonstration, 56 bits was considered insufficient length for symmetric algorithm keys for general use. Because of this, DES was replaced in most security applications by Triple DES, which has 112 bits of security when using 168-bit keys (triple key). The Advanced Encryption Standard published in 2001 uses key sizes of 128, 192 or 256 bits. Many observers consider 128 bits sufficient for the foreseeable future for symmetric algorithms of AES's quality until quantum computers become available. However, as of 2015, the U.S. National Security Agency has issued guidance that it plans to switch to quantum computing resistant algorithms and now requires 256-bit AES keys for data classified up to Top Secret. In 2003, the U.S. National Institute for Standards and Technology, NIST proposed phasing out 80-bit keys by 2015. At 2005, 80-bit keys were allowed only until 2010. Since 2015, NIST guidance says that "the use of keys that provide less than 112 bits of security strength for key agreement is now disallowed." NIST approved symmetric encryption algorithms include three-key Triple DES, and AES. Approvals for two-key Triple DES and Skipjack were withdrawn in 2015; the NSA's Skipjack algorithm used in its Fortezza program employs 80-bit keys. Asymmetric algorithm key lengths The effectiveness of public key cryptosystems depends on the intractability (computational and theoretical) of certain mathematical problems such as integer factorization. These problems are time-consuming to solve, but usually faster than trying all possible keys by brute force. Thus, asymmetric keys must be longer for equivalent resistance to attack than symmetric algorithm keys. The most common methods are assumed to be weak against sufficiently powerful quantum computers in the future. Since 2015, NIST recommends a minimum of 2048-bit keys for RSA, an update to the widely-accepted recommendation of a 1024-bit minimum since at least 2002. 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, 3072-bit RSA keys to 128-bit symmetric keys, and 15360-bit RSA keys to 256-bit symmetric keys. In 2003, RSA Security claimed that 1024-bit keys were likely to become crackable some time between 2006 and 2010, while 2048-bit keys are sufficient until 2030. the largest RSA key publicly known to be cracked is RSA-250 with 829 bits. The Finite Field Diffie-Hellman algorithm has roughly the same key strength as RSA for the same key sizes. The work factor for breaking Diffie-Hellman is based on the discrete logarithm problem, which is related to the integer factorization problem on which RSA's strength is based. Thus, a 2048-bit Diffie-Hellman key has about the same strength as a 2048-bit RSA key. Elliptic-curve cryptography (ECC) is an alternative set of asymmetric algorithms that is equivalently secure with shorter keys, requiring only approximately twice the bits as the equivalent symmetric algorithm. A 256-bit ECDH key has approximately the same safety factor as a 128-bit AES key. A message encrypted with an elliptic key algorithm using a 109-bit long key was broken in 2004. 
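The published equivalences quoted above can be gathered into a small lookup table for comparison. The sketch below simply encodes those NIST comparisons, with the ECC column following the "roughly twice the symmetric key size" rule of thumb mentioned above; it is a convenience for comparison, not an authoritative calculator.

```python
# Approximate security equivalences quoted above: symmetric strength in bits versus
# RSA modulus size and elliptic-curve key size.
EQUIVALENT_STRENGTH = {
    # symmetric bits: (RSA modulus bits, ECC key bits)
    80:  (1024,  160),
    112: (2048,  224),
    128: (3072,  256),
    256: (15360, 512),
}

def comparable_key_sizes(symmetric_bits: int):
    """Return (RSA bits, ECC bits) considered roughly comparable to the given symmetric strength."""
    return EQUIVALENT_STRENGTH[symmetric_bits]

rsa_bits, ecc_bits = comparable_key_sizes(128)
print(f"~128-bit security: RSA-{rsa_bits}, {ecc_bits}-bit ECC, 128-bit AES")
```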
The NSA previously recommended 256-bit ECC for protecting classified information up to the SECRET level, and 384-bit for TOP SECRET; in 2015 it announced plans to transition to quantum-resistant algorithms by 2024, and until then recommends 384-bit for all classified information. Effect of quantum computing attacks on key strength The two best known quantum computing attacks are based on Shor's algorithm and Grover's algorithm. Of the two, Shor's offers the greater risk to current security systems. Derivatives of Shor's algorithm are widely conjectured to be effective against all mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve cryptography. According to Professor Gilles Brassard, an expert in quantum computing: "The time needed to factor an RSA integer is the same order as the time needed to use that same integer as modulus for a single RSA encryption. In other words, it takes no more time to break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately on a classical computer." The general consensus is that these public key algorithms are insecure at any key size if sufficiently large quantum computers capable of running Shor's algorithm become available. The implication of this attack is that all data encrypted using current standards-based security systems such as the ubiquitous SSL used to protect e-commerce and Internet banking and SSH used to protect access to sensitive computing systems is at risk. Encrypted data protected using public-key algorithms can be archived and may be broken at a later time, commonly known as retroactive/retrospective decryption or "harvest and decrypt". Mainstream symmetric ciphers (such as AES or Twofish) and collision-resistant hash functions (such as SHA) are widely conjectured to offer greater security against known quantum computing attacks. They are widely thought most vulnerable to Grover's algorithm. Bennett, Bernstein, Brassard, and Vazirani proved in 1996 that a brute-force key search on a quantum computer cannot be faster than roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case. Thus in the presence of large quantum computers an n-bit key can provide at least n/2 bits of security. Quantum brute force is easily defeated by doubling the key length, which has little extra computational cost in ordinary use. This implies that at least a 256-bit symmetric key is required to achieve a 128-bit security rating against a quantum computer. As mentioned above, the NSA announced in 2015 that it plans to transition to quantum-resistant algorithms. The NSA's Commercial National Security Algorithm Suite specifies the algorithms it recommends for use in national security systems. See also Key stretching Notes References General Recommendation for Key Management — Part 1: general, NIST Special Publication 800-57. March, 2007 Blaze, Matt; Diffie, Whitfield; Rivest, Ronald L.; et al. "Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security". January, 1996 Arjen K. Lenstra, Eric R. Verheul: Selecting Cryptographic Key Sizes. J. Cryptology 14(4): 255-293 (2001) — Citeseer link External links www.keylength.com: An online keylength calculator Articles discussing the implications of quantum computing NIST cryptographic toolkit Burt Kaliski: TWIRL and RSA key sizes (May 2003) Key management
5750
https://en.wikipedia.org/wiki/Cognitive%20behavioral%20therapy
Cognitive behavioral therapy
Cognitive behavioral therapy (CBT) is a psycho-social intervention that aims to reduce symptoms of various mental health conditions, primarily depression and anxiety disorders. Cognitive behavioral therapy is one of the most effective means of treatment for substance abuse and co-occurring mental health disorders. CBT focuses on challenging and changing cognitive distortions (such as thoughts, beliefs, and attitudes) and their associated behaviors to improve emotional regulation and develop personal coping strategies that target solving current problems. Though it was originally designed to treat depression, its uses have been expanded to include many issues and the treatment of many mental health conditions, including anxiety, substance use disorders, marital problems, ADHD, and eating disorders. CBT includes a number of cognitive or behavioral psychotherapies that treat defined psychopathologies using evidence-based techniques and strategies. CBT is a common form of talk therapy based on the combination of the basic principles from behavioral and cognitive psychology. It is different from historical approaches to psychotherapy, such as the psychoanalytic approach where the therapist looks for the unconscious meaning behind the behaviors, and then formulates a diagnosis. Instead, CBT is a "problem-focused" and "action-oriented" form of therapy, meaning it is used to treat specific problems related to a diagnosed mental disorder. The therapist's role is to assist the client in finding and practicing effective strategies to address the identified goals and to alleviate symptoms of the disorder. CBT is based on the belief that thought distortions and maladaptive behaviors play a role in the development and maintenance of many psychological disorders and that symptoms and associated distress can be reduced by teaching new information-processing skills and coping mechanisms. When compared to psychoactive medications, review studies have found CBT alone to be as effective for treating less severe forms of depression, anxiety, post-traumatic stress disorder (PTSD), tics, substance use disorders, eating disorders, and borderline personality disorder. Some research suggests that CBT is most effective when combined with medication for treating mental disorders, such as major depressive disorder. CBT is recommended as the first line of treatment for the majority of psychological disorders in children and adolescents, including aggression and conduct disorder. Researchers have found that other bona fide therapeutic interventions were equally effective for treating certain conditions in adults. Along with interpersonal psychotherapy (IPT), CBT is recommended in treatment guidelines as a psychosocial treatment of choice. Medical uses In adults, CBT has been shown to be an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression, eating disorders, chronic low back pain, personality disorders, psychosis, schizophrenia, substance use disorders, and bipolar disorder. It is also effective as part of treatment plans in the adjustment, depression, and anxiety associated with fibromyalgia, and with post-spinal cord injuries. In children or adolescents, CBT is an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression and suicidality, eating disorders and obesity, obsessive–compulsive disorder (OCD), and post-traumatic stress disorder (PTSD), as well as tic disorders, trichotillomania, and other repetitive behavior disorders. 
CBT has also been applied to a variety of childhood disorders, including depressive disorders and various anxiety disorders. CBT has shown to be the most effective intervention for people exposed to adverse childhood experiences in the form of abuse or neglect. Criticism of CBT sometimes focuses on implementations (such as the UK IAPT) which may result initially in low quality therapy being offered by poorly trained practitioners. However, evidence supports the effectiveness of CBT for anxiety and depression. Evidence suggests that the addition of hypnotherapy as an adjunct to CBT improves treatment efficacy for a variety of clinical issues. The United Kingdom's National Institute for Health and Care Excellence (NICE) recommends CBT in the treatment plans for a number of mental health difficulties, including PTSD, OCD, bulimia nervosa, and clinical depression. Depression and anxiety disorders Cognitive behavioral therapy has been shown as an effective treatment for clinical depression. The American Psychiatric Association Practice Guidelines (April 2000) indicated that, among psychotherapeutic approaches, cognitive behavioral therapy and interpersonal psychotherapy had the best-documented efficacy for treatment of major depressive disorder. A 2001 meta-analysis comparing CBT and psychodynamic psychotherapy suggested the approaches were equally effective in the short term for depression. In contrast, a 2013 meta-analyses suggested that CBT, interpersonal therapy, and problem-solving therapy outperformed psychodynamic psychotherapy and behavioral activation in the treatment of depression. According to a 2004 review by INSERM of three methods, cognitive behavioral therapy was either proven or presumed to be an effective therapy on several mental disorders. This included depression, panic disorder, post-traumatic stress, and other anxiety disorders. CBT has been shown to be effective in the treatment of adults with anxiety disorders. Results from a 2018 systematic review found a high strength of evidence that CBT-exposure therapy can reduce PTSD symptoms and lead to the loss of a PTSD diagnosis. CBT has also been shown to be effective for post-traumatic stress disorder in very young children (3 to 6 years of age). A Cochrane review found low quality evidence that CBT may be more effective than other psychotherapies in reducing symptoms of posttraumatic stress disorder in children and adolescents. A systematic review of CBT in depression and anxiety disorders concluded that "CBT delivered in primary care, especially including computer- or Internet-based self-help programs, is potentially more effective than usual care and could be delivered effectively by primary care therapists." Some meta-analyses find CBT more effective than psychodynamic therapy and equal to other therapies in treating anxiety and depression. Theoretical approaches One etiological theory of depression is Aaron T. Beck's cognitive theory of depression. His theory states that depressed people think the way they do because their thinking is biased towards negative interpretations. According to this theory, depressed people acquire a negative schema of the world in childhood and adolescence as an effect of stressful life events, and the negative schema is activated later in life when the person encounters similar situations. Beck also described a negative cognitive triad. The cognitive triad is made up of the depressed individual's negative evaluations of themselves, the world, and the future. 
Beck suggested that these negative evaluations derive from the negative schemata and cognitive biases of the person. According to this theory, depressed people have views such as "I never do a good job", "It is impossible to have a good day", and "things will never get better". A negative schema helps give rise to the cognitive bias, and the cognitive bias helps fuel the negative schema. Beck further proposed that depressed people often have the following cognitive biases: arbitrary inference, selective abstraction, overgeneralization, magnification, and minimization. These cognitive biases are quick to make negative, generalized, and personal inferences of the self, thus fueling the negative schema.

A basic concept in some CBT treatments used in anxiety disorders is in vivo exposure. CBT-exposure therapy refers to the direct confrontation of feared objects, activities, or situations by a patient. For example, a woman with PTSD who fears the location where she was assaulted may be assisted by her therapist in going to that location and directly confronting those fears. Likewise, a person with a social anxiety disorder who fears public speaking may be instructed to directly confront those fears by giving a speech. This approach draws on the "two-factor" model of fear, often credited to O. Hobart Mowrer, in which fears are acquired through conditioning and then maintained through avoidance. Through exposure to the stimulus, this harmful conditioning can be "unlearned" (referred to as extinction and habituation). CBT for children with phobias is normally delivered over multiple sessions, but one-session treatment has been shown to be equally effective and is cheaper.

Specialised forms of CBT

CBT-SP, an adaptation of CBT for suicide prevention (SP), was specifically designed for treating youths who are severely depressed and who have attempted suicide within the past 90 days, and was found to be effective, feasible, and acceptable. Acceptance and commitment therapy (ACT) is a specialist branch of CBT (sometimes referred to as contextual CBT). ACT uses mindfulness and acceptance interventions and has been found to have greater longevity in therapeutic outcomes. In one study of anxiety disorders, CBT and ACT improved similarly across all outcomes from pre- to post-treatment. However, during a 12-month follow-up, ACT proved to be more effective, suggesting that it is a highly viable lasting treatment model for anxiety disorders.

Computerized CBT (CCBT) has been shown to be effective by randomized controlled and other trials in treating depression and anxiety disorders, including in children. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be as effective as face-to-face CBT for adolescent anxiety.

Combined with other treatments

Studies of both animals and humans have provided evidence that glucocorticoids may lead to more successful extinction learning during exposure therapy for anxiety disorders. For instance, glucocorticoids can prevent aversive learning episodes from being retrieved and heighten the reinforcement of memory traces, creating a non-fearful reaction in feared situations. A combination of glucocorticoids and exposure therapy may therefore improve treatment for people with anxiety disorders.

Prevention

For anxiety disorders, use of CBT with people at risk has significantly reduced the number of episodes of generalized anxiety disorder and other anxiety symptoms, and also given significant improvements in explanatory style, hopelessness, and dysfunctional attitudes.
In another study, 3% of the group receiving the CBT intervention developed generalized anxiety disorder by 12 months postintervention, compared with 14% in the control group. Individuals with subthreshold levels of panic disorder significantly benefitted from use of CBT. Use of CBT was found to significantly reduce social anxiety prevalence.

For depressive disorders, a stepped-care intervention (watchful waiting, CBT, and medication if appropriate) achieved a 50% lower incidence rate in a patient group aged 75 or older. Another depression study found a neutral effect compared with personal, social, and health education and with usual school provision, and noted that depression scores could even increase in people who have received CBT, owing to greater self-recognition and acknowledgement of existing symptoms of depression and negative thinking styles. A further study also saw a neutral result. A meta-study of the Coping with Depression course, a cognitive behavioral intervention delivered by a psychoeducational method, saw a 38% reduction in risk of major depression.

Bipolar disorder

Many studies show that CBT, combined with pharmacotherapy, is effective in improving depressive symptoms, mania severity, and psychosocial functioning, with mild to moderate effects, and that it is better than medication alone. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bipolar disorder; the disorders covered included schizophrenia, depression, bipolar disorder, panic disorder, post-traumatic stress, anxiety disorders, bulimia, anorexia, personality disorders, and alcohol dependency.

Psychosis

In long-term psychoses, CBT is used to complement medication and is adapted to meet individual needs. Interventions particularly related to these conditions include exploring reality testing, changing delusions and hallucinations, examining factors which precipitate relapse, and managing relapses. Meta-analyses confirm the effectiveness of metacognitive training (MCT) for the improvement of positive symptoms (e.g., delusions). For people at risk of psychosis, in 2014 the UK National Institute for Health and Care Excellence (NICE) recommended preventive CBT.

Schizophrenia

INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including schizophrenia. A Cochrane review reported CBT had "no effect on long-term risk of relapse" and no additional effect above standard care. A 2015 systematic review investigated the effects of CBT compared with other psychosocial therapies for people with schizophrenia and determined that there is no clear advantage over other, often less expensive, interventions, but acknowledged that better-quality evidence is needed before firm conclusions can be drawn.

Addiction and substance use disorders

Pathological and problem gambling

CBT is also used for pathological and problem gambling. Around the world, 1–3% of people are problem gamblers. Cognitive behavioral therapy develops skills for relapse prevention, and individuals can learn to control their thinking and manage high-risk situations. There is evidence that CBT is efficacious for treating pathological and problem gambling at immediate follow-up; however, its longer-term efficacy is currently unknown.

Smoking cessation

CBT looks at the habit of smoking cigarettes as a learned behavior, which later evolves into a coping strategy to handle daily stressors.
Since smoking is often easily accessible and quickly allows the user to feel good, it can take precedence over other coping strategies, and eventually work its way into everyday life during non-stressful events as well. CBT aims to target the function of the behavior, as it can vary between individuals, and works to put other coping mechanisms in place of smoking. CBT also aims to support individuals with strong cravings, which are a major reported reason for relapse during treatment.

A 2008 controlled study from Stanford University School of Medicine suggested that CBT may be an effective tool to help maintain abstinence. The study tracked 304 randomly assigned adult participants over the course of one year. During this program, some participants were provided medication, CBT, 24-hour phone support, or some combination of the three methods. At 20 weeks, the participants who received CBT had a 45% abstinence rate, versus non-CBT participants, who had a 29% abstinence rate. Overall, the study concluded that emphasizing cognitive and behavioral strategies to support smoking cessation can help individuals build tools for long-term smoking abstinence. Mental health history can affect the outcomes of treatment: individuals with a history of depressive disorders had a lower rate of success when using CBT alone to combat smoking addiction.

A Cochrane review was unable to find evidence of any difference between CBT and hypnosis for smoking cessation. While this may be evidence of no effect, further research may yet uncover an effect of CBT for smoking cessation.

Substance use disorders

Studies have shown CBT to be an effective treatment for substance use disorders. For individuals with substance use disorders, CBT aims to replace maladaptive thought patterns, such as denial, minimizing, and catastrophizing, with healthier narratives. Specific techniques include identifying potential triggers and developing coping mechanisms to manage high-risk situations. Research has shown CBT to be particularly effective when combined with other therapy-based treatments or medication. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including alcohol dependency.

Internet addiction

Research has identified Internet addiction as a new clinical disorder that causes relational, occupational, and social problems. Cognitive behavioral therapy (CBT) has been suggested as the treatment of choice for Internet addiction, and addiction recovery in general has used CBT as part of treatment planning. There is also evidence for the efficacy of CBT from multicenter randomized controlled trials such as STICA (Short-Term Treatment of Internet and Computer Game Addiction).

Eating disorders

Though many forms of treatment can support individuals with eating disorders, CBT has been shown to be a more effective treatment than medication or interpersonal psychotherapy alone. CBT aims to combat major causes of distress such as negative cognitions surrounding body weight, shape, and size. CBT therapists also work with individuals to regulate strong emotions and thoughts that lead to dangerous compensatory behaviors. CBT is the first line of treatment for bulimia nervosa and non-specific eating disorders. While there is evidence to support the efficacy of CBT for bulimia nervosa and binge eating, the evidence is somewhat variable and limited by small study sizes. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bulimia and anorexia nervosa.
With autistic adults

A systematic review has identified emerging evidence for cognitive behavioral interventions aimed at reducing symptoms of depression, anxiety, and obsessive-compulsive disorder in autistic adults without intellectual disability. While the research was focused on adults, cognitive behavioral interventions have also been beneficial to autistic children.

Dementia and mild cognitive impairment

A 2022 Cochrane review found that adults with dementia and mild cognitive impairment (MCI) who experience symptoms of depression may benefit from CBT, whereas other counselling or supportive interventions might not improve symptoms significantly. Across five different psychometric scales, where higher scores indicate more severe depression, adults receiving CBT reported somewhat lower mood scores overall than those receiving usual care for dementia and MCI. In this review, a sub-group analysis found clinically significant benefits only among those diagnosed with dementia, rather than MCI. The likelihood of remission from depression also appeared to be 84% higher following CBT, though the evidence for this was less certain. Anxiety, cognition, and other neuropsychiatric symptoms were not significantly improved following CBT; however, this review did find moderate evidence of improved quality of life and daily living activity scores in those with dementia and MCI.

Post-traumatic stress

Cognitive behavioural therapy interventions may have some benefits for people who have post-traumatic stress related to surviving rape, sexual abuse, or sexual assault.

Other uses

Evidence suggests a possible role for CBT in the treatment of attention deficit hyperactivity disorder (ADHD), hypochondriasis, and bipolar disorder, but more study is needed and results should be interpreted with caution. CBT has been studied as an aid in the treatment of anxiety associated with stuttering. Initial studies have shown CBT to be effective in reducing social anxiety in adults who stutter, but not in reducing stuttering frequency. There is some evidence that CBT is superior in the long term to benzodiazepines and nonbenzodiazepines in the treatment and management of insomnia. Computerized CBT (CCBT) has been shown to be effective by randomized controlled and other trials in treating insomnia. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be as effective as face-to-face CBT for insomnia.

A Cochrane review of interventions aimed at preventing psychological stress in healthcare workers found that CBT was more effective than no intervention but no more effective than alternative stress-reduction interventions. Cochrane reviews have found no convincing evidence that CBT training helps foster care providers manage difficult behaviors in the youths under their care, nor was it helpful in treating people who abuse their intimate partners. CBT has been applied in both clinical and non-clinical environments to treat disorders such as personality disorders and behavioral problems. INSERM's 2004 review found that CBT is an effective therapy for personality disorders.

Individuals with medical conditions

In the case of people with metastatic breast cancer, data are limited, but CBT and other psychosocial interventions might help with psychological outcomes and pain management. A 2015 Cochrane review also found that CBT for symptomatic management of non-specific chest pain is probably effective in the short term.
However, the findings were limited by small trials and the evidence was considered to be of questionable quality. Cochrane reviews have found no evidence that CBT is effective for tinnitus, although there appears to be an effect on management of associated depression and quality of life in this condition. CBT combined with hypnosis and distraction reduces self-reported pain in children. There is limited evidence to support its use in coping with the impact of multiple sclerosis, sleep disturbances related to aging, and dysmenorrhea, but more study is needed and results should be interpreted with caution.

CBT was previously considered moderately effective for treating chronic fatigue syndrome; however, a National Institutes of Health Pathways to Prevention Workshop stated that, with respect to improving treatment options for ME/CFS, the modest benefit from cognitive behavioral therapy should be studied as an adjunct to other methods. The Centers for Disease Control and Prevention's advice on the treatment of ME/CFS makes no reference to CBT, while the National Institute for Health and Care Excellence states that cognitive behavioral therapy (CBT) has sometimes been assumed to be a cure for ME/CFS but that it should only be offered to support people who live with ME/CFS to manage their symptoms, improve their functioning, and reduce the distress associated with having a chronic illness.

Age

CBT is used to help people of all ages, but the therapy should be adjusted based on the age of the patient with whom the therapist is dealing. Older individuals in particular have certain characteristics that need to be acknowledged, and the therapy should be altered to account for these age-related differences. Of the small number of studies examining CBT for the management of depression in older people, there is currently no strong support.

Description

Mainstream cognitive behavioral therapy assumes that changing maladaptive thinking leads to change in behavior and affect, but recent variants emphasize changes in one's relationship to maladaptive thinking rather than changes in thinking itself.

Cognitive distortions

Therapists use CBT techniques to help people challenge their patterns and beliefs and replace errors in thinking, known as cognitive distortions, with "more realistic and effective thoughts, thus decreasing emotional distress and self-defeating behavior". Cognitive distortions can be either a pseudo-discrimination belief or an overgeneralization of something. CBT techniques may also be used to help individuals take a more open, mindful, and aware posture toward cognitive distortions so as to diminish their impact. Mainstream CBT helps individuals replace "maladaptive... coping skills, cognitions, emotions and behaviors with more adaptive ones" by challenging an individual's way of thinking and the way that they react to certain habits or behaviors, but there is still controversy about the degree to which these traditional cognitive elements account for the effects seen with CBT over and above the earlier behavioral elements such as exposure and skills training.

Phases in therapy

CBT can be seen as having six phases:

Assessment or psychological assessment
Reconceptualization
Skills acquisition
Skills consolidation and application training
Generalization and maintenance
Post-treatment assessment follow-up

These steps are based on a system created by Kanfer and Saslow.
After the behaviors that need changing have been identified, whether they are in excess or deficit, and treatment has occurred, the psychologist must identify whether or not the intervention succeeded. For example, "If the goal was to decrease the behavior, then there should be a decrease relative to the baseline. If the critical behavior remains at or above the baseline, then the intervention has failed."

The steps in the assessment phase include:

Identify critical behaviors
Determine whether critical behaviors are excesses or deficits
Evaluate critical behaviors for frequency, duration, or intensity (obtain a baseline)
If excess, attempt to decrease frequency, duration, or intensity of behaviors; if deficits, attempt to increase behaviors

The re-conceptualization phase makes up much of the "cognitive" portion of CBT.

Delivery protocols

There are different protocols for delivering cognitive behavioral therapy, with important similarities among them. Use of the term CBT may refer to different interventions, including "self-instructions (e.g. distraction, imagery, motivational self-talk), relaxation and/or biofeedback, development of adaptive coping strategies (e.g. minimizing negative or self-defeating thoughts), changing maladaptive beliefs about pain, and goal setting". Treatment is sometimes manualized, with brief, direct, and time-limited treatments for individual psychological disorders that are specific and technique-driven. CBT is used in both individual and group settings, and the techniques are often adapted for self-help applications. Some clinicians and researchers are cognitively oriented (e.g. cognitive restructuring), while others are more behaviorally oriented (e.g. in vivo exposure therapy). Interventions such as imaginal exposure therapy combine both approaches.

Related techniques

CBT may be delivered in conjunction with a variety of diverse but related techniques such as exposure therapy, stress inoculation, cognitive processing therapy, cognitive therapy, metacognitive therapy, metacognitive training, relaxation training, dialectical behavior therapy, and acceptance and commitment therapy. Some practitioners promote a form of mindful cognitive therapy which includes a greater emphasis on self-awareness as part of the therapeutic process.

Methods of access

Therapist

A typical CBT programme would consist of face-to-face sessions between patient and therapist, made up of 6–18 sessions of around an hour each with a gap of 1–3 weeks between sessions. This initial programme might be followed by some booster sessions, for instance after one month and three months. CBT has also been found to be effective if patient and therapist type in real time to each other over computer links.

Cognitive-behavioral therapy is most closely allied with the scientist–practitioner model, in which clinical practice and research are informed by a scientific perspective, clear operationalization of the problem, and an emphasis on measurement, including measuring changes in cognition and behavior and the attainment of goals. These are often met through "homework" assignments in which the patient and the therapist work together to craft an assignment to complete before the next session. The completion of these assignments – which can be as simple as a person with depression attending some kind of social event – indicates a dedication to treatment compliance and a desire to change. The therapist can then logically gauge the next step of treatment based on how thoroughly the patient completes the assignment.
Effective cognitive behavioral therapy is dependent on a therapeutic alliance between the healthcare practitioner and the person seeking assistance. Unlike in many other forms of psychotherapy, the patient is very involved in CBT. For example, an anxious patient may be asked to talk to a stranger as a homework assignment, but if that is too difficult, he or she can work out an easier assignment first. The therapist needs to be flexible and willing to listen to the patient rather than acting as an authority figure.

Computerized or Internet-delivered (CCBT)

Computerized cognitive behavioral therapy (CCBT) has been described by NICE as a "generic term for delivering CBT via an interactive computer interface delivered by a personal computer, internet, or interactive voice response system", instead of face-to-face with a human therapist. It is also known as internet-delivered cognitive behavioral therapy or ICBT. CCBT has the potential to improve access to evidence-based therapies and to overcome the prohibitive costs and lack of availability sometimes associated with retaining a human therapist. In this context, it is important not to confuse CBT with 'computer-based training', which nowadays is more commonly referred to as e-Learning.

Although improvements in both research quality and treatment adherence are required before advocating for the global dissemination of CCBT, it has been found in meta-studies to be cost-effective and often cheaper than usual care, including for anxiety and PTSD. Studies have shown that individuals with social anxiety and depression experienced improvement with online CBT-based methods. A study assessing an online version of CBT for people with mild-to-moderate PTSD found that the online approach was as effective as, and cheaper than, the same therapy given face-to-face. A review of current CCBT research in the treatment of OCD in children found this interface to hold great potential for future treatment of OCD in youth and adolescent populations. Additionally, most internet interventions for post-traumatic stress disorder use CCBT. CCBT may also be well suited to treating mood disorders among non-heterosexual populations, who may avoid face-to-face therapy for fear of stigma. However, at present CCBT programs seldom cater to these populations.

In February 2006 NICE recommended that CCBT be made available for use within the NHS across England and Wales for patients presenting with mild-to-moderate depression, rather than immediately opting for antidepressant medication, and CCBT is made available by some health systems. The 2009 NICE guideline recognized that there are likely to be a number of computerized CBT products that are useful to patients, but removed endorsement of any specific product.

Smartphone app-delivered

Another new method of access is the use of mobile or smartphone applications to deliver self-help or guided CBT. Technology companies are developing mobile-based artificial intelligence chatbot applications that deliver CBT as an early intervention to support mental health, build psychological resilience, and promote emotional well-being. Artificial intelligence (AI) text-based conversational applications delivered securely and privately over smartphone devices have the ability to scale globally and offer contextual, always-available support.
Active research is underway, including real-world data studies that measure the effectiveness and engagement of text-based smartphone chatbot apps for delivery of CBT using a conversational interface. Recent market research and analysis of over 500 online mental healthcare solutions identified three key challenges in this market: quality of the content, guidance of the user, and personalisation.

A study compared CBT alone with a mindfulness-based therapy combined with CBT, both delivered via an app. It found that mindfulness-based self-help reduced the severity of depression more than CBT self-help in the short term. Overall, NHS costs for the mindfulness approach were £500 less per person than for CBT.

Reading self-help materials

Enabling patients to read self-help CBT guides has been shown to be effective by some studies. However, one study found a negative effect in patients who tended to ruminate, and another meta-analysis found that the benefit was only significant when the self-help was guided (e.g. by a medical professional).

Group educational course

Patient participation in group courses has been shown to be effective. In a meta-analysis reviewing evidence-based treatment of OCD in children, individual CBT was found to be more efficacious than group CBT.

Types

Brief cognitive behavioral therapy

Brief cognitive behavioral therapy (BCBT) is a form of CBT which has been developed for situations in which there are time constraints on the therapy sessions and specifically for those struggling with suicidal ideation and/or making suicide attempts. BCBT was based on Rudd's proposed "suicidal mode", an elaboration of Beck's modal theory. BCBT takes place over a couple of sessions that can last up to 12 accumulated hours by design. This technique was first implemented and developed with soldiers on active duty by Dr. M. David Rudd to prevent suicide.

Breakdown of treatment

Orientation
Commitment to treatment
Crisis response and safety planning
Means restriction
Survival kit
Reasons for living card
Model of suicidality
Treatment journal
Lessons learned
Skill focus
Skill development worksheets
Coping cards
Demonstration
Practice
Skill refinement
Relapse prevention
Skill generalization
Skill refinement

Cognitive emotional behavioral therapy

Cognitive emotional behavioral therapy (CEBT) is a form of CBT developed initially for individuals with eating disorders but now used with a range of problems including anxiety, depression, obsessive compulsive disorder (OCD), post-traumatic stress disorder (PTSD), and anger problems. It combines aspects of CBT and dialectical behavioral therapy and aims to improve understanding and tolerance of emotions in order to facilitate the therapeutic process. It is frequently used as a "pretreatment" to prepare and better equip individuals for longer-term therapy.

Structured cognitive behavioral training

Structured cognitive-behavioral training (SCBT) is a cognitive-based process with core philosophies that draw heavily from CBT. Like CBT, SCBT asserts that behavior is inextricably related to beliefs, thoughts, and emotions. SCBT also builds on core CBT philosophy by incorporating other well-known modalities in the fields of behavioral health and psychology: most notably, Albert Ellis's rational emotive behavior therapy. SCBT differs from CBT in two distinct ways. First, SCBT is delivered in a highly regimented format. Second, SCBT is a predetermined and finite training process that becomes personalized by the input of the participant.
SCBT is designed to bring a participant to a specific result in a specific period of time. SCBT has been used to challenge addictive behavior, particularly with substances such as tobacco, alcohol and food, and to manage diabetes and subdue stress and anxiety. SCBT has also been used in the field of criminal psychology in the effort to reduce recidivism.

Moral reconation therapy

Moral reconation therapy, a type of CBT used to help felons overcome antisocial personality disorder (ASPD), slightly decreases the risk of further offending. It is generally implemented in a group format, because of the risk that one-on-one therapy for offenders with ASPD would reinforce narcissistic behavioral characteristics, and can be used in correctional or outpatient settings. Groups usually meet weekly for two to six months.

Stress inoculation training

This type of therapy uses a blend of cognitive, behavioral, and certain humanistic training techniques to target the stressors of the client. It is usually used to help clients better cope with their stress or anxiety after stressful events. It is a three-phase process that trains the client to use skills that they already have to better adapt to their current stressors.

The first phase is an interview phase that includes psychological testing, client self-monitoring, and a variety of reading materials. This allows the therapist to individually tailor the training process to the client. Clients learn how to categorize problems as emotion-focused or problem-focused so that they can better treat their negative situations. This phase ultimately prepares the client to eventually confront and reflect upon their current reactions to stressors, before looking at ways to change their reactions and emotions in response to their stressors. The focus is conceptualization.

The second phase emphasizes the aspect of skills acquisition and rehearsal that continues from the earlier phase of conceptualization. The client is taught skills that help them cope with their stressors. These skills are then practised in the space of therapy. These skills involve self-regulation, problem-solving, interpersonal communication skills, etc.

The third and final phase is the application and following through of the skills learned in the training process. This gives the client opportunities to apply their learned skills to a wide range of stressors. Activities include role-playing, imagery, modeling, etc. In the end, the client will have been trained on a preventive basis to inoculate against personal, chronic, and future stressors by breaking down their stressors into problems they will address in long-term, short-term, and intermediate coping goals.

Activity-guided CBT: Group-knitting

A newly developed group therapy model based on CBT integrates knitting into the therapeutic process and has been shown to yield reliable and promising results. The foundation for this novel approach to CBT is the frequently emphasized notion that therapy success depends on the embeddedness of the therapy method in the patients' natural routine. Similar to standard group-based CBT, patients meet once a week in a group of 10 to 15 patients and knit together under the instruction of a trained psychologist or mental health professional. Central to the therapy is the patient's imaginative ability to assign each part of the wool to a certain thought. During the therapy, the wool is carefully knitted, creating a knitted piece of any form.
This therapeutic process teaches the patient to meaningfully align thoughts by (physically) creating a coherent knitted piece. Moreover, since CBT emphasizes behavior as a result of cognition, the knitting illustrates how thoughts (which the patient imaginatively ties to the wool) materialize into the reality surrounding us.

Mindfulness-based cognitive behavioral hypnotherapy

Mindfulness-based cognitive behavioural hypnotherapy (MCBH) is a form of CBT that focuses on awareness in a reflective approach, addressing subconscious tendencies. It is a process containing three phases for achieving desired goals, and it integrates the principles of mindfulness and cognitive-behavioural techniques with the transformative potential of hypnotherapy.

Unified Protocol

The Unified Protocol for Transdiagnostic Treatment of Emotional Disorders (UP) is a form of CBT, developed by David H. Barlow and researchers at Boston University, that can be applied to a range of depressive and anxiety disorders. The rationale is that anxiety and depressive disorders often occur together due to common underlying causes and can efficiently be treated together.

The UP includes a common set of components:

Psycho-education
Cognitive reappraisal
Emotion regulation
Changing behaviour

The UP has been shown to produce results equivalent to those of single-diagnosis protocols for specific disorders, such as OCD and social anxiety disorder. Several studies have shown that the UP is easier to disseminate than single-diagnosis protocols.

Criticisms

Relative effectiveness

The research conducted on CBT has been a topic of sustained controversy. While some researchers write that CBT is more effective than other treatments, many other researchers and practitioners have questioned the validity of such claims. For example, one study determined CBT to be superior to other treatments in treating anxiety and depression. However, researchers responding directly to that study conducted a re-analysis and found no evidence of CBT being superior to other bona fide treatments, and conducted an analysis of thirteen other CBT clinical trials and determined that they failed to provide evidence of CBT superiority. In cases where CBT has been reported to be statistically better than other psychological interventions in terms of primary outcome measures, effect sizes were small and suggested that those differences were clinically meaningless. Moreover, on secondary outcomes (i.e., measures of general functioning) no significant differences have typically been found between CBT and other treatments.

A major criticism has been that clinical studies of CBT efficacy (or any psychotherapy) are not double-blind (i.e., either the subjects or the therapists in psychotherapy studies are not blind to the type of treatment). They may be single-blinded, i.e. the rater may not know the treatment the patient received, but neither the patients nor the therapists are blinded to the type of therapy given (two out of three of the persons involved in the trial, i.e., all of the persons involved in the treatment, are unblinded). The patient is an active participant in correcting negative distorted thoughts, and is therefore quite aware of the treatment group they are in. The importance of double-blinding was shown in a meta-analysis that examined the effectiveness of CBT when placebo control and blinding were factored in.
Pooled data from published trials of CBT in schizophrenia, major depressive disorder (MDD), and bipolar disorder that used controls for non-specific effects of intervention were analyzed. This study concluded that CBT is no better than non-specific control interventions in the treatment of schizophrenia and does not reduce relapse rates; treatment effects are small in treatment studies of MDD, and it is not an effective treatment strategy for prevention of relapse in bipolar disorder. For MDD, the authors note that the pooled effect size was very low.

Declining effectiveness

Additionally, a 2015 meta-analysis revealed that the positive effects of CBT on depression have been declining since 1977. The overall results showed two different declines in effect sizes: 1) an overall decline between 1977 and 2014, and 2) a steeper decline between 1995 and 2014. An additional sub-analysis revealed that CBT studies where therapists in the test group were instructed to adhere to the Beck CBT manual had a steeper decline in effect sizes since 1977 than studies where therapists in the test group were instructed to use CBT without a manual. The authors reported that they were unsure why the effects were declining, but listed inadequate therapist training, failure to adhere to a manual, lack of therapist experience, and waning of patients' hope and faith in its efficacy as potential reasons. The authors noted that the study was limited to depressive disorders only.

High drop-out rates

Furthermore, other researchers write that CBT studies have high drop-out rates compared to other treatments. One meta-analysis found that CBT drop-out rates were 17% higher than those of other therapies. This high drop-out rate is also evident in the treatment of several disorders, particularly the eating disorder anorexia nervosa, which is commonly treated with CBT. Those treated with CBT have a high chance of dropping out of therapy before completion and reverting to their anorexia behaviors. Other researchers analyzing treatments for youths who self-injure found similar drop-out rates in CBT and DBT groups. In this study, the researchers analyzed several clinical trials that measured the efficacy of CBT administered to youths who self-injure. The researchers concluded that none of them were found to be efficacious.

Philosophical concerns with CBT methods

The methods employed in CBT research have not been the only criticisms; some individuals have called its theory and therapy into question. Slife and Williams write that one of the hidden assumptions in CBT is that of determinism, or the absence of free will. They argue that CBT holds that external stimuli from the environment enter the mind, causing different thoughts that cause emotional states: nowhere in CBT theory is agency, or free will, accounted for. Another criticism of CBT theory, especially as applied to major depressive disorder (MDD), is that it confounds the symptoms of the disorder with its causes.

Side effects

CBT is generally regarded as having very few, if any, side effects. Calls have been made by some for more appraisal of possible side effects of CBT. Many randomized trials of psychological interventions like CBT do not monitor potential harms to the patient. In contrast, randomized trials of pharmacological interventions are much more likely to take adverse effects into consideration.
A 2017 meta-analysis revealed that adverse events are not common in children receiving CBT and, furthermore, that CBT is associated with fewer dropouts than either placebo or medications. Nevertheless, CBT therapists do sometimes report 'unwanted events' and side effects in their outpatients, with "negative wellbeing/distress" being the most frequent.

Socio-political concerns

The writer and group analyst Farhad Dalal questions the socio-political assumptions behind the introduction of CBT. According to one reviewer, Dalal connects the rise of CBT with "the parallel rise of neoliberalism, with its focus on marketization, efficiency, quantification and managerialism", and he questions the scientific basis of CBT, suggesting that "the 'science' of psychological treatment is often less a scientific than a political contest". In his book, Dalal also questions the ethical basis of CBT.

History

Early roots

Precursors of certain fundamental aspects of CBT have been identified in various ancient philosophical traditions, particularly Stoicism. Stoic philosophers, particularly Epictetus, believed logic could be used to identify and discard false beliefs that lead to destructive emotions, which has influenced the way modern cognitive-behavioral therapists identify cognitive distortions that contribute to depression and anxiety. Aaron T. Beck's original treatment manual for depression states, "The philosophical origins of cognitive therapy can be traced back to the Stoic philosophers". Another example of Stoic influence on cognitive theorists is the influence of Epictetus on Albert Ellis. A key philosophical figure who influenced the development of CBT was John Stuart Mill, through his creation of associationism, a predecessor of classical conditioning and behavioral theory. The modern roots of CBT can be traced to the development of behavior therapy in the early 20th century, the development of cognitive therapy in the 1960s, and the subsequent merging of the two.

Behavioral therapy

Groundbreaking work in behaviorism began with John B. Watson and Rosalie Rayner's studies of conditioning in 1920. Behaviorally-centered therapeutic approaches appeared as early as 1924 with Mary Cover Jones' work dedicated to the unlearning of fears in children. These were the antecedents of the development of Joseph Wolpe's behavioral therapy in the 1950s. It was the work of Wolpe and Watson, which was based on Ivan Pavlov's work on learning and conditioning, that influenced Hans Eysenck and Arnold Lazarus to develop new behavioral therapy techniques based on classical conditioning.

During the 1950s and 1960s, behavioral therapy became widely used by researchers in the United States, the United Kingdom, and South Africa, who were inspired by the behaviorist learning theory of Ivan Pavlov, John B. Watson, and Clark L. Hull. In Britain, Joseph Wolpe, who applied the findings of animal experiments to his method of systematic desensitization, applied behavioral research to the treatment of neurotic disorders. Wolpe's therapeutic efforts were precursors to today's fear reduction techniques. British psychologist Hans Eysenck presented behavior therapy as a constructive alternative. At the same time as Eysenck's work, B. F. Skinner and his associates were beginning to have an impact with their work on operant conditioning. Skinner's work was referred to as radical behaviorism and avoided anything related to cognition.
However, Julian Rotter in 1954 and Albert Bandura in 1969 contributed to behavior therapy with their work on social learning theory by demonstrating the effects of cognition on learning and behavior modification. The work of Claire Weekes in dealing with anxiety disorders in the 1960s is also seen as a prototype of behavior therapy. The emphasis on behavioral factors has been described as the "first wave" of CBT.

Cognitive therapy

One of the first therapists to address cognition in psychotherapy was Alfred Adler, notably with his idea of basic mistakes and how they contributed to the creation of unhealthy behavioral and life goals. Abraham Low believed that someone's thoughts were best changed by changing their actions. Adler and Low influenced the work of Albert Ellis, who developed the earliest cognitive-based psychotherapy, called rational emotive behavioral therapy, or REBT. The first version of REBT was announced to the public in 1956.

In the late 1950s, Aaron T. Beck was conducting free association sessions in his psychoanalytic practice. During these sessions, Beck noticed that thoughts were not as unconscious as Freud had previously theorized, and that certain types of thinking may be the culprits of emotional distress. It was from this hypothesis that Beck developed cognitive therapy, and called these thoughts "automatic thoughts". He first published his new methodology in 1967, and his first treatment manual in 1979. Beck has been referred to as "the father of cognitive behavioral therapy". It was these two therapies, rational emotive therapy and cognitive therapy, that started the "second wave" of CBT, which emphasised cognitive factors.

Merger of behavioral and cognitive therapies

Although the early behavioral approaches were successful in many so-called neurotic disorders, they had little success in treating depression. Behaviorism was also losing popularity due to the cognitive revolution. The therapeutic approaches of Albert Ellis and Aaron T. Beck gained popularity among behavior therapists, despite the earlier behaviorist rejection of mentalistic concepts like thoughts and cognitions. Both of these systems included behavioral elements and interventions, with the primary focus being on problems in the present. In initial studies, cognitive therapy was often contrasted with behavioral treatments to see which was most effective. During the 1980s and 1990s, cognitive and behavioral techniques were merged into cognitive behavioral therapy. Pivotal to this merging was the successful development of treatments for panic disorder by David M. Clark in the UK and David H. Barlow in the US.

Over time, cognitive behavior therapy came to be known not only as a therapy, but as an umbrella term for all cognitive-based psychotherapies. These therapies include, but are not limited to, REBT, cognitive therapy, acceptance and commitment therapy, dialectical behavior therapy, metacognitive therapy, metacognitive training, reality therapy/choice theory, cognitive processing therapy, EMDR, and multimodal therapy. This blending of theoretical and technical foundations from both behavior and cognitive therapies constituted the "third wave" of CBT. The most prominent therapies of this third wave are dialectical behavior therapy and acceptance and commitment therapy. Despite the increasing popularity of third-wave treatment approaches, reviews of studies reveal there may be no difference in effectiveness compared with non-third-wave CBT for the treatment of depression.
Society and culture

The UK's National Health Service announced in 2008 that more therapists would be trained to provide CBT at government expense as part of an initiative called Improving Access to Psychological Therapies (IAPT). NICE said that CBT would become the mainstay of treatment for non-severe depression, with medication used only in cases where CBT had failed. Therapists complained that the data do not fully support the attention and funding CBT receives. Psychotherapist and professor Andrew Samuels stated that this constitutes "a coup, a power play by a community that has suddenly found itself on the brink of corralling an enormous amount of money ... Everyone has been seduced by CBT's apparent cheapness." The UK Council for Psychotherapy issued a press release in 2012 saying that the IAPT's policies were undermining traditional psychotherapy, and criticized proposals that would limit some approved therapies to CBT, claiming that they restricted patients to "a watered down version of cognitive behavioural therapy (CBT), often delivered by very lightly trained staff".

External links

Association for Behavioral and Cognitive Therapies (ABCT)
British Association for Behavioural and Cognitive Psychotherapies
National Association of Cognitive-Behavioral Therapists
International Association of Cognitive Psychotherapy
Information on Research-based CBT Treatments
Associated Counsellors & Psychologists CBT Therapists
5751
https://en.wikipedia.org/wiki/Chinese%20language
Chinese language
Chinese is a group of languages spoken natively by the ethnic Han Chinese majority and many minority ethnic groups in Greater China. Approximately 1.3 billion people, or around 16% of the global population, speak a variety of Chinese as their first language. Chinese languages form the Sinitic branch of the Sino-Tibetan language family. The spoken varieties of Chinese are usually considered by native speakers to be dialects of a single language. However, their lack of mutual intelligibility means they are sometimes considered to be separate languages in a family. Investigation of the historical relationships among the varieties of Chinese is ongoing.

Currently, most classifications posit 7 to 13 main regional groups based on phonetic developments from Middle Chinese, of which the most spoken by far is Mandarin with 66%, or around 800 million speakers, followed by Min (75 million, e.g. Southern Min), Wu (74 million, e.g. Shanghainese), and Yue (68 million, e.g. Cantonese). These branches are unintelligible to each other, and many of their subgroups are unintelligible with the other varieties within the same branch (e.g. Southern Min). There are, however, transitional areas where varieties from different branches share enough features for some limited intelligibility, including New Xiang with Southwestern Mandarin, Xuanzhou Wu Chinese with Lower Yangtze Mandarin, Jin with Central Plains Mandarin, and certain divergent dialects of Hakka with Gan (though these are unintelligible with mainstream Hakka). All varieties of Chinese are tonal to at least some degree, and are largely analytic.

The earliest Chinese written records are oracle bone inscriptions dating from the Shang dynasty. The phonetic categories of Old Chinese can be reconstructed from the rhymes of ancient poetry. During the Northern and Southern dynasties period, Middle Chinese went through several sound changes and split into several varieties following prolonged geographic and political separation. The Qieyun, a rime dictionary, recorded a compromise between the pronunciations of different regions. The royal courts of the Ming and early Qing dynasties operated using a koiné language known as Guanhua, based on the Nanjing dialect of Mandarin. Standard Chinese is an official language of both the People's Republic of China and the Republic of China on Taiwan, one of the four official languages of Singapore, and one of the six official languages of the United Nations. Standard Chinese is based on the Beijing dialect of Mandarin, and was first officially adopted in the 1930s.

The language is written primarily using a logography of Chinese characters, largely shared by readers who may otherwise speak mutually unintelligible varieties. Since the 1950s, the use of simplified characters has been promoted by the government of the People's Republic of China, with Singapore officially adopting them in 1976. Traditional characters are used in Taiwan, Hong Kong, Macau, and among Chinese-speaking communities overseas. Traditional characters are also in use in mainland China, although they are not the first choice in daily use. For example, practising Chinese calligraphy requires knowledge of traditional Chinese characters.

Classification

Linguists classify all varieties of Chinese as part of the Sino-Tibetan language family, together with Burmese, Tibetan, and many other languages spoken in the Himalayas and the Southeast Asian Massif.
Although the relationship was first proposed in the early 19th century and is now broadly accepted, reconstruction of Sino-Tibetan is much less developed than that of families such as Indo-European or Austroasiatic. Difficulties have included the great diversity of the languages, the lack of inflection in many of them, and the effects of language contact. In addition, many of the smaller languages are spoken in mountainous areas that are difficult to reach and are often also sensitive border zones. Without a secure reconstruction of proto-Sino-Tibetan, the higher-level structure of the family remains unclear. A top-level branching into Chinese and Tibeto-Burman languages is often assumed, but has not been convincingly demonstrated.

History

The first written records appeared over 3,000 years ago during the Shang dynasty. As the language evolved over this period, the various local varieties became mutually unintelligible. In reaction, central governments have repeatedly sought to promulgate a unified standard.

Old and Middle Chinese

The earliest examples of Old Chinese are divinatory inscriptions on oracle bones dating from the late Shang. The next attested stage came from inscriptions on bronze artifacts of the Western Zhou period (1046–771 BCE), the Classic of Poetry, and portions of the Book of Documents and I Ching. Scholars have attempted to reconstruct the phonology of Old Chinese by comparing later varieties of Chinese with the rhyming practice of the Classic of Poetry and the phonetic elements found in the majority of Chinese characters. Although many of the finer details remain unclear, most scholars agree that Old Chinese differs from Middle Chinese in lacking retroflex and palatal obstruents but having initial consonant clusters of some sort, and in having voiceless nasals and liquids. Most recent reconstructions also describe an atonal language with consonant clusters at the end of the syllable, developing into tone distinctions in Middle Chinese. Several derivational affixes have also been identified, but the language lacked inflection and indicated grammatical relationships using word order and grammatical particles.

Middle Chinese was the language used during the Northern and Southern dynasties and the Sui, Tang, and Song dynasties (6th–10th centuries CE). It can be divided into an early period, reflected by the Qieyun rime book (601 CE), and a late period in the 10th century, reflected by rhyme tables constructed by ancient Chinese philologists as a guide to the Qieyun system. These works define phonological categories, but with little hint of what sounds they represent. Linguists have identified these sounds by comparing the categories with pronunciations in modern varieties of Chinese, borrowed Chinese words in Japanese, Vietnamese, and Korean, and transcription evidence. The resulting system is very complex, with a large number of consonants and vowels, but they are probably not all distinguished in any single dialect. Most linguists now believe it represents a diasystem encompassing 6th-century northern and southern standards for reading the classics.

Classical and vernacular written forms

The complex relationship between spoken and written Chinese is an example of diglossia: as spoken, Chinese varieties have evolved at different rates, while the written language used throughout China changed comparatively little, crystallizing into a prestige form known as Classical or Literary Chinese.
Literature written distinctly in the Classical form began to emerge during the Spring and Autumn period. Its use in writing remained nearly universal until the late 19th century, culminating with the widespread adoption of written vernacular Chinese with the May Fourth Movement beginning in 1919.

Rise of northern dialects

After the fall of the Northern Song dynasty and the subsequent reign of the Jurchen Jin and Mongol Yuan dynasties in northern China, a common speech (now called Old Mandarin) developed based on the dialects of the North China Plain around the capital. The 1324 Zhongyuan Yinyun was a dictionary that codified the rhyming conventions of the new sanqu verse form in this language. Together with the slightly later Menggu Ziyun, this dictionary describes a language with many of the features characteristic of modern Mandarin dialects.

Up to the early 20th century, most Chinese people only spoke their local variety. Thus, as a practical measure, officials of the Ming and Qing dynasties carried out the administration of the empire using a common language based on Mandarin varieties, known as Guanhua. For most of this period, this language was a koiné based on dialects spoken in the Nanjing area, though not identical to any single dialect. By the middle of the 19th century, the Beijing dialect had become dominant and was essential for any business with the imperial court.

In the 1930s, a standard national language, Guoyu, was adopted. After much dispute between proponents of northern and southern dialects and an abortive attempt at an artificial pronunciation, the National Language Unification Commission finally settled on the Beijing dialect in 1932. The People's Republic, founded in 1949, retained this standard but renamed it Putonghua. The national language is now used in education, the media, and formal situations in both mainland China and Taiwan. Because of their colonial and linguistic history, the language used in education, the media, formal speech, and everyday life in Hong Kong and Macau is the local Cantonese, although the standard language, Mandarin, has become very influential and is being taught in schools.

Influence

Historically, the Chinese language has spread to its neighbors through a variety of means. Northern Vietnam was incorporated into the Han empire in 111 BCE, marking the beginning of a period of Chinese control that ran almost continuously for a millennium. The Four Commanderies were established in northern Korea in the first century BCE, but disintegrated in the following centuries. Chinese Buddhism spread over East Asia between the 2nd and 5th centuries CE, and with it the study of scriptures and literature in Literary Chinese. Later, strong central governments modeled on Chinese institutions were established in Korea, Japan, and Vietnam, with Literary Chinese serving as the language of administration and scholarship, a position it would retain until the late 19th century in Korea and (to a lesser extent) Japan, and the early 20th century in Vietnam. Scholars from different lands could communicate, albeit only in writing, using Literary Chinese. Although they used Chinese solely for written communication, each country had its own tradition of reading texts aloud, the so-called Sino-Xenic pronunciations. Chinese words with these pronunciations were also extensively imported into the Korean, Japanese, and Vietnamese languages, and today comprise over half of their vocabularies.
This massive influx led to changes in the phonological structure of the languages, contributing to the development of moraic structure in Japanese and the disruption of vowel harmony in Korean. Borrowed Chinese morphemes have been used extensively in all these languages to coin compound words for new concepts, in a similar way to the use of Latin and Ancient Greek roots in European languages. Many new compounds, or new meanings for old phrases, were created in the late 19th and early 20th centuries to name Western concepts and artifacts. These coinages, written in shared Chinese characters, have then been borrowed freely between languages. They have even been accepted into Chinese, a language usually resistant to loanwords, because their foreign origin was hidden by their written form. Often different compounds for the same concept were in circulation for some time before a winner emerged, and sometimes the final choice differed between countries. The proportion of vocabulary of Chinese origin thus tends to be greater in technical, abstract, or formal language. For example, in Japan, Sino-Japanese words account for about 35% of the words in entertainment magazines, over half the words in newspapers, and 60% of the words in science magazines. Vietnam, Korea, and Japan each developed writing systems for their own languages, initially based on Chinese characters, but later replaced with the hangul alphabet for Korean and supplemented with kana syllabaries for Japanese, while Vietnamese continued to be written with the complex chữ Nôm script. However, these were limited to popular literature until the late 19th century. Today Japanese is written with a composite script using both Chinese characters called kanji, and kana. Korean is written exclusively with hangul in North Korea (although knowledge of the supplementary Chinese characters (called hanja) is still required), and hanja are increasingly rarely used in South Korea. As a result of former French colonization, Vietnamese switched to a Latin-based alphabet. Examples of loan words in English include 'tea' from Hokkien , 'dim sum' from Cantonese , and 'kumquat' from Cantonese . Varieties Jerry Norman estimated that there are hundreds of mutually unintelligible varieties of Chinese. These varieties form a dialect continuum, in which differences in speech generally become more pronounced as distances increase, though the rate of change varies immensely. Generally, mountainous South China exhibits more linguistic diversity than the North China Plain. In parts of South China, a major city's dialect may only be marginally intelligible to close neighbours. For instance, Wuzhou is about upstream from Guangzhou, but the Yue variety spoken there is more like that of Guangzhou than is that of Taishan, southwest of Guangzhou and separated from it by several rivers. In parts of Fujian the speech of neighbouring counties or even villages may be mutually unintelligible. Until the late 20th century, Chinese emigrants to Southeast Asia and North America came from southeast coastal areas, where Min, Hakka, and Yue dialects are spoken. The vast majority of Chinese immigrants to North America up to the mid-20th century spoke the Taishan dialect, from a small coastal area southwest of Guangzhou. 
Grouping Local varieties of Chinese are conventionally classified into seven dialect groups, largely based on the different evolution of Middle Chinese voiced initials:
Mandarin, including Standard Chinese, the Beijing dialect, Sichuanese, and also the Dungan language spoken in Central Asia
Wu, including Shanghainese, Suzhounese, and Wenzhounese
Gan
Xiang
Min, including Fuzhounese, Hainanese, Hokkien and Teochew
Hakka
Yue, including Cantonese and Taishanese
The classification of Li Rong, which is used in the Language Atlas of China (1987), distinguishes three further groups:
Jin, previously included in Mandarin.
Huizhou, previously included in Wu.
Pinghua, previously included in Yue.
Some varieties remain unclassified, including the Danzhou dialect on Hainan, Waxianghua spoken in western Hunan, and Shaozhou Tuhua spoken in northern Guangdong. Standard Chinese Standard Chinese is the official standard language of China (where it is called ) and Taiwan, and one of the four official languages of Singapore (where it is called either or ). Standard Chinese is based on the Beijing dialect of Mandarin. The governments of both China and Taiwan intend for speakers of all Chinese speech varieties to use it as a common language of communication. Therefore, it is used in government agencies, in the media, and as a language of instruction in schools. In China, diglossia has been a common feature. For example, in addition to Standard Chinese, a resident of Shanghai may speak Shanghainese; if they grew up elsewhere, then they are also likely to be fluent in the particular dialect of that local area. A native of Guangzhou may speak both Cantonese and Standard Chinese. In addition to Mandarin, most Taiwanese people also speak Taiwanese Hokkien (commonly ), Hakka, or an Austronesian language. A Taiwanese speaker may commonly mix pronunciations, phrases, and words from Mandarin and other languages of Taiwan, and this mixture is considered normal in daily or informal speech. Because of their traditional cultural ties to Guangdong and their histories of outside colonization, Cantonese is used as a standard language in Hong Kong and Macau. Nomenclature The designation of the various Chinese branches remains controversial. Some linguists and most ordinary Chinese people consider all the spoken varieties as one single language, as speakers share a common national identity and a common written form. Others instead argue that it is inappropriate to refer to major branches of Chinese such as Mandarin, Wu and so on as "dialects" because the mutual unintelligibility between them is too great. However, calling the major Chinese branches "languages" would also be wrong under the same criterion, since a branch such as Wu itself contains many mutually unintelligible varieties, and could not be properly called a single language. Others point out that linguists often set aside mutual intelligibility when the varieties in question share intelligibility with a central prestige variety (such as Standard Mandarin), since mutual intelligibility does not always align with language identity. The Chinese government's official Chinese designation for the major branches of Chinese is , whereas the more closely related varieties within these are called . Because of the difficulties involved in determining the difference between language and dialect, other terms have been proposed. These include topolect, lect, vernacular, regional, and variety. 
Phonology Syllables in the Chinese languages have some unique characteristics. They are tightly related to the morphology and also to the characters of the writing system, and phonologically they are structured according to fixed rules. The structure of each syllable consists of a nucleus that has a vowel (which can be a monophthong, diphthong, or even a triphthong in certain varieties), preceded by an onset (a single consonant, or consonant + glide; a zero onset is also possible), and followed (optionally) by a coda consonant; a syllable also carries a tone. There are some instances where a vowel is not used as a nucleus. An example of this is in Cantonese, where the nasal sonorant consonants /m/ and /ŋ/ can stand alone as their own syllables. In Mandarin much more than in other spoken varieties, most syllables tend to be open syllables, meaning they have no coda (assuming that a final glide is not analyzed as a coda), but syllables that do have codas are restricted to nasals /m/, /n/, /ŋ/, the retroflex approximant /ɻ/, and voiceless stops /p/, /t/, /k/, or /ʔ/. Some varieties allow most of these codas, whereas others, such as Standard Chinese, are limited to only /n/, /ŋ/, and /ɻ/. The number of sounds in the different spoken dialects varies, but in general there has been a tendency to a reduction in sounds from Middle Chinese. The Mandarin dialects in particular have experienced a dramatic decrease in sounds and so have far more polysyllabic words than most other spoken varieties. The total number of syllables in some varieties is therefore only about a thousand, including tonal variation, which is only about an eighth as many as English. Tones All varieties of spoken Chinese use tones to distinguish words. A few dialects of north China may have as few as three tones, while some dialects in south China have up to six or twelve tones, depending on how one counts. One exception to this is Shanghainese, which has reduced the set of tones to a two-toned pitch accent system, much like modern Japanese. A very common example used to illustrate the use of tones in Chinese is the application of the four tones of Standard Chinese, along with the neutral tone, to the syllable ma, giving the five words mā ('mother'), má ('hemp'), mǎ ('horse'), mà ('to scold'), and the neutral-tone question particle ma. In contrast, Standard Cantonese has six tones. Historically, finals that end in a stop consonant were considered to be "checked tones" and thus counted separately for a total of nine tones. However, they are considered to be duplicates in modern linguistics and are no longer counted as such. Grammar Chinese is often described as a 'monosyllabic' language. However, this is only partially correct. It is largely accurate when describing Old and Middle Chinese; in Classical Chinese, around 90% of words consist of a single character that corresponds one-to-one with a morpheme, the smallest unit of meaning in a language. In modern varieties, it usually remains the case that morphemes are monosyllabic—in contrast, English has many multi-syllable morphemes, both bound and free, such as 'seven', 'elephant', 'para-' and '-able'. Some of the more conservative modern varieties, usually found in the south, have largely monosyllabic words, especially in basic vocabulary. However, most nouns, adjectives and verbs in modern Mandarin are disyllabic. A significant cause of this is phonological attrition: sound changes over time have steadily reduced the number of possible syllables in the language's inventory. 
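As a rough illustration of the syllable template just described, here is a small Python sketch. It is purely illustrative: the class and field names are arbitrary, and the coda set is the restricted Standard Chinese inventory mentioned above rather than a complete phonology.

    from dataclasses import dataclass
    from typing import Optional

    # Permitted codas in Standard Chinese, per the restriction described above
    # (the nasals n and ng and the rhotic ending); None marks an open syllable.
    ALLOWED_CODAS = {None, "n", "ng", "r"}

    @dataclass
    class Syllable:
        onset: Optional[str]  # initial consonant (or consonant + glide); None = zero onset
        nucleus: str          # vowel: monophthong, diphthong, or triphthong
        coda: Optional[str]   # optional final consonant
        tone: int             # 1-4 for the four tones, 0 for the neutral tone

        def is_valid(self) -> bool:
            # Minimal well-formedness check: a vocalic nucleus, a tone in range,
            # and a coda drawn from the small permitted set.
            return bool(self.nucleus) and self.coda in ALLOWED_CODAS and 0 <= self.tone <= 4

    # The four tones plus the neutral tone applied to the syllable "ma", as above.
    examples = [Syllable("m", "a", None, t) for t in (1, 2, 3, 4, 0)]
    print(all(s.is_valid() for s in examples))    # True
    print(Syllable("m", "a", "k", 1).is_valid())  # False: a /k/ coda is not permitted

Because the onset, coda, and tone inventories are all small, the possible combinations number only in the low thousands, which leads directly to the syllable counts discussed next.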
In modern Mandarin, there are only around 1,200 possible syllables, including the tonal distinctions, compared with about 5,000 in Vietnamese (still a largely monosyllabic language), and over 8,000 in English. Most modern varieties have the tendency to form new words through polysyllabic compounds. In some cases, monosyllabic words have become disyllabic formed from different characters without the use of compounding, as in from ; this is especially common in Jin varieties. This phonological collapse has led to a corresponding increase in the number of homophones. As an example, the small Langenscheidt Pocket Chinese Dictionary lists six words that are commonly pronounced as in Standard Chinese: In modern spoken Mandarin, however, tremendous ambiguity would result if all of these words could be used as-is. The 20th century Yuen Ren Chao poem Lion-Eating Poet in the Stone Den exploits this, consisting of 92 characters all pronounced . As such, most of these words have been replaced in speech, if not in writing, with less ambiguous disyllabic compounds. Only the first one, , normally appears in monosyllabic form in spoken Mandarin; the rest are normally used in the polysyllabic forms of respectively. In each, the homophone was disambiguated by addition of another morpheme, typically either a near-synonym or some sort of generic word (e.g. 'head', 'thing'), the purpose of which is to indicate which of the possible meanings of the other, homophonic syllable is specifically meant. However, when one of the above words forms part of a compound, the disambiguating syllable is generally dropped and the resulting word is still disyllabic. For example, alone, and not , appears in compounds as meaning 'stone' such as , , , , and . Although many single-syllable morphemes () can stand alone as individual words, they more often than not form multi-syllable compounds known as , which more closely resembles the traditional Western notion of a word. A Chinese can consist of more than one character–morpheme, usually two, but there can be three or more. Examples of Chinese words of more than two syllables include , , and . All varieties of modern Chinese are analytic languages: they depend on syntax (word order and sentence structure), rather than inflectional morphology (changes in the form of a word), to indicate a word's function within a sentence. In other words, Chinese has very few grammatical inflections—it possesses no tenses, no voices, no grammatical number, and only a few articles. They make heavy use of grammatical particles to indicate aspect and mood. In Mandarin, this involves the use of particles such as , , and . Chinese has a subject–verb–object word order, and like many other languages of East Asia, makes frequent use of the topic–comment construction to form sentences. Chinese also has an extensive system of classifiers and measure words, another trait shared with neighboring languages such as Japanese and Korean. Other notable grammatical features common to all the spoken varieties of Chinese include the use of serial verb construction, pronoun dropping and the related subject dropping. Although the grammars of the spoken varieties share many traits, they do possess differences. Vocabulary The entire Chinese character corpus since antiquity comprises well over 50,000 characters, of which only roughly 10,000 are in use and only about 3,000 are frequently used in Chinese media and newspapers. However, Chinese characters should not be confused with Chinese words. 
Because most Chinese words are made up of two or more characters, there are many more Chinese words than characters. A more accurate equivalent for a Chinese character is the morpheme, as characters represent the smallest grammatical units with individual meanings in the Chinese language. Estimates of the total number of Chinese words and lexicalized phrases vary greatly. The Hanyu Da Zidian, a compendium of Chinese characters, includes 54,678 head entries for characters, including oracle bone versions. The Zhonghua Zihai (1994) contains 85,568 head entries for character definitions, and is the largest reference work based purely on character and its literary variants. The CC-CEDICT project (2010) contains 97,404 contemporary entries including idioms, technology terms and names of political figures, businesses and products. The 2009 version of the Webster's Digital Chinese Dictionary (WDCD), based on CC-CEDICT, contains over 84,000 entries. The most comprehensive pure linguistic Chinese-language dictionary, the 12-volume Hanyu Da Cidian, records more than 23,000 head Chinese characters and gives over 370,000 definitions. The 1999 revised Cihai, a multi-volume encyclopedic dictionary reference work, gives 122,836 vocabulary entry definitions under 19,485 Chinese characters, including proper names, phrases and common zoological, geographical, sociological, scientific and technical terms. The 2016 edition of Xiandai Hanyu Cidian, an authoritative one-volume dictionary on modern standard Chinese language as used in mainland China, has 13,000 head characters and defines 70,000 words. Loanwords Like many other languages, Chinese has absorbed a sizable number of loanwords from other cultures. Most Chinese words are formed out of native Chinese morphemes, including words describing imported objects and ideas. However, direct phonetic borrowing of foreign words has gone on since ancient times. Some early Indo-European loanwords in Chinese have been proposed, notably , , and perhaps also , , , and . Ancient words borrowed from along the Silk Road during the Old Chinese period include , , and . Some words were borrowed from Buddhist scriptures, including and . Other words came from nomadic peoples to the north, such as . Words borrowed from the peoples along the Silk Road, such as , generally have Persian etymologies. Buddhist terminology is generally derived from Sanskrit or Pāli, the liturgical languages of northern India. Words borrowed from the nomadic tribes of the Gobi, Mongolian or northeast regions generally have Altaic etymologies, such as , the Chinese lute, or , but from exactly which source is not always clear. Modern borrowings Modern neologisms are primarily translated into Chinese in one of three ways: free translation (calques), phonetic translation (by sound), or a combination of the two. Today, it is much more common to use existing Chinese morphemes to coin new words to represent imported concepts, such as technical expressions and international scientific vocabulary, wherein the Latin and Greek components usually converted one-for-one into the corresponding Chinese characters. The word 'telephone' was initially loaned phonetically as (Shanghainese )—this word was widely used in Shanghai during the 1920s, but the later , built out of native Chinese morphemes, became prevalent. Other examples include Occasionally, compromises between the transliteration and translation approaches become accepted, such as from + . 
Sometimes translations are designed so that they sound like the original while incorporating Chinese morphemes (phono-semantic matching), such as for the video game character 'Mario'. This is often done for commercial purposes, for example for 'Pentium' and for 'Subway'. Foreign words, mainly proper nouns, continue to enter the Chinese language by transcription according to their pronunciations. This is done by employing Chinese characters with similar pronunciations. For example, 'Israel' becomes , and 'Paris' becomes . A rather small number of direct transliterations have survived as common words, including , , , , , and . The bulk of these words were originally coined in Shanghai during the early 20th century, and later loaned from there into Mandarin, hence their Mandarin pronunciations occasionally being quite divergent from the English. For example, in Shanghainese and sound more like their English counterparts. Cantonese differs from Mandarin with some transliterations, such as and . Western foreign words representing Western concepts have influenced Chinese since the 20th century through transcription. From French, and were borrowed for 'ballet' and 'champagne' respectively; was borrowed from Italian ; 'coffee'. The influence of English is particularly pronounced: from the early 20th century, many English words were borrowed into Shanghainese, such as and the aforementioned . Later, American soft power gave rise to , , and . Contemporary colloquial Cantonese has distinct loanwords from English, such as , , , and . With the rising popularity of the Internet, there is a current vogue in China for coining English transliterations, for example, , , and . In Taiwan, some of these transliterations are different, such as and for 'blog'. Another result of English influence on Chinese is the appearance in of so-called spelled with letters from the English alphabet. These have appeared in colloquial usage, as well as in magazines and newspapers, and on websites and television: Since the 20th century, another source of words has been kanji: Japan re-molded European concepts and inventions into , and many of these words have been re-loaned into modern Chinese. Other terms were coined by the Japanese by giving new senses to existing Chinese terms or by referring to expressions used in classical Chinese literature. For example, ; in Japanese, which in the original Chinese meant 'the workings of the state', narrowed to 'economy' in Japanese; this narrowed definition was then reimported into Chinese. As a result, these terms are virtually indistinguishable from native Chinese words: indeed, there is some dispute over some of these terms as to whether the Japanese or Chinese coined them first. As a result of this loaning, Chinese, Korean, Japanese, and Vietnamese share a corpus of linguistic terms describing modern terminology, paralleling the similar corpus of terms built from Greco-Latin and shared among European languages. Writing system The Chinese orthography centers on Chinese characters, which are written within imaginary square blocks, traditionally arranged in vertical columns, read from top to bottom down a column, and right to left across columns, despite alternative arrangement with rows of characters from left to right within a row and from top to bottom across rows (like English and other Western writing systems) having become more popular since the 20th century. Chinese characters denote morphemes independent of phonetic variation in different languages. 
Thus the character is pronounced as in Standard Chinese, in Cantonese and in Hokkien, a form of Min. Most written Chinese documents in modern times, especially the more formal ones, are created using the grammar and syntax of the Standard Chinese variants, regardless of the dialectal background of the author or the targeted audience. This replaced the old written-language standard of Literary Chinese used before the 20th century. However, vocabularies from different Chinese-speaking areas have diverged, and the divergence can be observed in written Chinese. Meanwhile, colloquial forms of various Chinese language variants have also been written down by their users, especially in less formal settings. The most prominent example of this is Written Cantonese, which has become quite popular in tabloids, instant messaging applications, and on the internet amongst Hong Kongers and Cantonese speakers elsewhere. Because some Chinese variants have diverged and developed a number of unique morphemes that are not found in Standard Mandarin (despite sharing all other common morphemes), unique characters rarely used in Standard Chinese have also been created, or inherited from the archaic literary standard, to represent these unique morphemes. For example, characters like and are actively used in Cantonese and Hakka, while being archaic or unused in standard written Chinese. The Chinese language had no uniform phonetic transcription system available to most of its speakers until the mid-20th century, although enunciation patterns were recorded in early rime books and dictionaries. Early Indian translators, working in Sanskrit and Pali, were the first to attempt to describe the sounds and enunciation patterns of Chinese in a foreign language. After the 15th century, the efforts of Jesuits and Western court missionaries resulted in a number of Latin-character transcription and writing systems, based on various varieties of Chinese. Some of these Latin-character-based systems are still being used to write various Chinese variants in the modern era. In Hunan, women in certain areas write their local Chinese language variant in Nüshu, a syllabary derived from Chinese characters. The Dungan language, considered by many a dialect of Mandarin, is nowadays written in Cyrillic, and was previously written in the Arabic script. The Dungan people are primarily Muslim and live mainly in Kazakhstan, Kyrgyzstan, and Russia; many Hui people, living mainly in China, also speak the language. Chinese characters Each Chinese character represents a monosyllabic Chinese word or morpheme. In 100 CE, the famed Han dynasty scholar Xu Shen classified characters into six categories: pictographs, simple ideographs, compound ideographs, phonetic loans, phonetic compounds and derivative characters. Only 4% were categorized as pictographs, including many of the simplest characters, such as , , , and . Between 80% and 90% were classified as phonetic compounds such as , combining a phonetic component with a semantic component of the radical , a reduced form of . Almost all characters created since have been made using this format. The 18th-century Kangxi Dictionary classified characters under a now-common set of 214 radicals. Modern characters are styled after the regular script. Various other written styles are also used in Chinese calligraphy, including seal script, cursive script and clerical script. Calligraphy artists can write in Traditional and Simplified characters, but they tend to use Traditional characters for traditional art. 
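Returning to Xu Shen's six-category classification described above, the following Python sketch pairs each category with a one-line gloss and with the rough share reported in the text where it gives one. It is illustrative only: the glosses are conventional summaries supplied here, and the dictionary name is arbitrary.

    # Illustrative restatement of the six traditional categories; the percentage
    # figures repeat those given in the passage above, the glosses are summaries.
    SIX_CATEGORIES = {
        "pictographs":           ("stylized drawings of objects",                    "about 4% of characters"),
        "simple ideographs":     ("abstract symbols standing for ideas",             None),
        "compound ideographs":   ("combinations of meaningful components",           None),
        "phonetic loans":        ("existing characters borrowed for their sound",    None),
        "phonetic compounds":    ("semantic component (radical) plus phonetic part", "80-90% of characters"),
        "derivative characters": ("characters derived from or related to others",    None),
    }

    for name, (gloss, share) in SIX_CATEGORIES.items():
        print(f"{name}: {gloss}" + (f" ({share})" if share else ""))

As noted above, almost all characters coined after Xu Shen's time belong to the phonetic-compound category.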
There are currently two systems for Chinese characters. Traditional characters, used in Hong Kong, Taiwan, Macau, and many overseas Chinese-speaking communities, largely take their form from received character forms dating back to the late Han dynasty and standardized during the Ming. Simplified characters, introduced by the PRC in 1954 to promote mass literacy, simplify most complex traditional glyphs to fewer strokes, many to common cursive shorthand variants. Singapore, which has a large Chinese community, was the second nation to officially adopt simplified characters, although they have also become the de facto standard for younger ethnic Chinese in Malaysia. The Internet provides practice reading each of these systems, and most Chinese readers are capable of, if not necessarily comfortable with, reading the alternative system through experience and guesswork. A well-educated Chinese reader today recognizes approximately 4,000 to 6,000 characters; approximately 3,000 characters are required to read a mainland newspaper. The PRC defines literacy amongst workers as a knowledge of 2,000 characters, though this would be only functional literacy. Schoolchildren typically learn around 2,000 characters, whereas scholars may memorize up to 10,000. A large unabridged dictionary like the Kangxi Dictionary contains over 40,000 characters, including obscure, variant, rare, and archaic characters; fewer than a quarter of these characters are now commonly used. Romanization Romanization is the process of transcribing a language into the Latin script. There are many systems of romanization for the Chinese varieties, due to the lack of a native phonetic transcription until modern times. Chinese is first known to have been written in Latin characters by Western Christian missionaries in the 16th century. Today the most common romanization standard for Standard Mandarin is Hanyu Pinyin, introduced in 1956 by the PRC and later adopted by Singapore and Taiwan. Pinyin is now almost universally employed for teaching standard spoken Chinese in schools and universities across the Americas, Australia, and Europe. Chinese parents also use Pinyin to teach their children the sounds and tones of new words. In school books that teach Chinese, the pinyin romanization is often shown below a picture of the thing the word represents, with the Chinese character alongside. The second-most common romanization system, Wade–Giles, was invented by Thomas Wade in 1859 and modified by Herbert Giles in 1892. As this system approximates the phonology of Mandarin Chinese using English consonants and vowels (it is largely an anglicization), it may be particularly helpful for beginning learners of Chinese from an English-speaking background. Wade–Giles was found in academic use in the United States, particularly before the 1980s, and until 2009 was widely used in Taiwan. When used within European texts, the tone transcriptions in both pinyin and Wade–Giles are often left out for simplicity; Wade–Giles's extensive use of apostrophes is also usually omitted. Thus, most Western readers will be much more familiar with a simplified spelling such as Beijing than with the fully marked pinyin or Wade–Giles transcriptions. This simplification presents as homophones syllables which are in fact distinct, and therefore exaggerates the number of homophones almost by a factor of four. 
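The way dropping tone marks inflates apparent homophony can be illustrated with a short Python sketch. It is illustrative only: the handful of toned syllables below are assumed examples, not entries from any particular dictionary.

    import unicodedata
    from collections import defaultdict

    def strip_tones(pinyin: str) -> str:
        """Remove tone diacritics from a pinyin syllable, as Western texts often do."""
        decomposed = unicodedata.normalize("NFD", pinyin)
        return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

    # A few toned syllables (assumed examples); each is distinct in pinyin proper.
    toned = ["mā", "má", "mǎ", "mà", "shī", "shí", "shǐ", "shì"]

    collapsed = defaultdict(list)
    for syllable in toned:
        collapsed[strip_tones(syllable)].append(syllable)

    for toneless, group in collapsed.items():
        print(f"{toneless}: {group}")   # e.g. ma: ['mā', 'má', 'mǎ', 'mà']

Each toneless spelling absorbs up to four genuinely distinct syllables (five with the neutral tone), which is the roughly fourfold exaggeration described above.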
Other systems include Gwoyeu Romatzyh, the French EFEO, the Yale system (invented for use by US troops during World War II), as well as distinct systems for the phonetic requirements of Cantonese, Min Nan, Hakka, and other varieties. Other phonetic transcriptions Chinese varieties have been phonetically transcribed into many other writing systems over the centuries. The 'Phags-pa script, for example, has been very helpful in reconstructing the pronunciations of premodern forms of Chinese. Zhuyin (colloquially bopomofo), a semi-syllabary, is still widely used in Taiwan's elementary schools to aid standard pronunciation. Although zhuyin characters are reminiscent of the katakana script, there is no source to substantiate the claim that katakana was the basis for the zhuyin system. A comparison table of zhuyin to pinyin exists in the zhuyin article, and syllables based on pinyin and zhuyin can also be compared by consulting the Pinyin table and Zhuyin table articles. There are also at least two systems of cyrillization for Chinese. The most widespread is the Palladius system. As a foreign language With the growing importance and influence of China's economy globally, Standard Chinese instruction has been gaining popularity in schools throughout East Asia, Southeast Asia, and the Western world. Besides Mandarin, Cantonese is the only other Chinese language that is widely taught as a foreign language, largely due to the economic and cultural influence of Hong Kong and its widespread usage among significant Overseas Chinese communities. In 1991 there were 2,000 foreign learners taking China's official Chinese Proficiency Test (HSK), comparable to the English Cambridge Certificate, but by 2005 the number of candidates had risen sharply to 117,660, and in 2010 to 750,000. See also
Chinese characters
Chinese character orders
Chinese exclamative particles
Chinese honorifics
Chinese numerals
Chinese punctuation
Classical Chinese grammar
Chengyu
Han unification
Languages of China
North American Conference on Chinese Linguistics
Protection of the Varieties of Chinese
External links
Classical Chinese texts – Chinese Text Project
Marjorie Chan's ChinaLinks at the Ohio State University, with hundreds of links to Chinese-related web pages
5759
https://en.wikipedia.org/wiki/Complex%20analysis
Complex analysis
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering. As a differentiable function of a complex variable is equal to its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable (that is, holomorphic functions). History Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Gösta Mittag-Leffler, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory, which examines conformal invariants in quantum field theory. Complex functions A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane. For any complex function, the values from the domain and their images in the range may be separated into real and imaginary parts: $z = x + iy$ and $f(z) = f(x + iy) = u(x, y) + i\,v(x, y)$, where $x$, $y$, $u(x, y)$, and $v(x, y)$ are all real-valued. In other words, a complex function may be decomposed into $u(x, y)$ and $v(x, y)$, i.e., into two real-valued functions ($u$, $v$) of two real variables ($x$, $y$). Similarly, any complex-valued function $f$ on an arbitrary set $X$ (since $\mathbb{C}$ is isomorphic to $\mathbb{R}^2$, and therefore, in that sense, identifiable with it) can be considered as an ordered pair of two real-valued functions $(\operatorname{Re} f, \operatorname{Im} f)$, or, alternatively, as a vector-valued function from $X$ into $\mathbb{R}^2$. Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector-valued functions of two real variables. Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domains (if the domains are connected). The latter property is the basis of the principle of analytic continuation, which allows the extension of every real analytic function in a unique way to a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions. 
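As a concrete instance of the decomposition into real and imaginary parts described above (a standard textbook example rather than one taken from this article), consider the squaring function:
\[
f(z) = z^2 = (x + iy)^2 = (x^2 - y^2) + i\,(2xy), \qquad u(x, y) = x^2 - y^2, \quad v(x, y) = 2xy .
\]
Here $u$ and $v$ are the two real-valued functions of the two real variables $x$ and $y$ into which $f$ decomposes.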
Holomorphic functions Complex functions that are differentiable at every point of an open subset $\Omega$ of the complex plane are said to be holomorphic on $\Omega$. In the context of complex analysis, the derivative of $f$ at $z_0$ is defined to be $f'(z_0) = \lim_{z \to z_0} \frac{f(z) - f(z_0)}{z - z_0}$. Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach $z_0$ in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on $\Omega$ can be approximated arbitrarily well by polynomials in some neighborhood of every point in $\Omega$. This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic; see non-analytic smooth functions. Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions $\mathbb{C} \to \mathbb{C}$, are holomorphic over the entire complex plane, making them entire functions, while rational functions $p/q$, where $p$ and $q$ are polynomials, are holomorphic on domains that exclude points where $q$ is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, the functions $z \mapsto \bar z$ and $z \mapsto \operatorname{Re} z$ are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below). An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If $f$, defined by $f(z) = f(x + iy) = u(x, y) + i\,v(x, y)$, where $x$, $y$, $u(x, y)$ and $v(x, y)$ are real, is holomorphic on a region $\Omega$, then for all $z_0 \in \Omega$, $\frac{\partial f}{\partial \bar z}(z_0) = 0$. In terms of the real and imaginary parts of the function, $u$ and $v$, this is equivalent to the pair of equations $u_x = v_y$ and $u_y = -v_x$, where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions without additional continuity conditions (see Looman–Menchoff theorem). Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: $\mathbb{C}$, $\mathbb{C} \setminus \{z_0\}$, or $\{z_0\}$ for some $z_0 \in \mathbb{C}$. In other words, if two distinct complex numbers $z$ and $w$ are not in the range of an entire function $f$, then $f$ is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset. Conformal map Major results One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). 
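As a worked check of the Cauchy–Riemann equations stated above (again a standard textbook example, not drawn from this article), take $f(z) = z^2$, for which $u = x^2 - y^2$ and $v = 2xy$:
\[
u_x = 2x = v_y, \qquad u_y = -2y = -v_x ,
\]
so the equations hold everywhere and $f$ is entire. By contrast, for $g(z) = \bar z = x - iy$ one has $u = x$ and $v = -y$, so $u_x = 1$ while $v_y = -1$; the first Cauchy–Riemann equation fails at every point, which is why the conjugation map is not holomorphic anywhere.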
Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues, among others, is applicable (see methods of contour integration). A "pole" (a type of isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent of Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of better-understood functions, such as polynomials. A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. It can be used to provide a natural and short proof of the fundamental theorem of algebra, which states that the field of complex numbers is algebraically closed. If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains, to almost the entire complex plane. Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane, but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface. All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension, in which analytic properties such as power series expansion carry over, whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not. The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions. A major application of certain complex spaces is in quantum mechanics as wave functions. See also
Complex geometry
Hypercomplex analysis
Vector calculus
List of complex analysis topics
Monodromy theorem
Real analysis
Riemann–Roch theorem
Runge's theorem
External links
Wolfram Research's MathWorld Complex Analysis Page
5760
https://en.wikipedia.org/wiki/History%20of%20China
History of China
The history of China spans several millennia across a wide geographical area. Each region now considered part of the Chinese world has experienced periods of unity, fracture, prosperity, and hardship. Classical Chinese civilization first emerged in the Yellow River valley, which along with the Yangtze and Pearl River basins now constitute the geographic core of China and have for the majority of its imperial history. China maintains a rich diversity of ethnic and linguistic people groups. The traditional lens for viewing Chinese history is the dynastic cycle: imperial dynasties rise and fall, and are ascribed certain achievements. Throughout pervades the narrative that Chinese civilization can be traced as an unbroken thread many thousands of years into the past, making it one of the cradles of civilization. At various times, states representative of a dominant Chinese culture have directly controlled areas stretching as far west as the Tian Shan, the Tarim Basin, and the Himalayas, as far north as the Sayan Mountains, and as far south as the delta of the Red River. The Neolithic period saw increasingly non-parochial societies begin to emerge along the Yellow and Yangtze rivers. For example, the Erlitou culture existed throughout the central plains of China during the era traditionally attributed to the Xia dynasty (  2070–1600 BCE) by Chinese historiographers in foundational works like the Records of the Grand Historian—a text written around 1700 years after the date assigned to the fall of the Xia. The earliest surviving written Chinese dates to roughly 1250BCE, consisting of divinations inscribed on oracle bones. Chinese bronze inscriptions, ritual texts dedicated to deceased ancestors, form another large corpus of early Chinese writing. The earliest strata of received literature in Chinese include poetry, divination, and records of official speeches. China is believed to be one of a very few loci of independent invention of writing, and the earliest surviving records display an already-mature written language. The culture remembered by the earliest extant literature is that of the decentralized Zhou dynasty ( 1046–256 BCE), during which bureaucratization increased, chariot-based warfare was superseded by infantry, the earliest classical texts took shape, the Mandate of Heaven was introduced, and philosophies such as Confucianism, Taoism, and Legalism were first articulated. China was first united under a single imperial state under Qin Shi Huang in . Orthography, weights, measures, and law were all standardized. Shortly thereafter, China entered its classical age with the Han dynasty ( – CE 220), marking a critical period, a term for the Chinese language is still "Han language", and the dominant Chinese ethnic group is known as Han Chinese. The Chinese empire reached some of its farthest geographical extents during this period. Confucianism was officially adopted and its core texts were edited into their received forms. Wealthy landholding families independent of the ancient aristocracy began to wield significant power. Han technology can be considered on par with that of the contemporaneous Roman Empire: mass production of paper aided the proliferation of written documents, and the written dialect of this period was imitated for millennia afterwards. China also became known internationally for its sericulture. The Han imperial order finally collapsed in the late 2nd century, and China would not see a comparable level of political stability for another 400 years. 
During this period, Buddhism began to have a significant impact on Chinese culture. Calligraphy, art, historiography, and storytelling flourished. Wealthy families gained even more power compared to the central government. The Yangtze River valley was incorporated into the dominant cultural sphere. The realm saw a period of renewed unity when the Sui dynasty reunified it in the late 6th century, soon giving way to the long-lived Tang dynasty (618–907), regarded as another Chinese golden age. The Tang dynasty saw flourishing developments in science, technology, poetry, economics, and geographical influence. China's first officially recognized empress, Wu Zetian, reigned during the dynasty's first century. Buddhism was officially adopted by Tang emperors, while orthodox Confucianism was articulated by scholars. Thus, "Tang people" is the other common demonym for the Han ethnic group. After the Tang's decline led to the Five Dynasties and Ten Kingdoms period, the Song dynasty (960–1279) saw the maximal extent of imperial Chinese cosmopolitan development. Mechanical printing was introduced, and many of the earliest surviving witnesses of certain texts are wood-block prints from this era. Song scientific advancement led the world, and the imperial examination system gave ideological structure to the political bureaucracy. Confucianism and Taoism were fully knit together in Neo-Confucianism. Over the course of the 13th century, the Mongol Empire conquered all of China, culminating in the Mongol Yuan dynasty founded in 1271. Contact with Europe began to increase during this time, symbolized by the reports of Marco Polo. Achievements under the subsequent Ming dynasty (1368–1644) include global exploration, fine porcelain, and many extant public works projects, such as those restoring the Grand Canal and Great Wall. Three of the four Classic Chinese Novels were written during the Ming. The Qing dynasty that succeeded the Ming placed ethnic Manchu officials in important offices, while also becoming sinicized. The Qianlong Emperor (r. 1735–1796) commissioned a complete encyclopaedia of the imperial libraries, totaling nearly a billion words. Imperial China reached its greatest territorial extent during the Qing, but China came into increasing conflict with European powers, culminating in the Opium Wars and subsequent unequal treaties. The 1911 Xinhai Revolution, led by Sun Yat-sen and others, created the modern Republic of China. From 1927, a costly civil war between the Nationalist government under Chiang Kai-shek and the Chinese Communist Party raged, and the industrialized Empire of Japan also invaded the divided country. After the Communist victory, Mao Zedong proclaimed the People's Republic of China (PRC) in 1949, with the Nationalists retreating to Taiwan. Today, both governments still claim to be the legitimate government of China. The PRC has slowly accumulated the majority of diplomatic recognition over the 20th century, and Taiwan's status remains a perennial issue. From 1966 to 1976, the Cultural Revolution helped consolidate Mao's power toward the end of his life. After his death, the government began economic reforms under Deng Xiaoping. As a result, China became the world's fastest-growing major economy. China had been the most populous nation in the world for decades, until it was surpassed by India in 2023. 
Prehistory Paleolithic (1.7 – 12 ) The archaic human species of Homo erectus arrived in Eurasia sometime between 1.3 and 1.8 million years ago (Ma) and numerous remains of its subspecies have been found in what is now China. The oldest of these is the southwestern Yuanmou Man (; in Yunnan), dated to Ma, which lived in a mixed bushland-forest environment alongside chalicotheres, deer, the elephant Stegodon, rhinos, cattle, pigs, and the giant short-faced hyaena. The better-known Peking Man (; near Beijing) of 700,000–400,000 BP, was discovered in the Zhoukoudian cave alongside scrapers, choppers, and, dated slightly later, points, burins, and awls. Other Homo erectus fossils have been found widely throughout the region, including the northwestern Lantian Man (; in Shaanxi) as well minor specimens in northeastern Liaoning and southern Guangdong. The dates of most Paleolithic sites were long debated but have been more reliably established based on modern magnetostratigraphy: Majuangou at 1.66–1.55 Ma, Lanpo at 1.6 Ma, Xiaochangliang at 1.36 Ma, Xiantai at 1.36 Ma, Banshan at 1.32 Ma, Feiliang at 1.2 Ma and Donggutuo at 1.1 Ma. Evidence of fire use by Homo erectus occurred between 1–1.8 million years BP at the archaeological site of Xihoudu, Shanxi Province. The circumstances surrounding the evolution of Homo erectus to contemporary H. sapiens is debated; the three main theories include the dominant "Out of Africa" theory (OOA), the regional continuity model and the admixture variant of the OOA hypothesis. Regardless, the earliest modern humans have been dated to China at 120,000–80,000 BP based on fossilized teeth discovered in Fuyan Cave of Dao County, Hunan. The larger animals which lived alongside these humans include the extinct Ailuropoda baconi panda, the Crocuta ultima hyena, the Stegodon, and the giant tapir. Evidence of Middle Palaeolithic Levallois technology has been found in the lithic assemblage of Guanyindong Cave site in southwest China, dated to approximately 170,000–80,000 years ago. Neolithic The Neolithic age in China is considered to have begun about 10,000 years ago. Because the Neolithic is conventionally defined by the presence of agriculture, it follows that the Neolithic began at different times in the various regions of what is now China. Agriculture in China developed gradually, with initial domestication of a few grains and animals gradually expanding with the addition of many others over subsequent millennia. The earliest evidence of cultivated rice, found by the Yangtze River, was carbon-dated to 8,000 years ago. Early evidence for millet agriculture in the Yellow River valley was radiocarbon-dated to about 7000 BC. The Jiahu site is one of the best preserved early agricultural villages (7000 to 5800 BC). At Damaidi in Ningxia, 3,172 cliff carvings dating to 6000–5000 BC have been discovered, "featuring 8,453 individual characters such as the sun, moon, stars, gods and scenes of hunting or grazing", according to researcher Li Xiangshi. Written symbols, sometimes called proto-writing, were found at the site of Jiahu, which is dated around 7000 BC, Damaidi around 6000 BC, Dadiwan from 5800 BC to 5400 BC, and Banpo dating from the 5th millennium BC. With agriculture came increased population, the ability to store and redistribute crops, and the potential to support specialist craftsmen and administrators, which may have existed at late Neolithic sites like Taosi and the Liangzhu culture in the Yangtze delta. 
The cultures of the middle and late Neolithic in the central Yellow River valley are known respectively as the Yangshao culture (5000 BC to 3000 BC) and the Longshan culture (3000 BC to 2000 BC). Pigs and dogs were the earliest domesticated animals in the region, and after about 3000 BC domesticated cattle and sheep arrived from Western Asia. Wheat also arrived at this time but remained a minor crop. Fruit such as peaches, cherries and oranges, as well as chickens and various vegetables, were also domesticated in Neolithic China. Bronze Age Bronze artifacts have been found at the Majiayao culture site (between 3100 and 2700 BC). The Bronze Age is also represented at the Lower Xiajiadian culture (2200–1600 BC) site in northeast China. Sanxingdui located in what is now Sichuan is believed to be the site of a major ancient city, of a previously unknown Bronze Age culture (between 2000 and 1200 BC). The site was first discovered in 1929 and then re-discovered in 1986. Chinese archaeologists have identified the Sanxingdui culture to be part of the ancient kingdom of Shu, linking the artifacts found at the site to its early legendary kings. Ferrous metallurgy begins to appear in the late 6th century in the Yangzi Valley. A bronze hatchet with a blade of meteoric iron excavated near the city of Gaocheng in Shijiazhuang (now Hebei) has been dated to the 14th century BC. An Iron Age culture of the Tibetan Plateau has tentatively been associated with the Zhang Zhung culture described in early Tibetan writings. Ancient China Chinese historians in later periods were accustomed to the notion of one dynasty succeeding another, but the political situation in early China was much more complicated. Hence, as some scholars of China suggest, the Xia and the Shang can refer to political entities that existed concurrently, just as the early Zhou existed at the same time as the Shang. This bears similarities to how China, both contemporaneously and later, has been divided into states that were not one region, legally or culturally. The earliest period once considered historical was the legendary era of the sage-emperors Yao, Shun, and Yu. Traditionally, the abdication system was prominent in this period, with Yao yielding his throne to Shun, who abdicated to Yu, who founded the Xia dynasty. Xia dynasty (2070–1600 BC) The Xia dynasty of China (from ) is the earliest of the Three Dynasties described in ancient historical records such as Sima Qian's Records of the Grand Historian and Bamboo Annals. The dynasty is generally considered mythical by Western scholars, but in China it is usually associated with the early Bronze Age site at Erlitou that was excavated in Henan in 1959. Since no writing was excavated at Erlitou or any other contemporaneous site, there is not enough evidence to prove whether the Xia dynasty ever existed. Some archaeologists claim that the Erlitou site was the capital of the Xia Dynasty. In any case, the site of Erlitou had a level of political organization that would not be incompatible with the legends of Xia recorded in later texts. More importantly, the Erlitou site has the earliest evidence for an elite who conducted rituals using cast bronze vessels, which would later be adopted by the Shang and Zhou. Shang dynasty (1600–1046 BC) Archaeological evidence, such as oracle bones and bronzes, as well as transmitted texts attest to the historical existence of the Shang dynasty (–1046 BC). Findings from the earlier Shang period come from excavations at Erligang, in present-day Zhengzhou. 
Findings from the later Shang or Yin (殷) period, were found in profusion at Anyang, in modern-day Henan, the last of the Shang's capitals. The findings at Anyang include the earliest written record of the Chinese so far discovered: inscriptions of divination records in ancient Chinese writing on the bones or shells of animals—the "oracle bones", dating from around 1250 to 1046 BC. A series of at least twenty-nine kings reigned over the Shang dynasty. Throughout their reigns, according to the Shiji, the capital city was moved six times. The final and most important move was to Yin during the reign of Pan Geng, around 1300 BC. The term Yin dynasty has been synonymous with the Shang dynasty in history, although it has lately been used to refer specifically to the latter half of the Shang dynasty. Although written records found at Anyang confirm the existence of the Shang dynasty, Western scholars are often hesitant to associate settlements that are contemporaneous with the Anyang settlement with the Shang dynasty. For example, archaeological findings at Sanxingdui suggest a technologically advanced civilization culturally unlike Anyang. The evidence is inconclusive in proving how far the Shang realm extended from Anyang. The leading hypothesis is that Anyang, ruled by the same Shang in the official history, coexisted and traded with numerous other culturally diverse settlements in the area that is now referred to as China proper. Zhou dynasty (1046–256 BC) The Zhou dynasty (1046 BC to about 256 BC) is the longest-lasting dynasty in Chinese history, though its power declined steadily over the almost eight centuries of its existence. In the late 2nd millennium BC, the Zhou dynasty arose in the Wei River valley of modern western Shaanxi Province, where they were appointed Western Protectors by the Shang. A coalition led by the ruler of the Zhou, King Wu, defeated the Shang at the Battle of Muye. They took over most of the central and lower Yellow River valley and enfeoffed their relatives and allies in semi-independent states across the region. Several of these states eventually became more powerful than the Zhou kings. The kings of Zhou invoked the concept of the Mandate of Heaven to legitimize their rule, a concept that was influential for almost every succeeding dynasty. Like Shangdi, Heaven (tian) ruled over all the other gods, and it decided who would rule China. It was believed that a ruler lost the Mandate of Heaven when natural disasters occurred in great number, and when, more realistically, the sovereign had apparently lost his concern for the people. In response, the royal house would be overthrown, and a new house would rule, having been granted the Mandate of Heaven. The Zhou established two capitals Zongzhou (near modern Xi'an) and Chengzhou (Luoyang), with the king's court moving between them regularly. The Zhou alliance gradually expanded eastward into Shandong, southeastward into the Huai River valley, and southward into the Yangtze River valley. Spring and Autumn period (722–476 BC) In 771 BC, King You and his forces were defeated in the Battle of Mount Li by rebel states and Quanrong barbarians. The rebel aristocrats established a new ruler, King Ping, in Luoyang, beginning the second major phase of the Zhou dynasty: the Eastern Zhou period, which is divided into the Spring and Autumn and Warring States periods. The former period is named after the famous Spring and Autumn Annals. The decline of central power left a vacuum. 
The Zhou empire now consisted of hundreds of tiny states, some of them only as large as a walled town and surrounding land. These states began to fight against one another and vie for hegemony. The more powerful states tended to conquer and incorporate the weaker ones, so the number of states declined over time. By the 6th century BC most small states had disappeared by being annexed and just a few large and powerful principalities remained. Some southern states, such as Chu and Wu, claimed independence from the Zhou, who undertook wars against some of them (Wu and Yue). Many new cities were established in this period and society gradually became more urbanized and commercialized. Many famous individuals such as Laozi, Confucius and Sun Tzu lived during this chaotic period. Conflict in this period occurred both between and within states. Warfare between states forced the surviving states to develop better administrations to mobilize more soldiers and resources. Within states there was constant jockeying between elite families. For example, the three most powerful families in the Jin state—Zhao, Wei and Han—eventually overthrew the ruling family and partitioned the state between them. The Hundred Schools of Thought of classical Chinese philosophy began blossoming during this period and the subsequent Warring States period. Such influential intellectual movements as Confucianism, Taoism, Legalism and Mohism were founded, partly in response to the changing political world. The first two schools of thought would have an enormous influence on Chinese culture.

Warring States period (476–221 BC)
After further political consolidations, seven prominent states remained during the 5th century BC. The period in which these states battled one another is known as the Warring States period. Though the Zhou king nominally remained as such until 256 BC, he was largely a figurehead who held little real power. Numerous developments were made during this period in the areas of culture and mathematics—including the Zuo Zhuan within the Spring and Autumn Annals (a literary work summarizing the preceding Spring and Autumn period), and the bundle of 21 bamboo slips from the Tsinghua collection, dated to 305 BC, which is the world's earliest known example of a two-digit, base-10 multiplication table. The Tsinghua collection indicates that sophisticated commercial arithmetic was already established during this period. As neighboring territories of the seven states were annexed (including areas of modern Sichuan and Liaoning), they came to be governed under an administrative system of commanderies and prefectures. This system had been in use elsewhere since the Spring and Autumn period, and its influence on administration would prove resilient—its terminology can still be seen in the sheng and xian ("provinces" and "counties") of contemporary China. The state of Qin became dominant in the waning decades of the Warring States period, conquering the Shu capital of Jinsha on the Chengdu Plain and then eventually driving Chu from its place in the Han River valley. Qin imitated the administrative reforms of the other states, thereby becoming a powerhouse. Its final expansion began during the reign of Ying Zheng, who ultimately conquered the other six regional powers and proclaimed himself China's first emperor—known to history as Qin Shi Huang.
Imperial China

Early imperial China

Qin dynasty (221–206 BC)
Ying Zheng's establishment of the Qin dynasty in 221 BC effectively formalized the region as an empire, rather than a state, and its pivotal status probably led to "Qin" later evolving into the Western term "China". To emphasize his sole rule, Zheng proclaimed himself the "First August Emperor"; the title, derived from Chinese mythology, became the standard for subsequent rulers. Based in Xianyang, the empire was a centralized bureaucratic monarchy, a governing scheme which dominated the future of Imperial China. In an effort to improve on the Zhou's perceived failures, this system consisted of more than 36 commanderies, made up of counties and progressively smaller divisions, each with a local leader. Many aspects of society were informed by Legalism, a state ideology promoted by the emperor and his chancellor Li Si that had been introduced at an earlier time by Shang Yang. In legal matters this philosophy emphasized mutual responsibility in disputes and severe punishments, while economic practices included the general encouragement of agriculture and repression of trade. Reforms occurred in weights and measures, writing styles (seal script) and metal currency (Ban Liang), all of which were standardized. Traditionally, Qin Shi Huang is regarded as having ordered a mass burning of books and the live burial of scholars under the guise of Legalism, though contemporary scholars express considerable doubt about the historicity of this event. Despite its importance, Legalism was probably supplemented in non-political matters by Confucianism for social and moral beliefs and the five-element Wuxing theories for cosmological thought. The Qin administration kept exhaustive records on the population, collecting information on sex, age, social status and residence. Commoners, who made up over 90% of the population, "suffered harsh treatment" according to the historian Patricia Buckley Ebrey, as they were often conscripted into forced labor for the empire's construction projects. This included a massive system of imperial highways in 220 BC, which ranged around altogether. Other major construction projects were assigned to the general Meng Tian, who concurrently led a successful campaign against the northern Xiongnu peoples (210s BC), reportedly with 300,000 troops. Under Qin Shi Huang's orders, Meng supervised the combining of numerous ancient walls into what came to be known as the Great Wall of China and oversaw the building of a straight highway between northern and southern China. After Qin Shi Huang's death the Qin government drastically deteriorated and eventually capitulated in 207 BC after the Qin capital was captured and sacked by rebels, which would ultimately lead to the establishment of the Han Empire.

Han dynasty (206 BC – AD 220)

Western Han
The Han dynasty was founded by Liu Bang, who emerged victorious in the Chu–Han Contention that followed the fall of the Qin dynasty. A golden age in Chinese history, the Han dynasty's long period of stability and prosperity consolidated the foundation of China as a unified state under a central imperial bureaucracy, which was to last intermittently for most of the next two millennia. During the Han dynasty, the territory of China was extended to most of China proper and to areas far to the west. Confucianism was officially elevated to orthodox status and was to shape the subsequent Chinese civilization. Art, culture and science all advanced to unprecedented heights.
Because of the profound and lasting impact of this period of Chinese history, the dynasty name "Han" has been taken as the name of the Chinese people, now the dominant ethnic group in modern China, and has been commonly used to refer to the Chinese language and written characters. After the initial laissez-faire policies of Emperors Wen and Jing, the ambitious Emperor Wu brought the empire to its zenith. To consolidate his power, he disenfranchised the majority of imperial relatives, appointing military governors to control their former lands. As a further step, he extended patronage to Confucianism, which emphasizes stability and order in a well-structured society. Imperial Universities were established to support its study. At the urging of his Legalist advisors, however, he also strengthened the fiscal structure of the dynasty with government monopolies. Major military campaigns were launched to weaken the nomadic Xiongnu Empire, limiting their influence north of the Great Wall. Along with the diplomatic efforts led by Zhang Qian, the sphere of influence of the Han Empire extended to the states in the Tarim Basin, opening up the Silk Road that connected China to the west and stimulating bilateral trade and cultural exchange. To the south, various small kingdoms far beyond the Yangtze River Valley were formally incorporated into the empire. Emperor Wu also dispatched a series of military campaigns against the Baiyue tribes. The Han annexed Minyue in 135 BC and 111 BC, Nanyue in 111 BC, and Dian in 109 BC. Migration and military expeditions led to the cultural assimilation of the south. They also brought the Han into contact with kingdoms in Southeast Asia, introducing diplomacy and trade. After Emperor Wu the empire slipped into gradual stagnation and decline. Economically, the state treasury was strained by excessive campaigns and projects, while land acquisitions by elite families gradually drained the tax base. Various consort clans exerted increasing control over strings of incompetent emperors and eventually the dynasty was briefly interrupted by the usurpation of Wang Mang.

Xin dynasty
In AD 9 the usurper Wang Mang claimed that the Mandate of Heaven called for the end of the Han dynasty and the rise of his own, and he founded the short-lived Xin dynasty. Wang Mang started an extensive program of land and other economic reforms, including the outlawing of slavery and land nationalization and redistribution. These programs, however, were never supported by the landholding families, because they favored the peasants. The instability of power brought about chaos, uprisings, and loss of territories. This was compounded by mass flooding of the Yellow River; silt buildup caused it to split into two channels and displaced large numbers of farmers. Wang Mang was eventually killed in Weiyang Palace by an enraged peasant mob in AD 23.

Eastern Han
Emperor Guangwu reinstated the Han dynasty with the support of landholding and merchant families at Luoyang, east of the former capital Xi'an. Thus, this new era is termed the Eastern Han dynasty. With the capable administrations of Emperors Ming and Zhang, former glories of the dynasty were reclaimed, with brilliant military and cultural achievements. The Xiongnu Empire was decisively defeated. The diplomat and general Ban Chao further expanded the conquests across the Pamirs to the shores of the Caspian Sea, thus reopening the Silk Road and bringing trade and foreign cultures, along with the arrival of Buddhism.
With extensive connections with the west, the first of several Roman embassies to China was recorded in Chinese sources, arriving via the sea route in AD 166, with a second in AD 284. The Eastern Han dynasty was one of the most prolific eras of science and technology in ancient China, notable for the historic invention of papermaking by Cai Lun and the numerous scientific and mathematical contributions of the famous polymath Zhang Heng.

Six Dynasties

Three Kingdoms (AD 220–280)
By the 2nd century, the empire declined amidst land acquisitions, invasions, and feuding between consort clans and eunuchs. The Yellow Turban Rebellion broke out in AD 184, ushering in an era of warlords. In the ensuing turmoil, three states emerged, trying to gain predominance and reunify the land, giving this historical period its name. The classic historical novel Romance of the Three Kingdoms dramatizes events of this period. The warlord Cao Cao reunified the north in 208, and in 220 his son accepted the abdication of Emperor Xian of Han, thus initiating the Wei dynasty. Soon, Wei's rivals Shu and Wu proclaimed their independence. This period was characterized by a gradual decentralization of the state that had existed during the Qin and Han dynasties, and an increase in the power of great families. In 266, the Jin dynasty overthrew the Wei and later unified the country in 280, but this union was short-lived.

Jin dynasty (AD 266–420)
The Jin dynasty was severely weakened by the War of the Eight Princes and lost control of northern China after non-Han Chinese settlers rebelled and captured Luoyang and Chang'an. In 317, the Jin prince Sima Rui, based in modern-day Nanjing, became emperor and continued the dynasty, now known as the Eastern Jin, which held southern China for another century. Prior to this move, historians refer to the Jin dynasty as the Western Jin.

Sixteen Kingdoms (AD 304–439)
Northern China fragmented into a series of independent states known as the Sixteen Kingdoms, most of which were founded by Xiongnu, Xianbei, Jie, Di and Qiang rulers. These non-Han peoples were ancestors of the Turks, Mongols, and Tibetans. Many had, to some extent, been "sinicized" long before their ascent to power. In fact, some of them, notably the Qiang and the Xiongnu, had already been allowed to live in the frontier regions within the Great Wall since late Han times. During this period, warfare ravaged the north and prompted large-scale Han Chinese migration south to the Yangtze River Basin and Delta.

Northern and Southern dynasties (AD 420–589)
In the early 5th century China entered a period known as the Northern and Southern dynasties, in which parallel regimes ruled the northern and southern halves of the country. In the south, the Eastern Jin gave way to the Liu Song, Southern Qi, Liang and finally Chen. Each of these Southern dynasties was led by Han Chinese ruling families and used Jiankang (modern Nanjing) as the capital. They held off attacks from the north and preserved many aspects of Chinese civilization, while northern barbarian regimes began to sinify. In the north the last of the Sixteen Kingdoms was extinguished in 439 by the Northern Wei, a kingdom founded by the Xianbei, a nomadic people who unified northern China. The Northern Wei eventually split into the Eastern and Western Wei, which then became the Northern Qi and Northern Zhou. These regimes were dominated by Xianbei or Han Chinese who had married into Xianbei families.
During this period most Xianbei people adopted Han surnames, eventually leading to complete assimilation into the Han. Despite the division of the country, Buddhism spread throughout the land. In southern China, fierce debates about whether Buddhism should be allowed were held frequently by the royal court and nobles. By the end of the era, Buddhists and Taoists had become much more tolerant of each other.

Mid-imperial China

Sui dynasty (581–618)
The short-lived Sui dynasty was a pivotal period in Chinese history. Founded by Emperor Wen in 581 in succession to the Northern Zhou, the Sui went on to conquer the Southern Chen in 589 to reunify China, ending three centuries of political division. The Sui pioneered many new institutions, including the government system of Three Departments and Six Ministries and imperial examinations for selecting officials from among commoners, while improving on the fubing system of army conscription and the equal-field system of land distribution. These policies, which were adopted by later dynasties, brought enormous population growth and amassed great wealth for the state. Standardized coinage was enforced throughout the unified empire. Buddhism took root as a prominent religion and was supported officially. Sui China was known for its numerous mega-construction projects. Intended for shipping grain and transporting troops, the Grand Canal was constructed, linking the capitals Daxing (Chang'an) and Luoyang to the wealthy southeast region, and, on another route, to the northeast border. The Great Wall was also expanded, while a series of military conquests and diplomatic maneuvers further pacified its borders. However, the massive invasions of the Korean Peninsula during the Goguryeo–Sui War failed disastrously, triggering widespread revolts that led to the fall of the dynasty.

Tang dynasty (618–907)
The Tang dynasty was a golden age of Chinese civilization, a prosperous, stable, and creative period with significant developments in culture, art, literature, particularly poetry, and technology. Buddhism became the predominant religion for the common people. Chang'an (modern Xi'an), the national capital, was the largest city in the world during its time. The first emperor, Emperor Gaozu, came to the throne on 18 June 618, placed there by his son, Li Shimin, who became the second emperor, Taizong, one of the greatest emperors in Chinese history. Combined military conquests and diplomatic maneuvers reduced threats from Central Asian tribes, extended the border, and brought neighboring states into a tributary system. Military victories in the Tarim Basin kept the Silk Road open, connecting Chang'an to Central Asia and areas far to the west. In the south, lucrative maritime trade routes from port cities such as Guangzhou connected with distant countries, and foreign merchants settled in China, encouraging a cosmopolitan culture. The Tang culture and social systems were observed and adapted by neighboring countries, most notably Japan. Internally the Grand Canal linked the political heartland in Chang'an to the agricultural and economic centers in the eastern and southern parts of the empire. Xuanzang, a Chinese Buddhist monk, scholar, traveller, and translator, travelled to India on his own and returned with "over six hundred Mahayana and Hinayana texts, seven statues of the Buddha and more than a hundred sarira relics." The prosperity of the early Tang dynasty was supported by a centralized bureaucracy.
The government was organized as "Three Departments and Six Ministries" to separately draft, review, and implement policies. These departments were run by royal family members and landed aristocrats, but as the dynasty wore on, they were joined or replaced by scholar-officials selected by imperial examinations, setting patterns for later dynasties. Under the Tang "equal-field system" all land was owned by the Emperor and granted to each family according to household size. Men granted land were conscripted for military service for a fixed period each year, a military policy known as the fubing system. These policies stimulated rapid growth in productivity and supported a significant army without much burden on the state treasury. By the dynasty's midpoint, however, standing armies had replaced conscription, and land was continuously falling into the hands of private owners and religious institutions granted exemptions. The dynasty continued to flourish under the rule of Empress Wu Zetian, the only official empress regnant in Chinese history, and reached its zenith during the long reign of Emperor Xuanzong, who oversaw an empire that stretched from the Pacific to the Aral Sea with at least people. There were vibrant artistic and cultural creations, including works of the greatest Chinese poets, Li Bai and Du Fu. At the zenith of the empire's prosperity, the An Lushan Rebellion from 755 to 763 was a watershed event. War, disease, and economic disruption devastated the population and drastically weakened the central imperial government. Upon suppression of the rebellion, regional military governors, known as jiedushi, gained increasingly autonomous status. With the loss of revenue from the land tax, the central imperial government came to rely heavily on the salt monopoly. Externally, formerly submissive states raided the empire and the vast border territories were lost for centuries. Nevertheless, civil society recovered and thrived amidst the weakened imperial bureaucracy. In the late Tang period the empire was worn out by recurring revolts of the regional military governors, while scholar-officials engaged in fierce factional strife and corrupt eunuchs amassed immense power. Catastrophically, the Huang Chao Rebellion, from 874 to 884, devastated the entire empire for a decade. The sack of the southern port Guangzhou in 879 was followed by the massacre of most of its inhabitants, especially the large foreign merchant enclaves. By 881, both capitals, Luoyang and Chang'an, had fallen successively. The reliance on ethnic Han and Turkic warlords in suppressing the rebellion increased their power and influence. Consequently, the fall of the dynasty following Zhu Wen's usurpation led to an era of division.

Five Dynasties and Ten Kingdoms (907–960)
The period of political disunity between the Tang and the Song, known as the Five Dynasties and Ten Kingdoms period, lasted from 907 to 960. During this half-century, China was in all respects a multi-state system. Five regimes, namely the (Later) Liang, Tang, Jin, Han and Zhou, rapidly succeeded one another in control of the traditional Imperial heartland in northern China. Among these regimes, the rulers of the (Later) Tang, Jin and Han were sinicized Shatuo Turks, who ruled over an ethnic majority of Han Chinese. More stable and smaller regimes of mostly ethnic Han rulers coexisted in southern and western China over the period, cumulatively constituting the "Ten Kingdoms".
Amidst political chaos in the north, the strategic Sixteen Prefectures (the region along today's Great Wall) were ceded to the emerging Khitan Liao dynasty, which drastically weakened the defense of China proper against northern nomadic empires. To the south, Vietnam gained lasting independence after being a Chinese prefecture for many centuries. With wars dominating northern China, there were mass southward migrations of population, which further enhanced the southward shift of cultural and economic centers in China. The era ended with the coup of the Later Zhou general Zhao Kuangyin and the establishment of the Song dynasty in 960, which eventually annihilated the remains of the "Ten Kingdoms" and reunified China.

Late imperial China

Song, Liao, Jin, and Western Xia dynasties (960–1279)
In 960, the Song dynasty was founded by Emperor Taizu, with its capital established in Kaifeng (then known as Bianjing). In 979, the Song dynasty reunified most of China proper, while large swaths of the outer territories were occupied by sinicized nomadic empires. The Khitan Liao dynasty, which lasted from 907 to 1125, ruled over Manchuria, Mongolia, and parts of Northern China. Meanwhile, in what are now the north-western Chinese provinces of Gansu, Shaanxi, and Ningxia, the Tangut tribes founded the Western Xia dynasty, which lasted from 1032 to 1227. Aiming to recover the strategic Sixteen Prefectures lost under the previous dynasty, the Song launched campaigns against the Liao dynasty in the early Song period, all of which ended in failure. Then in 1004, the Liao cavalry swept over the exposed North China Plain and reached the outskirts of Kaifeng, forcing the Song's submission and then agreement to the Chanyuan Treaty, which imposed heavy annual tribute payments on the Song treasury. The treaty was a significant reversal of Chinese dominance of the traditional tributary system. Yet the annual outflow of Song silver to the Liao was paid back through the purchase of Chinese goods and products, which expanded the Song economy and replenished its treasury. This dampened the incentive for the Song to campaign further against the Liao. Meanwhile, this cross-border trade and contact induced further sinicization within the Liao Empire, at the expense of its military might, which was derived from its nomadic lifestyle. Similar treaties and socio-economic consequences occurred in the Song's relations with the Jin dynasty. Within the Liao Empire the Jurchen tribes revolted against their overlords to establish the Jin dynasty in 1115. In 1125, the devastating Jin cataphracts annihilated the Liao dynasty, while remnants of the Liao court fled to Central Asia to found the Qara Khitai Empire (Western Liao dynasty). The Jin invasion of the Song dynasty followed swiftly. In 1127, Kaifeng was sacked, a massive catastrophe known as the Jingkang Incident, ending the Northern Song dynasty. Later the entire north of China was conquered. The surviving members of the Song court regrouped in the new capital city of Hangzhou and initiated the Southern Song dynasty, which ruled territories south of the Huai River. In the ensuing years, the territory and population of China were divided between the Song dynasty, the Jin dynasty and the Western Xia dynasty. The era ended with the Mongol conquest, as Western Xia fell in 1227, the Jin dynasty in 1234, and finally the Southern Song dynasty in 1279. Despite its military weakness, the Song dynasty is widely considered to be the high point of classical Chinese civilization.
The Song economy, facilitated by technological advancement, had reached a level of sophistication probably unseen in world history before its time. The population soared to over and the living standards of common people improved tremendously due to improvements in rice cultivation and the wide availability of coal for production. The capital cities of Kaifeng and subsequently Hangzhou were both the most populous cities in the world for their time, and encouraged vibrant civil societies unmatched by previous Chinese dynasties. Although land trading routes to the far west were blocked by nomadic empires, there was extensive maritime trade with neighboring states, which facilitated the use of Song coinage as the de facto currency of exchange. Giant wooden vessels equipped with compasses traveled throughout the China Seas and northern Indian Ocean. The concept of insurance was practised by merchants to hedge the risks of such long-haul maritime shipments. With prosperous economic activity, the first use of paper currency in history emerged in the western city of Chengdu, as a supplement to the existing copper coins. The Song dynasty is considered to be a golden age of great advancements in science and technology in China, thanks to innovative scholar-officials such as Su Song (1020–1101) and Shen Kuo (1031–1095). The hydro-mechanical astronomical clock, the first continuous and endless power-transmitting chain, woodblock printing and paper money were all invented during the Song dynasty. There was court intrigue between the political reformers and conservatives, led by the chancellors Wang Anshi and Sima Guang, respectively. By the mid-to-late 13th century, the Chinese had adopted the dogma of Neo-Confucian philosophy formulated by Zhu Xi. Enormous literary works were compiled during the Song dynasty, such as the innovative historical narrative Zizhi Tongjian ("Comprehensive Mirror to Aid in Government"). The invention of movable-type printing further facilitated the spread of knowledge. Culture and the arts flourished, with grandiose artworks such as Along the River During the Qingming Festival and Eighteen Songs of a Nomad Flute, along with great Buddhist painters such as the prolific Lin Tinggui. The Song dynasty was also a period of major innovation in the history of warfare. Gunpowder, while invented in the Tang dynasty, was first put to use on the battlefield by the Song army, inspiring a succession of new firearm and siege engine designs. During the Southern Song dynasty, as its survival hinged decisively on guarding the Yangtze and Huai Rivers against the cavalry forces from the north, the first standing navy in China was assembled in 1132, with its admiral's headquarters established at Dinghai. Paddle-wheel warships equipped with trebuchets could launch incendiary bombs made of gunpowder and lime, as recorded in the Song victory over the invading Jin forces at the Battle of Tangdao in the East China Sea, and the Battle of Caishi on the Yangtze River in 1161. The advances in civilization during the Song dynasty came to an abrupt end following the devastating Mongol conquest, during which the population sharply dwindled, with a marked contraction in the economy. Although the Southern Song fiercely resisted the Mongol advance for more than three decades, its capital Hangzhou fell in 1276, followed by the final annihilation of the Song standing navy at the Battle of Yamen in 1279.
Yuan dynasty (1271–1368)
The Yuan dynasty was formally proclaimed in 1271, when the Mongol Great Khan, Kublai Khan, one of the grandsons of Genghis Khan, assumed the additional title of Emperor of China and considered his inherited part of the Mongol Empire a Chinese dynasty. In the preceding decades, the Mongols had conquered the Jin dynasty in Northern China, and the Southern Song dynasty fell in 1279 after a protracted and bloody war. The Mongol Yuan dynasty became the first conquest dynasty in Chinese history to rule the whole of China proper and its population as an ethnic minority. The dynasty also directly controlled the Mongol heartland and other regions, inheriting the largest share of territory of the eastern Mongol Empire, which roughly coincided with the modern area of China and nearby regions in East Asia. Further expansion of the empire was halted after defeats in the invasions of Japan and Vietnam. Following the precedent of the previous Jin dynasty, the capital of the Yuan dynasty was established at Khanbaliq (also known as Dadu, modern-day Beijing). The Grand Canal was reconstructed to connect the remote capital city to economic hubs in the southern part of China, setting the precedent and laying the foundation for Beijing to remain the capital of the successive regimes that unified mainland China. A series of Mongol civil wars in the late 13th century led to the division of the Mongol Empire. In 1304 the emperors of the Yuan dynasty were upheld as the nominal Khagan over the western khanates (the Chagatai Khanate, the Golden Horde and the Ilkhanate), which nonetheless remained de facto autonomous. The era was known as the Pax Mongolica, when much of the Asian continent was ruled by the Mongols. For the first and only time in history, the Silk Road was controlled entirely by a single state, facilitating the flow of people, trade, and cultural exchange. A network of roads and a postal system were established to connect the vast empire. Lucrative maritime trade, developed during the previous Song dynasty, continued to flourish, with Quanzhou and Hangzhou emerging as the largest ports in the world. Adventurous travelers from the far west, most notably the Venetian Marco Polo, would settle in China for decades. Upon his return, his detailed travel record inspired generations of medieval Europeans with the splendors of the Far East. The Yuan dynasty was the first economy in which paper currency, known at the time as Jiaochao, was used as the predominant medium of exchange. Its unrestricted issuance in the late Yuan dynasty inflicted hyperinflation, which eventually brought about the downfall of the dynasty. While the Mongol rulers of the Yuan dynasty adapted substantially to Chinese culture, their sinicization was of a lesser extent compared to earlier conquest dynasties in Chinese history. To preserve their superiority as the conquering and ruling class, traditional nomadic customs and heritage from the Mongolian Steppe were held in high regard. On the other hand, the Mongol rulers also adapted flexibly to a variety of cultures from the many advanced civilizations within the vast empire. Traditional social structure and culture in China underwent an immense transformation during the Mongol dominance. Large groups of foreign migrants settled in China, enjoying an elevated social status over the majority Han Chinese while enriching Chinese culture with foreign elements. The class of scholar-officials and intellectuals, traditional bearers of elite Chinese culture, lost substantial social status.
This stimulated the development of the culture of the common folk. There were prolific works in zaju variety shows and literary songs (sanqu), which were written in a distinctive poetry style known as qu. Novels in the vernacular style gained unprecedented status and popularity. Before the Mongol invasion, Chinese dynasties reported approximately inhabitants; after the conquest had been completed in 1279, the 1300 census reported roughly people. This major decline is not necessarily due only to Mongol killings. Scholars such as Frederick W. Mote argue that the wide drop in numbers reflects an administrative failure to record rather than an actual decrease; others such as Timothy Brook argue that the Mongols created a system of enserfment among a huge portion of the Chinese populace, causing many to disappear from the census altogether; other historians, including William McNeill and David Morgan, consider that plague was the main factor behind the demographic decline during this period. In the 14th century China suffered additional depredations from epidemics of plague, estimated to have killed around a quarter of the population of China. Throughout the Yuan dynasty, there was some general sentiment among the populace against Mongol dominance. Yet rather than any nationalist cause, it was mainly a string of natural disasters and incompetent governance that triggered widespread peasant uprisings from the 1340s onward. After the massive naval engagement at Lake Poyang, Zhu Yuanzhang prevailed over the other rebel forces in the south. He proclaimed himself emperor and founded the Ming dynasty in 1368. The same year his northern expedition army captured the capital Khanbaliq. The Yuan remnants fled back to Mongolia and sustained their regime there. Other Mongol khanates in Central Asia continued to exist after the fall of the Yuan dynasty in China.

Ming dynasty (1368–1644)
The Ming dynasty was founded by Zhu Yuanzhang in 1368, who proclaimed himself the Hongwu Emperor. The capital was initially set at Nanjing, and was later moved to Beijing from the Yongle Emperor's reign onward. Urbanization increased as the population grew and as the division of labor grew more complex. Large urban centers, such as Nanjing and Beijing, also contributed to the growth of private industry. In particular, small-scale industries grew up, often specializing in paper, silk, cotton, and porcelain goods. For the most part, however, relatively small urban centers with markets proliferated around the country. Town markets mainly traded food, with some necessary manufactures such as pins or oil. Despite the xenophobia and intellectual introspection characteristic of the increasingly popular new school of neo-Confucianism, China under the early Ming dynasty was not isolated. Foreign trade and other contacts with the outside world, particularly Japan, increased considerably. Chinese merchants explored all of the Indian Ocean, reaching East Africa with the voyages of Zheng He. The Hongwu Emperor, being the only founder of a Chinese dynasty who was also of peasant origin, had laid the foundation of a state that relied fundamentally on agriculture. Commerce and trade, which had flourished in the previous Song and Yuan dynasties, were less emphasized. Neo-feudal landholdings of the Song and Mongol periods were expropriated by the Ming rulers. Land estates were confiscated by the government, fragmented, and rented out. Private slavery was forbidden.
Consequently, after the death of the Yongle Emperor, independent peasant landholders predominated in Chinese agriculture. These laws might have paved the way to removing the worst of the poverty of the previous regimes. Towards the later era of the Ming dynasty, with declining government control, commerce, trade and private industries revived. The dynasty had a strong and complex central government that unified and controlled the empire. The emperor's role became more autocratic, although the Hongwu Emperor necessarily continued to use what he called the "Grand Secretariat" to assist with the immense paperwork of the bureaucracy, including memorials (petitions and recommendations to the throne), imperial edicts in reply, reports of various kinds, and tax records. It was this same bureaucracy that later prevented the Ming government from being able to adapt to changes in society, and eventually led to its decline. The Yongle Emperor strenuously tried to extend China's influence beyond its borders by demanding that other rulers send ambassadors to China to present tribute. A large navy was built, including four-masted ships displacing 1,500 tons. A standing army of 1 million troops was created. The Chinese armies conquered and occupied Vietnam for around 20 years, while the Chinese fleet sailed the China seas and the Indian Ocean, cruising as far as the east coast of Africa. The Chinese gained influence in eastern Moghulistan. Several maritime Asian nations sent envoys with tribute for the Chinese emperor. Domestically, the Grand Canal was expanded and became a stimulus to domestic trade. Over 100,000 tons of iron per year were produced. Many books were printed using movable type. The imperial palace in Beijing's Forbidden City reached its current splendor. It was also during these centuries that the potential of south China came to be fully exploited. New crops were widely cultivated and industries such as those producing porcelain and textiles flourished. In 1449 Esen Tayisi led an Oirat Mongol invasion of northern China which culminated in the capture of the Zhengtong Emperor at Tumu. From then on, the Ming were on the defensive on the northern frontier, which led to the Ming Great Wall being built. Most of what remains of the Great Wall of China today was either built or repaired by the Ming. The brick and granite work was enlarged, the watchtowers were redesigned, and cannons were placed along its length. At sea the Ming became increasingly isolationist after the death of the Yongle Emperor. The treasure voyages which sailed the Indian Ocean were discontinued, and maritime prohibition laws were put in place, banning the Chinese from sailing abroad. European traders who reached China in the midst of the Age of Discovery were repeatedly rebuffed in their requests for trade, with the Portuguese being repulsed by the Ming navy at Tuen Mun in 1521 and again in 1522. Domestic and foreign demand for overseas trade, deemed illegal by the state, led to widespread wokou piracy attacking the southeastern coastline during the rule of the Jiajing Emperor (1507–1567), which only subsided after the opening of ports in Guangdong and Fujian and much military suppression. In addition to raids from Japan by the wokou, raids from Taiwan and the Philippines by the Pisheye also ravaged the southern coasts. The Portuguese were allowed to settle in Macau in 1557 for trade, which remained in Portuguese hands until 1999.
After the Spanish invasion of the Philippines, trade with the Spanish at Manila brought large quantities of Mexican and Peruvian silver from the Spanish Americas to China. The Dutch entry into the Chinese seas was also met with fierce resistance, with the Dutch being chased off the Penghu islands in the Sino-Dutch conflicts of 1622–1624 and forced to settle in Taiwan instead. The Dutch in Taiwan fought with the Ming in the Battle of Liaoluo Bay in 1633 and lost, and eventually surrendered to the Ming loyalist Koxinga in 1662, after the fall of the Ming dynasty. In 1556, during the rule of the Jiajing Emperor, the Shaanxi earthquake killed about 830,000 people, the deadliest earthquake of all time. The Ming dynasty intervened deeply in the Japanese invasions of Korea (1592–98), which ended with the withdrawal of all invading Japanese forces from Korea and the restoration of the Joseon dynasty, its traditional ally and tributary state. The regional hegemony of the Ming dynasty was preserved, but at a heavy cost to its resources. Coincidentally, with the Ming's control of Manchuria in decline, the Manchu (Jurchen) tribes, under their chieftain Nurhaci, broke away from Ming rule and emerged as a powerful, unified state, which was later proclaimed as the Qing dynasty. It went on to subdue the much-weakened Korea as its tributary, conquered Mongolia, and expanded its territory to the outskirts of the Great Wall. The most elite army of the Ming dynasty was stationed at the Shanhai Pass to guard this last stronghold against the Manchus, which weakened its suppression of internal peasant uprisings.

Qing dynasty (1636–1912)
The Qing dynasty (1644–1912) was the last imperial dynasty in China. Founded by the Manchus, it was the second conquest dynasty to rule the entirety of China proper, and roughly doubled the territory controlled by the Ming. The Manchus were formerly known as Jurchens, residing in the northeastern part of the Ming territory outside the Great Wall. They emerged as the major threat to the late Ming dynasty after Nurhaci united all Jurchen tribes and his son, Hong Taiji, declared the founding of the Qing dynasty in 1636. The Qing dynasty set up the Eight Banners system that provided the basic framework for the Qing military conquest. Li Zicheng's peasant rebellion captured Beijing in 1644 and the Chongzhen Emperor, the last Ming emperor, committed suicide. The Manchus allied with the Ming general Wu Sangui to seize Beijing, which was made the capital of the Qing dynasty, and then proceeded to subdue the Ming remnants in the south. During the Ming–Qing transition, the Ming dynasty and later the Southern Ming, the emerging Qing dynasty, and several other factions, such as the Shun dynasty and the Xi dynasty founded by peasant revolt leaders, fought against one another. This conflict, along with innumerable natural disasters at that time, such as those caused by the Little Ice Age, and epidemics like the Great Plague during the last decade of the Ming dynasty, caused enormous loss of life and significant harm to the economy. In total, these decades saw the loss of as many as lives, but the Qing appeared to have restored China's imperial power and inaugurated another flowering of the arts. The early Manchu emperors combined traditions of Inner Asian rule with Confucian norms of traditional Chinese government and were considered a Chinese dynasty. The Manchus enforced a 'queue order', forcing Han Chinese men to adopt the Manchu queue hairstyle.
Officials were required to wear Manchu-style clothing Changshan (bannermen dress and Tangzhuang), but ordinary Han civilians were allowed to wear traditional Han clothing. Bannermen could not undertake trade or manual labor; they had to petition to be removed from banner status. They were considered aristocracy and were given annual pensions, land, and allotments of cloth. The Kangxi Emperor ordered the creation of the Kangxi Dictionary, the most complete dictionary of Chinese characters that had been compiled. Over the next half-century, all areas previously under the Ming dynasty were consolidated under the Qing. Conquests in Central Asia in the eighteenth century extended territorial control. Between 1673 and 1681, the Kangxi Emperor suppressed the Revolt of the Three Feudatories, an uprising of three generals in Southern China who had been denied hereditary rule of large fiefdoms granted by the previous emperor. In 1683, the Qing staged an amphibious assault on southern Taiwan, bringing down the rebel Kingdom of Tungning, which was founded by the Ming loyalist Koxinga (Zheng Chenggong) in 1662 after the fall of the Southern Ming, and had served as a base for continued Ming resistance in Southern China. The Qing defeated the Russians at Albazin, resulting in the Treaty of Nerchinsk. By the end of Qianlong Emperor's long reign in 1796, the Qing Empire was at its zenith. The Qing ruled more than one-third of the world's population, and had the largest economy in the world. By area it was one of the largest empires ever. In the 19th century the empire was internally restive and externally threatened by western powers. The defeat by the British Empire in the First Opium War (1840) led to the Treaty of Nanking (1842), under which Hong Kong was ceded to Britain and importation of opium (produced by British Empire territories) was allowed. Opium usage continued to grow in China, adversely affecting societal stability. Subsequent military defeats and unequal treaties with other western powers continued even after the fall of the Qing dynasty. Internally the Taiping Rebellion (1851–1864), a Christian religious movement led by the "Heavenly King" Hong Xiuquan swept from the south to establish the Taiping Heavenly Kingdom and controlled roughly a third of China proper for over a decade. The court in desperation empowered Han Chinese officials such as Zeng Guofan to raise local armies. After initial defeats, Zeng crushed the rebels in the Third Battle of Nanking in 1864. This was one of the largest wars in the 19th century in troop involvement; there was massive loss of life, with a death toll of about 20 million. A string of civil disturbances followed, including the Punti–Hakka Clan Wars, Nian Rebellion, Dungan Revolt, and Panthay Rebellion. All rebellions were ultimately put down, but at enormous cost and with millions dead, seriously weakening the central imperial authority. China never rebuilt a strong central army, and many local officials used their military power to effectively rule independently in their provinces. Yet the dynasty appeared to recover in the Tongzhi Restoration (1860–1872), led by Manchu royal family reformers and Han Chinese officials such as Zeng Guofan and his proteges Li Hongzhang and Zuo Zongtang. Their Self-Strengthening Movement made effective institutional reforms, imported Western factories and communications technology, with prime emphasis on strengthening the military. 
However, the reform was undermined by official rivalries, cynicism, and quarrels within the imperial family. The defeat of the modernized Beiyang Fleet in the First Sino-Japanese War (1894–1895) led to the formation of the New Army under Yuan Shikai. The Guangxu Emperor, advised by Kang Youwei, then launched a comprehensive reform effort, the Hundred Days' Reform (1898). Empress Dowager Cixi, however, feared that precipitous change would lead to bureaucratic opposition and foreign intervention and quickly suppressed it. In the summer of 1900, the Boxer Uprising opposed foreign influence and murdered Chinese Christians and foreign missionaries. When Boxers entered Beijing, the Qing government ordered all foreigners to leave, but they and many Chinese Christians were besieged in the foreign legations quarter. An Eight-Nation Alliance sent the Seymour Expedition of Japanese, Russian, British, Italian, German, French, American, and Austrian troops to relieve the siege, but they were forced to retreat by Boxer and Qing troops at the Battle of Langfang. After the Alliance's attack on the Dagu Forts, the court declared war on the Alliance and authorized the Boxers to join with imperial armies. After fierce fighting at Tientsin, the Alliance formed the second, much larger Gaselee Expedition and finally reached Beijing; the Empress Dowager evacuated to Xi'an. The Boxer Protocol ended the war, exacting a tremendous indemnity. The Qing court then instituted "New Policies" of administrative and legal reform, including the abolition of the examination system. But young officials, military officers, and students debated reform, perhaps a constitutional monarchy, or the overthrow of the dynasty and the creation of a republic. They were inspired by an emerging public opinion formed by intellectuals such as Liang Qichao and the revolutionary ideas of Sun Yat-sen. A localised military uprising, the Wuchang Uprising, began on 10 October 1911 in Wuchang (today part of Wuhan) and soon spread. The Republic of China was proclaimed on 1 January 1912, ending 2,000 years of dynastic rule.

Modern China

Republic of China (since 1912)
The provisional government of the Republic of China was formed in Nanking on 12 March 1912. Sun Yat-sen became President of the Republic of China, but he turned power over to Yuan Shikai, who commanded the New Army. Over the next few years, Yuan proceeded to abolish the national and provincial assemblies and declared himself emperor of the Empire of China in late 1915. Yuan's imperial ambitions were fiercely opposed by his subordinates; faced with the prospect of rebellion, he abdicated in March 1916 and died of natural causes in June. Yuan's death in 1916 left a power vacuum; the republican government was all but shattered. This opened the way for the Warlord Era, during which much of China was ruled by shifting coalitions of competing provincial military leaders and the Beiyang government. Intellectuals, disappointed in the failure of the Republic, launched the New Culture Movement. In 1919, the May Fourth Movement began as a response to the pro-Japanese terms imposed on China by the Treaty of Versailles following World War I. It quickly became a nationwide protest movement. The protests were a moral success as the cabinet fell and China refused to sign the Treaty of Versailles, which had awarded German holdings in Shandong to Japan. Memory of the mistreatment at Versailles fuels resentment into the 21st century.
Political and intellectual ferment waxed strong throughout the 1920s and 1930s. According to Patricia Ebrey: "Nationalism, patriotism, progress, science, democracy, and freedom were the goals; imperialism, feudalism, warlordism, autocracy, patriarchy, and blind adherence to tradition were the enemies. Intellectuals struggled with how to be strong and modern and yet Chinese, how to preserve China as a political entity in the world of competing nations." In the 1920s Sun Yat-sen established a revolutionary base in Guangzhou and set out to unite the fragmented nation. He welcomed assistance from the Soviet Union (itself fresh from Lenin's Communist takeover) and he entered into an alliance with the fledgling Chinese Communist Party (CCP). After Sun's death from cancer in 1925, one of his protégés, Chiang Kai-shek, seized control of the Nationalist Party (KMT) and succeeded in bringing most of south and central China under its rule in the Northern Expedition (1926–1927). Having defeated the warlords in south and central China by military force, Chiang was able to secure the nominal allegiance of the warlords in the North and establish the Nationalist government in Nanking. In 1927, Chiang turned on the CCP and relentlessly purged the Communist elements in his National Revolutionary Army (NRA). In 1934, driven from their mountain bases such as the Chinese Soviet Republic, the CCP forces embarked on the Long March across China's most desolate terrain to the northwest, where they established a guerrilla base at Yan'an in Shaanxi. During the Long March, the communists reorganized under a new leader, Mao Zedong (Mao Tse-tung). The bitter Chinese Civil War between the Nationalists and the Communists continued, openly or clandestinely, through the 14-year-long Japanese occupation of various parts of the country (1931–1945). The two Chinese parties nominally formed a United Front to oppose the Japanese in 1937, during the Second Sino-Japanese War (1937–1945), which became a part of World War II. Japanese forces committed numerous war atrocities against the civilian population, including biological warfare (see Unit 731) and the Three Alls Policy (Sankō Sakusen), the three alls being: "Kill All, Burn All and Loot All". During the war, China was recognized as one of the Allied "Big Four" in the Declaration by United Nations. China was one of the four major Allies of World War II, and was later considered one of the primary victors in the war. Following the defeat of Japan in 1945, the war between the Nationalist government forces and the CCP resumed, after failed attempts at reconciliation and a negotiated settlement. By 1949, the CCP had established control over most of the country. Odd Arne Westad says the Communists won the Civil War because they made fewer military mistakes than Chiang, and because in his search for a powerful centralized government, Chiang antagonized too many interest groups in China. Furthermore, his party was weakened in the war against the Japanese. Meanwhile, the Communists told different groups, such as peasants, exactly what they wanted to hear, and cloaked themselves in the cover of Chinese nationalism. During the civil war both the Nationalists and Communists carried out mass atrocities, with millions of non-combatants killed by both sides. These included deaths from forced conscription and massacres.
When the Nationalist government forces were defeated by CCP forces in mainland China in 1949, the Nationalist government retreated to Taiwan with its forces, along with Chiang and a large number of their supporters; the Nationalist government had taken effective control of Taiwan at the end of WWII as part of the overall Japanese surrender, when Japanese troops in Taiwan surrendered to Republic of China troops. Until the early 1970s the ROC was recognized as the sole legitimate government of China by the United Nations, the United States and most Western nations, which refused to recognize the PRC on account of the Cold War. This changed in 1971 when the PRC was seated in the United Nations, replacing the ROC. The KMT ruled Taiwan under martial law until 1987, with the stated goal of being vigilant against Communist infiltration and preparing to retake mainland China. Therefore, political dissent was not tolerated during that period. In the 1990s the ROC underwent a major democratic reform, beginning with the 1991 resignation of the members of the Legislative Yuan and National Assembly elected in 1947. These groups were originally created to represent mainland China constituencies. Also lifted were the restrictions on the use of Taiwanese languages in the broadcast media and in schools. This culminated in the first direct presidential election in 1996, in which Lee Teng-hui defeated the Democratic Progressive Party (DPP) candidate and former dissident Peng Ming-min. In 2000, the KMT's status as the ruling party ended when the DPP took power, only for the KMT to regain it in 2008 with the election of Ma Ying-jeou. Due to the controversial nature of Taiwan's political status, the ROC is currently recognized by 12 UN member states and the Holy See as the legitimate government of "China".

People's Republic of China (since 1949)
Major combat in the Chinese Civil War ended in 1949 with the KMT pulling out of the mainland, with the government relocating to Taipei and maintaining control only over a few islands. The CCP was left in control of mainland China. On 1 October 1949, Mao Zedong proclaimed the People's Republic of China. "Communist China" and "Red China" were two common names for the PRC. The PRC was shaped by a series of campaigns and five-year plans. The economic and social plan known as the Great Leap Forward caused an estimated 45 million deaths. Mao's government carried out mass executions of landowners, instituted collectivisation and implemented the Laogai camp system. Execution, deaths from forced labor and other atrocities resulted in millions of deaths under Mao. In 1966 Mao and his allies launched the Cultural Revolution, which continued until Mao's death a decade later. The Cultural Revolution, motivated by power struggles within the Party and a fear of the Soviet Union, led to a major upheaval in Chinese society. In 1972, at the peak of the Sino-Soviet split, Mao and Zhou Enlai met U.S. president Richard Nixon in Beijing to establish relations with the US. The previous year, the PRC had been admitted to the United Nations in place of the Republic of China, with permanent membership of the Security Council. A power struggle followed Mao's death in 1976. The Gang of Four were arrested and blamed for the excesses of the Cultural Revolution, marking the end of a turbulent political era in China. Deng Xiaoping outmaneuvered Mao's anointed successor, chairman Hua Guofeng, and gradually emerged as the de facto leader over the next few years.
Deng Xiaoping was the paramount leader of China from 1978 to 1992, although he never became the head of the party or state, and his influence within the Party led the country to significant economic reforms. The CCP subsequently loosened governmental control over citizens' personal lives, and the communes were disbanded, with many peasants receiving multiple land leases, which greatly increased incentives and agricultural production. In addition, many free-market areas were opened. The most successful free-market area was Shenzhen, located in Guangdong; its tax-free area still exists today. This turn of events marked China's transition from a planned economy to a mixed economy with an increasingly open market environment, a system termed by some as "market socialism", and officially by the CCP as "Socialism with Chinese characteristics". The PRC adopted its current constitution on 4 December 1982. In 1989 the death of former general secretary Hu Yaobang helped to spark the Tiananmen Square protests of that year, during which students and others campaigned for several months, speaking out against corruption and in favour of greater political reform, including democratic rights and freedom of speech. However, the protests were eventually put down on 4 June when Army troops and vehicles entered and forcibly cleared the square, with considerable numbers of fatalities. This event was widely reported, and brought worldwide condemnation and sanctions against the government. CCP general secretary and PRC president Jiang Zemin and PRC premier Zhu Rongji, both former mayors of Shanghai, led the post-Tiananmen PRC in the 1990s. Under Jiang and Zhu's ten years of administration, the PRC's economic performance pulled an estimated 150 million peasants out of poverty and sustained an average annual gross domestic product growth rate of 11.2%. The country formally joined the World Trade Organization in 2001. In 1997 and 1999, the former European colonies of British Hong Kong and Portuguese Macau became the Hong Kong and Macau special administrative regions of the People's Republic of China, respectively. Although the PRC needed economic growth to spur its development, the government began to worry that rapid economic growth was degrading the country's resources and environment. Another concern was that certain sectors of society were not sufficiently benefiting from the PRC's economic development, one example being the wide gap between urban and rural areas. As a result, under former CCP general secretary and President Hu Jintao and Premier Wen Jiabao, the PRC initiated policies to address these issues of equitable distribution of resources, but the outcome was not yet known. More than 40 million farmers were displaced from their land, usually for economic development, contributing to 87,000 demonstrations and riots across China in 2005. For much of the PRC's population, living standards improved very substantially and freedom increased, but political controls remained tight and rural areas poor. According to the U.S. Department of Defense, as many as 3 million Uyghurs and members of other Muslim minority groups are being held in China's internment camps, which are located in the Xinjiang region and which American news reports often label as "concentration camps". The camps were established in the late 2010s under Xi Jinping's administration. Human Rights Watch says that they have been used to indoctrinate Uyghurs and other Muslims since 2017 as part of a "people's war on terror", a policy announced in 2014.
The camps have been criticized by the governments of many countries and human rights organizations for alleged human rights abuses, including mistreatment, rape, and torture, with some of them alleging genocide. The novel coronavirus SARS-CoV-2, which causes the disease COVID-19, was first detected in Wuhan, Hubei in 2019 and led to a global pandemic. See also Chinese emperors family tree Ancient – Early – Middle – Late Chinese exploration Chinese historiography Christianity in China Economic history of China Ethnic groups in Chinese history Foreign relations of imperial China Golden ages of China History of canals in China History of Islam in China History of science and technology in China History of Taiwan History of the Great Wall of China List of Chinese monarchs List of rebellions in China List of recipients of tribute from China List of tributary states of China Military history of China before 1912 Naval history of China Population history of China Timeline of Chinese history Women in ancient and imperial China References Notes Citations Sources Further reading Fairbank, John King and Goldman, Merle. China: A New History. 2nd ed. (Harvard UP, 2006). 640 pp. Gernet, Jacques. A History of Chinese Civilization (1996). One-volume survey. Li, Xiaobing, ed. China at War: An Encyclopedia. (ABC-CLIO, 2012). Mote, Frederick W. Imperial China, 900–1800 (Harvard UP, 1999), 1,136 pp. Authoritative treatment of the Song, Yuan, Ming, and early Qing dynasties. Perkins, Dorothy. Encyclopedia of China: The Essential Reference to China, Its History and Culture (Facts on File, 1999). 662 pp. Roberts, J. A. G. A Concise History of China (Harvard U. Press, 1999). 341 pp. Stanford, Edward. Atlas of the Chinese Empire, containing separate maps of the eighteen provinces of China (2nd ed., 1917). Legible color maps. Wright, David Curtis. History of China (2001) 257 pp. External links China Knowledge, a comprehensive online encyclopedia of China from Ulrich Theobald The Berkshire Encyclopedia of China on Oxford Reference China Rediscovers its Own History, a lengthy lecture on Chinese history given by Yu Ying-shih
5762
https://en.wikipedia.org/wiki/Civil%20engineering
Civil engineering
Civil engineering is a professional engineering discipline that deals with the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, airports, sewage systems, pipelines, structural components of buildings, and railways. Civil engineering is traditionally broken into a number of sub-disciplines. It is considered the second-oldest engineering discipline after military engineering, and it is defined to distinguish non-military engineering from military engineering. Civil engineering can take place in the public sector from municipal public works departments through to federal government agencies, and in the private sector from locally based firms to global Fortune 500 companies. History Civil engineering as a discipline Civil engineering is the application of physical and scientific principles to solving the problems of society, and its history is intricately linked to advances in the understanding of physics and mathematics. Because civil engineering is a broad profession, including several specialized sub-disciplines, its history is linked to knowledge of structures, materials science, geography, geology, soils, hydrology, environmental science, mechanics, project management, and other fields. Throughout ancient and medieval history most architectural design and construction was carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. Knowledge was retained in guilds and seldom supplanted by advances. Structures, roads, and infrastructure that existed were repetitive, and increases in scale were incremental. One of the earliest examples of a scientific approach to physical and mathematical problems applicable to civil engineering is the work of Archimedes in the 3rd century BC, including Archimedes' principle, which underpins our understanding of buoyancy, and practical solutions such as Archimedes' screw. Brahmagupta, an Indian mathematician, used arithmetic in the 7th century AD, based on Hindu-Arabic numerals, for excavation (volume) computations. Civil engineering profession Engineering has been an aspect of life since the beginnings of human existence. The earliest practice of civil engineering may have commenced between 4000 and 2000 BC in ancient Egypt, the Indus Valley civilization, and Mesopotamia (ancient Iraq) when humans started to abandon a nomadic existence, creating a need for the construction of shelter. During this time, transportation became increasingly important, leading to the development of the wheel and sailing. Until modern times there was no clear distinction between civil engineering and architecture, and the terms engineer and architect were mainly geographical variations referring to the same occupation, and were often used interchangeably. The construction of the pyramids in Egypt (c. 2500 BC) was among the first instances of large-scale structural construction. Other ancient historic civil engineering constructions include the Qanat water management system in modern-day Iran (the oldest of which is more than 3,000 years old), the Parthenon by Iktinos in Ancient Greece (447–438 BC), the Appian Way by Roman engineers, the Great Wall of China by General Meng T'ien under orders from Ch'in Emperor Shih Huang Ti, and the stupas constructed in ancient Sri Lanka, such as the Jetavanaramaya, and the extensive irrigation works in Anuradhapura. 
The Romans developed civil structures throughout their empire, including especially aqueducts, insulae, harbors, bridges, dams and roads. In the 18th century, the term civil engineering was coined to incorporate all things civilian as opposed to military engineering. In 1747, the first institution for the teaching of civil engineering, the École Nationale des Ponts et Chaussées, was established in France, and more examples followed in other European countries, such as Spain. The first self-proclaimed civil engineer was John Smeaton, who constructed the Eddystone Lighthouse. In 1771 Smeaton and some of his colleagues formed the Smeatonian Society of Civil Engineers, a group of leaders of the profession who met informally over dinner. Though there was evidence of some technical meetings, it was little more than a social society. In 1818 the Institution of Civil Engineers was founded in London, and in 1820 the eminent engineer Thomas Telford became its first president. The institution received a Royal charter in 1828, formally recognising civil engineering as a profession, and its charter set out a formal definition of the discipline. Civil engineering education The first private college to teach civil engineering in the United States was Norwich University, founded in 1819 by Captain Alden Partridge. The first degree in civil engineering in the United States was awarded by Rensselaer Polytechnic Institute in 1835. The first such degree to be awarded to a woman was granted by Cornell University to Nora Stanton Blatch in 1905. In the UK during the early 19th century, the division between civil engineering and military engineering (served by the Royal Military Academy, Woolwich), coupled with the demands of the Industrial Revolution, spawned new engineering education initiatives: the Class of Civil Engineering and Mining was founded at King's College London in 1838, mainly as a response to the growth of the railway system and the need for more qualified engineers, the private College for Civil Engineers in Putney was established in 1839, and the UK's first Chair of Engineering was established at the University of Glasgow in 1840. Education Civil engineers typically possess an academic degree in civil engineering. The length of study is three to five years, and the completed degree is designated as a bachelor of technology or a bachelor of engineering. The curriculum generally includes classes in physics, mathematics, project management, design and specific topics in civil engineering. After taking basic courses in most sub-disciplines of civil engineering, students move on to specialize in one or more sub-disciplines at advanced levels. While an undergraduate degree (BEng/BSc) normally provides successful students with an industry-accredited qualification, some academic institutions offer post-graduate degrees (MEng/MSc), which allow students to further specialize in their particular area of interest. Practicing engineers In most countries, a bachelor's degree in engineering represents the first step towards professional certification, and a professional body certifies the degree program. After completing a certified degree program, the engineer must satisfy a range of requirements, including work experience and exam requirements, before being certified. 
Once certified, the engineer is designated as a professional engineer (in the United States, Canada and South Africa), a chartered engineer (in most Commonwealth countries), a chartered professional engineer (in Australia and New Zealand), or a European engineer (in most countries of the European Union). There are international agreements between relevant professional bodies to allow engineers to practice across national borders. The benefits of certification vary depending upon location. For example, in the United States and Canada, "only a licensed professional engineer may prepare, sign and seal, and submit engineering plans and drawings to a public authority for approval, or seal engineering work for public and private clients." This requirement is enforced under provincial law such as the Engineers Act in Quebec. No such legislation has been enacted in other countries, including the United Kingdom. In Australia, state licensing of engineers is limited to the state of Queensland. Almost all certifying bodies maintain a code of ethics which all members must abide by. Engineers must obey contract law in their contractual relationships with other parties. In cases where an engineer's work fails, they may be subject to the tort of negligence and, in extreme cases, criminal charges. An engineer's work must also comply with numerous other rules and regulations such as building codes and environmental law. Sub-disciplines There are a number of sub-disciplines within the broad field of civil engineering. General civil engineers work closely with surveyors and specialized civil engineers to design grading, drainage, pavement, water supply, sewer service, dams, and electric and communications supply. General civil engineering is also referred to as site engineering, a branch of civil engineering that primarily focuses on converting a tract of land from one usage to another. Site engineers spend time visiting project sites, meeting with stakeholders, and preparing construction plans. Civil engineers apply the principles of geotechnical engineering, structural engineering, environmental engineering, transportation engineering and construction engineering to residential, commercial, industrial and public works projects of all sizes and levels of construction. Coastal engineering Coastal engineering is concerned with managing coastal areas. In some jurisdictions, the terms sea defense and coastal protection mean defense against flooding and erosion, respectively. Coastal defense is the more traditional term, but coastal management has become popular as well. Construction engineering Construction engineering involves planning and execution, the transportation of materials, and site development based on hydraulic, environmental, structural and geotechnical engineering. As construction firms tend to have higher business risk than other types of civil engineering firms do, construction engineers often engage in more business-like transactions, for example, drafting and reviewing contracts, evaluating logistical operations, and monitoring prices of supplies. Earthquake engineering Earthquake engineering involves designing structures to withstand hazardous earthquake exposures. Earthquake engineering is a sub-discipline of structural engineering. Its main objectives are to understand the interaction of structures with the shaking ground, foresee the consequences of possible earthquakes, and design, construct and maintain structures that perform during an earthquake in compliance with building codes. 
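As an illustration of the kind of back-of-the-envelope calculation that sits behind these objectives, the sketch below idealizes a structure as a single-degree-of-freedom oscillator, estimates its natural period from T = 2π√(m/k), and reads a spectral acceleration from a very simplified design spectrum. This is only a minimal sketch: the spectrum shape and all of the numbers (mass, stiffness, the plateau and one-second ordinates) are assumed placeholders, not values taken from any particular building code or site.

```python
import math

def natural_period(mass_kg: float, stiffness_n_per_m: float) -> float:
    """Fundamental period T = 2*pi*sqrt(m/k) of a structure idealized as a
    single lumped mass on an elastic lateral support."""
    return 2.0 * math.pi * math.sqrt(mass_kg / stiffness_n_per_m)

def spectral_acceleration(period_s: float, plateau_g: float = 1.0, one_second_g: float = 0.6) -> float:
    """Toy design-spectrum shape: a constant-acceleration plateau for short
    periods and a 1/T branch for longer periods (both ordinates in units of g).
    Real code spectra also treat very short and very long periods differently."""
    if period_s <= 0.0:
        raise ValueError("period must be positive")
    return min(plateau_g, one_second_g / period_s)

# Illustrative single-storey frame: 200-tonne roof mass, 50 MN/m lateral stiffness.
mass = 200e3          # kg
stiffness = 50e6      # N/m
T = natural_period(mass, stiffness)
sa = spectral_acceleration(T)            # in g
elastic_base_shear = sa * 9.81 * mass    # V = Sa * W, no response-reduction factor applied
print(f"T = {T:.2f} s, Sa = {sa:.2f} g, elastic base shear = {elastic_base_shear / 1e3:.0f} kN")
```

In practice the elastic demand would be reduced by a code-specified response-modification or behaviour factor and checked against member capacities; the point here is only the period-to-demand chain of reasoning.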
Environmental engineering Environmental engineering is the contemporary term for sanitary engineering, though sanitary engineering traditionally had not included much of the hazardous waste management and environmental remediation work covered by environmental engineering. Public health engineering and environmental health engineering are other terms in use. Environmental engineering deals with the treatment of chemical, biological, or thermal wastes, the purification of water and air, and the remediation of contaminated sites after waste disposal or accidental contamination. Among the topics covered by environmental engineering are pollutant transport, water purification, waste water treatment, air pollution, solid waste treatment, recycling, and hazardous waste management. Environmental engineers administer pollution reduction, green engineering, and industrial ecology. Environmental engineers also compile information on the environmental consequences of proposed actions. Forensic engineering Forensic engineering is the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury or damage to property. The consequences of failure are dealt with by the law of product liability. The field also deals with retracing processes and procedures leading to accidents in the operation of vehicles or machinery. The subject is applied most commonly in civil law cases, although it may be of use in criminal law cases. Generally, the purpose of a forensic engineering investigation is to locate the cause or causes of failure with a view to improving the performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve the investigation of intellectual property claims, especially patents. Geotechnical engineering Geotechnical engineering studies rock and soil supporting civil engineering systems. Knowledge from the fields of soil science, materials science, mechanics, and hydraulics is applied to safely and economically design foundations, retaining walls, and other structures. Environmental efforts to protect groundwater and safely maintain landfills have spawned a new area of research called geo-environmental engineering. Identification of soil properties presents challenges to geotechnical engineers. Boundary conditions are often well defined in other branches of civil engineering, but unlike steel or concrete, the material properties and behavior of soil are difficult to predict due to its variability and the limitations of investigation. Furthermore, soil exhibits nonlinear (stress-dependent) strength, stiffness, and dilatancy (volume change associated with the application of shear stress), making the study of soil mechanics all the more difficult. Geotechnical engineers frequently work with professional geologists, geological engineering professionals and soil scientists. Materials science and engineering Materials science is closely related to civil engineering. It studies the fundamental characteristics of materials, and deals with ceramics such as concrete and asphalt concrete, strong metals such as aluminum and steel, and polymers such as polymethylmethacrylate (PMMA) and carbon-fiber composites. Materials engineering involves protection and prevention (paints and finishes). Alloying combines two types of metals to produce another metal with desired properties. It incorporates elements of applied physics and chemistry. 
With recent media attention on nanoscience and nanotechnology, materials engineering has been at the forefront of academic research. It is also an important part of forensic engineering and failure analysis. Site development and planning Site development, also known as site planning, is focused on the planning and development potential of a site as well as addressing possible impacts from permitting issues and environmental challenges. Structural engineering Structural engineering is concerned with the structural design and structural analysis of buildings, bridges, towers, flyovers (overpasses), tunnels, off shore structures like oil and gas fields in the sea, aerostructure and other structures. This involves identifying the loads which act upon a structure and the forces and stresses which arise within that structure due to those loads, and then designing the structure to successfully support and resist those loads. The loads can be self weight of the structures, other dead load, live loads, moving (wheel) load, wind load, earthquake load, load from temperature change etc. The structural engineer must design structures to be safe for their users and to successfully fulfill the function they are designed for (to be serviceable). Due to the nature of some loading conditions, sub-disciplines within structural engineering have emerged, including wind engineering and earthquake engineering. Design considerations will include strength, stiffness, and stability of the structure when subjected to loads which may be static, such as furniture or self-weight, or dynamic, such as wind, seismic, crowd or vehicle loads, or transitory, such as temporary construction loads or impact. Other considerations include cost, constructibility, safety, aesthetics and sustainability. Surveying Surveying is the process by which a surveyor measures certain dimensions that occur on or near the surface of the Earth. Surveying equipment such as levels and theodolites are used for accurate measurement of angular deviation, horizontal, vertical and slope distances. With computerisation, electronic distance measurement (EDM), total stations, GPS surveying and laser scanning have to a large extent supplanted traditional instruments. Data collected by survey measurement is converted into a graphical representation of the Earth's surface in the form of a map. This information is then used by civil engineers, contractors and realtors to design from, build on, and trade, respectively. Elements of a structure must be sized and positioned in relation to each other and to site boundaries and adjacent structures. Although surveying is a distinct profession with separate qualifications and licensing arrangements, civil engineers are trained in the basics of surveying and mapping, as well as geographic information systems. Surveyors also lay out the routes of railways, tramway tracks, highways, roads, pipelines and streets as well as position other infrastructure, such as harbors, before construction. Land surveying In the United States, Canada, the United Kingdom and most Commonwealth countries land surveying is considered to be a separate and distinct profession. Land surveyors are not considered to be engineers, and have their own professional associations and licensing requirements. 
The services of a licensed land surveyor are generally required for boundary surveys (to establish the boundaries of a parcel using its legal description) and subdivision plans (a plot or map based on a survey of a parcel of land, with boundary lines drawn inside the larger parcel to indicate the creation of new boundary lines and roads), both of which are generally referred to as Cadastral surveying. Construction surveying Construction surveying is generally performed by specialized technicians. Unlike land surveyors, the resulting plan does not have legal status. Construction surveyors perform the following tasks: Surveying existing conditions of the future work site, including topography, existing buildings and infrastructure, and underground infrastructure when possible; "lay-out" or "setting-out": placing reference points and markers that will guide the construction of new structures such as roads or buildings; Verifying the location of structures during construction; As-Built surveying: a survey conducted at the end of the construction project to verify that the work authorized was completed to the specifications set on plans. Transportation engineering Transportation engineering is concerned with moving people and goods efficiently, safely, and in a manner conducive to a vibrant community. This involves specifying, designing, constructing, and maintaining transportation infrastructure which includes streets, canals, highways, rail systems, airports, ports, and mass transit. It includes areas such as transportation design, transportation planning, traffic engineering, some aspects of urban engineering, queueing theory, pavement engineering, Intelligent Transportation System (ITS), and infrastructure management. Municipal or urban engineering Municipal engineering is concerned with municipal infrastructure. This involves specifying, designing, constructing, and maintaining streets, sidewalks, water supply networks, sewers, street lighting, municipal solid waste management and disposal, storage depots for various bulk materials used for maintenance and public works (salt, sand, etc.), public parks and cycling infrastructure. In the case of underground utility networks, it may also include the civil portion (conduits and access chambers) of the local distribution networks of electrical and telecommunications services. It can also include the optimizing of waste collection and bus service networks. Some of these disciplines overlap with other civil engineering specialties, however municipal engineering focuses on the coordination of these infrastructure networks and services, as they are often built simultaneously, and managed by the same municipal authority. Municipal engineers may also design the site civil works for large buildings, industrial plants or campuses (i.e. access roads, parking lots, potable water supply, treatment or pretreatment of waste water, site drainage, etc.) Water resources engineering Water resources engineering is concerned with the collection and management of water (as a natural resource). As a discipline it therefore combines elements of hydrology, environmental science, meteorology, conservation, and resource management. This area of civil engineering relates to the prediction and management of both the quality and the quantity of water in both underground (aquifers) and above ground (lakes, rivers, and streams) resources. 
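A very simple example of the quantity-prediction side of this work is the rational method, which estimates the peak runoff from a small catchment as Q = C·i·A (runoff coefficient times rainfall intensity times drainage area). The Python sketch below is only a minimal illustration; the catchment values are invented for the example, and the method is generally considered appropriate only for small, simple drainage areas.

```python
def rational_method_peak_flow(runoff_coefficient: float,
                              rainfall_intensity_mm_per_hr: float,
                              area_hectares: float) -> float:
    """Peak runoff Q (m^3/s) from the rational method, Q = C * i * A.
    With i in mm/h and A in hectares, the unit conversion works out to
    Q = C * i * A / 360."""
    if not 0.0 <= runoff_coefficient <= 1.0:
        raise ValueError("runoff coefficient C should lie between 0 and 1")
    return runoff_coefficient * rainfall_intensity_mm_per_hr * area_hectares / 360.0

# Illustrative small, mostly paved urban catchment: C = 0.8, 50 mm/h design storm, 12 ha.
q_peak = rational_method_peak_flow(0.8, 50.0, 12.0)
print(f"Estimated peak flow: {q_peak:.2f} m^3/s")  # about 1.33 m^3/s
```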
Water resource engineers analyze and model very small to very large areas of the earth to predict the amount and content of water as it flows into, through, or out of a facility. Although the actual design of the facility may be left to other engineers. Hydraulic engineering is concerned with the flow and conveyance of fluids, principally water. This area of civil engineering is intimately related to the design of pipelines, water supply network, drainage facilities (including bridges, dams, channels, culverts, levees, storm sewers), and canals. Hydraulic engineers design these facilities using the concepts of fluid pressure, fluid statics, fluid dynamics, and hydraulics, among others. Civil engineering systems Civil engineering systems is a discipline that promotes the use of systems thinking to manage complexity and change in civil engineering within its wider public context. It posits that the proper development of civil engineering infrastructure requires a holistic, coherent understanding of the relationships between all of the important factors that contribute to successful projects while at the same time emphasizing the importance of attention to technical detail. Its purpose is to help integrate the entire civil engineering project life cycle from conception, through planning, designing, making, operating to decommissioning. See also Architectural engineering Civil engineering software Engineering drawing Geological Engineering Glossary of civil engineering Index of civil engineering articles List of civil engineers List of engineering branches List of Historic Civil Engineering Landmarks Macro-engineering Railway engineering Site survey Associations American Society of Civil Engineers Canadian Society for Civil Engineering Chartered Institution of Civil Engineering Surveyors Council for the Regulation of Engineering in Nigeria Earthquake Engineering Research Institute Engineers Australia European Federation of National Engineering Associations International Federation of Consulting Engineers Indian Geotechnical Society Institution of Civil Engineers Institution of Structural Engineers Institute of Engineering (Nepal) International Society of Soil Mechanics and Geotechnical Engineering Institution of Engineers, Bangladesh Institution of Engineers (India) Institution of Engineers of Ireland Institute of Transportation Engineers Japan Society of Civil Engineers Pakistan Engineering Council Philippine Institute of Civil Engineers Transportation Research Board References Further reading External links The Institution of Civil Engineers Civil Engineering Software Database The Institution of Civil Engineering Surveyors Civil engineering classes, from MIT OpenCourseWare Engineering disciplines Articles containing video clips
5763
https://en.wikipedia.org/wiki/Cantonese%20%28disambiguation%29
Cantonese (disambiguation)
Cantonese is a language originating in Canton, Guangdong. Cantonese may also refer to: Yue Chinese, Chinese languages that include Cantonese Cantonese cuisine, the cuisine of Guangdong Province Cantonese people, the native people of Guangdong and Guangxi Lingnan culture, the regional culture often referred to as Cantonese culture See also Cantonese Braille, a Cantonese-language version of Braille in Hong Kong Cantopop, Cantonese pop music
5765
https://en.wikipedia.org/wiki/%C3%87atalh%C3%B6y%C3%BCk
Çatalhöyük
Çatalhöyük (also Çatal Höyük and Çatal Hüyük; from Turkish çatal "fork" + höyük "tumulus") is a tell of a very large Neolithic and Chalcolithic proto-city settlement in southern Anatolia, which existed from approximately 7500 BC to 6400 BC, and flourished around 7000 BC. In July 2012, it was inscribed as a UNESCO World Heritage Site. Çatalhöyük is located overlooking the Konya Plain, southeast of the present-day city of Konya (ancient Iconium) in Turkey, approximately 140 km (87 mi) from the twin-coned volcano of Mount Hasan. The eastern settlement forms a mound that would have risen about 20 m (66 ft) above the plain at the time of the latest Neolithic occupation. There is also a smaller settlement mound to the west and a Byzantine settlement a few hundred meters to the east. The prehistoric mound settlements were abandoned before the Bronze Age. A channel of the Çarşamba River once flowed between the two mounds, and the settlement was built on alluvial clay which may have been favorable for early agriculture. Archaeology The site was first excavated by James Mellaart in 1958. He later led a team which further excavated there for four seasons between 1961 and 1965. These excavations revealed this section of Anatolia as a centre of advanced culture in the Neolithic period. Excavation revealed 18 successive layers of buildings signifying various stages of the settlement and eras of history. The bottom layer of buildings can be dated as early as 7100 BC while the top layer is from 5600 BC. Mellaart was banned from Turkey for his involvement in the Dorak affair, in which he published drawings of supposedly important Bronze Age artifacts that later went missing. After this scandal, the site lay idle until 1993, when investigations began under the leadership of Ian Hodder, then at the University of Cambridge. The Hodder-led excavations ended in 2018. Hodder, a former student of Mellaart, chose the site as the first "real world" test of his then-controversial theory of post-processual archaeology. The site has always had a strong research emphasis upon engagement with digital methodologies, driven by the project's experimental and reflexive methodological framework. According to Mickel, Hodder's Çatalhöyük Research Project (ÇRP) "established itself as a site for progressive methodologies - in terms of adaptable and democratized recording, integration of computerized technologies, sampling strategies, and community involvement." New excavations are being directed by Ali Umut Türkcan from Anadolu University. Culture Çatalhöyük was composed entirely of domestic buildings, with no obvious public buildings. While some of the larger ones have rather ornate murals, the purpose of some rooms remains unclear. The population of the eastern mound has been estimated to be around 10,000 people, but the population likely varied over the community's history. An average population of between 5,000 and 7,000 is a reasonable estimate. The settlement was made up of large numbers of buildings clustered together. Households looked to their neighbors for help, trade, and possible marriage for their children. The inhabitants lived in mudbrick houses that were crammed together in an aggregate structure. No footpaths or streets were used between the dwellings, which were clustered in a honeycomb-like maze. Most were accessed by holes in the ceiling and doors on the side of the houses, with doors reached by ladders and stairs. The rooftops were effectively streets. 
The ceiling openings also served as the only source of ventilation, allowing smoke from the houses' open hearths and ovens to escape. Houses had plaster interiors characterized by squared-off timber ladders or steep stairs. These were usually on the south wall of the room, as were cooking hearths and ovens. The main rooms contained raised platforms that may have been used for a range of domestic activities. Typical houses contained two rooms for everyday activity, such as cooking and crafting. All interior walls and platforms were plastered to a smooth finish. Ancillary rooms were used as storage, and were accessed through low openings from main rooms. All rooms were kept scrupulously clean. Archaeologists identified very little rubbish in the buildings, finding middens outside the ruins, with sewage and food waste, as well as significant amounts of ash from burning wood, reeds and animal dung. In good weather, many daily activities may also have taken place on the rooftops, which may have formed a plaza. In later periods, large communal ovens appear to have been built on these rooftops. Over time, houses were renewed by partial demolition and rebuilding on a foundation of rubble, which was how the mound was gradually built up. As many as eighteen levels of settlement have been uncovered. As a part of ritual life, the people of Çatalhöyük buried their dead within the village. Human remains have been found in pits beneath the floors and, especially, beneath hearths, the platforms within the main rooms, and under beds. Bodies were tightly flexed before burial and were often placed in baskets or wound and wrapped in reed mats. Disarticulated bones in some graves suggest that bodies may have been exposed in the open air for a time before the bones were gathered and buried. In some cases, graves were disturbed, and the individual's head removed from the skeleton. These heads may have been used in rituals, as some were found in other areas of the community. In a woman's grave spinning whorls were recovered, and in a man's grave, stone axes. Some skulls were plastered and painted with ochre to recreate faces, a custom more characteristic of Neolithic sites in Syria and at Neolithic Jericho than at sites closer by. Vivid murals and figurines are found throughout the settlement, on interior and exterior walls. Distinctive clay figurines of women, notably the Seated Woman of Çatalhöyük, have been found in the upper levels of the site. Although no identifiable temples have been found, the graves, murals, and figurines suggest that the people of Çatalhöyük had a religion rich in symbols. Rooms with concentrations of these items may have been shrines or public meeting areas. Predominant images include men with erect phalluses, hunting scenes, red images of the now-extinct aurochs (wild cattle) and stags, and vultures swooping down on headless figures. Relief figures are carved on walls, such as lionesses facing one another. Heads of animals, especially of cattle, were mounted on walls. A painting of the village, with the twin mountain peaks of Hasan Dağ in the background, is frequently cited as the world's oldest map, and the first landscape painting. However, some archaeologists question this interpretation. Stephanie Meece, for example, argues that it is more likely a painting of a leopard skin instead of a volcano, and a decorative geometric design instead of a map. Religion A feature of Çatalhöyük is its female figurines. 
Mellaart, the original excavator, argued that these carefully made figurines, carved and molded from marble, blue and brown limestone, schist, calcite, basalt, alabaster, and clay, represented a female deity. Although a male deity existed as well, "statues of a female deity far outnumber those of the male deity, who moreover, does not appear to be represented at all after Level VI". To date, eighteen levels have been identified. These figurines were found primarily in areas Mellaart believed to be shrines. The stately goddess seated on a throne flanked by two lionesses was found in a grain bin, which Mellaart suggests might have been a means of ensuring the harvest or protecting the food supply. Whereas Mellaart excavated nearly two hundred buildings in four seasons, the current excavator, Ian Hodder, spent an entire season excavating one building alone. Hodder and his team, in 2004 and 2005, began to believe that the patterns suggested by Mellaart were false. They found one similar figurine, but the vast majority did not imitate the Mother Goddess style that Mellaart suggested. Instead of a Mother Goddess culture, Hodder points out that the site gives little indication of a matriarchy or patriarchy. In an article in the Turkish Daily News, Hodder is reported as denying that Çatalhöyük was a matriarchal society and quoted as saying "When we look at what they eat and drink and at their social status, we see that men and women had the same social status. There was a balance of power. Another example is the skulls found. If one's social status was of high importance in Çatalhöyük, the body and head were separated after death. The number of female and male skulls found during the excavations is almost equal." In another article in the Hurriyet Daily News, Hodder is reported to say "We have learned that men and women were equally approached". In a report in September 2009 on the discovery of around 2,000 figurines, Professor Lynn Meskell explained that while the original excavations had found only 200 figures, the new excavations had uncovered 2,000 figures, most of which depicted animals; fewer than 5% of the figurines depicted women. Estonian folklorist Uku Masing suggested as early as 1976 that the religion of Çatalhöyük was probably a hunting and gathering religion and that the Mother Goddess figurine did not represent a female deity. He implied that perhaps a longer period of time was needed to develop symbols for agricultural rites. His theory was developed in the paper "Some remarks on the mythology of the people of Catal Hüyük". Economy Çatalhöyük has strong evidence of an egalitarian society, as no houses with distinctive features (belonging to royalty or religious hierarchy, for example) have been found so far. The most recent investigations also reveal little social distinction based on gender, with men and women receiving equivalent nutrition and seeming to have equal social status, as typically found in Paleolithic cultures. Children observed domestic areas. They learned how to perform rituals and how to build or repair houses by watching the adults make statues, beads and other objects. Çatalhöyük's spatial layout may be due to the close kin relations exhibited amongst the people. It can be seen in the layout that the people were "divided into two groups who lived on opposite sides of the town, separated by a gully." 
Furthermore, because no nearby towns were found from which marriage partners could be drawn, "this spatial separation must have marked two intermarrying kinship groups." This would help explain how a settlement so early on would become so large. In the upper levels of the site, it becomes apparent that the people of Çatalhöyük were honing skills in agriculture and the domestication of animals. Female figurines have been found within bins used for storage of cereals, such as wheat and barley, and the figurines are presumed to be of a deity protecting the grain. Peas were also grown, and almonds, pistachios and fruit were harvested from trees in the surrounding hills. Sheep were domesticated and evidence suggests the beginning of cattle domestication as well. However, hunting continued to be a major source of food for the community. Pottery and obsidian tools appear to have been major industries; obsidian tools were probably both used and also traded for items such as Mediterranean sea shells and flint from Syria. Noting the lack of hierarchy and economic inequality, historian and anti-capitalist author Murray Bookchin has argued that Çatalhöyük was an early example of anarcho-communism. Conversely, a 2014 paper argues that the picture of Çatalhöyük is more complex and that while there seemed to have been an egalitarian distribution of cooking tools and some stone tools, unbroken quern-stones and storage units were more unevenly distributed. Private property existed but shared tools also existed. It was also suggested that Çatalhöyük was becoming less egalitarian, with greater inter-generational wealth transmission. See also Boncuklu Höyük Cities of the ancient Near East Cucuteni–Trypillian culture Göbekli Tepe Kamyana Mohyla List of largest cities throughout history List of Stone Age art Matriarchy Neolithic Revolution Old Europe (archaeology) Sacred bull Venus figurines References Sources Bailey, Douglass. Prehistoric Figurines: Representation and Corporeality in the Neolithic. New York: Routledge, 2005 (hardcover, ; paperback, ). Balter, Michael. The Goddess and the Bull: Çatalhöyük: An Archaeological Journey to the Dawn of Civilization. New York: Free Press, 2004 (hardcover, ); Walnut Creek, CA: Left Coast Press, 2006 (paperback, ). A highly condensed version was published in The Smithsonian Magazine, May 2005. Dural, Sadrettin. "Protecting Catalhoyuk: Memoir of an Archaeological Site Guard." Contributions by Ian Hodder. Translated by Duygu Camurcuoglu Cleere. Walnut Creek, CA: Left Coast Press, 2007. . Hodder, Ian. "Women and Men at Çatalhöyük," Scientific American Magazine, January 2004 (update V15:1, 2005). Hodder, I. (2014). "Çatalhöyük excavations: the 2000-2008 seasons.", British Institute at Ankara, Monumenta Archaeologica 29, Hodder, Ian. Twenty-Five Years of Research at Çatalhöyük, Near Eastern Archaeology; Chicago, vol. 83, iss. 2, pp. 72–29, June 2020 Hodder, Ian. The Leopard's Tale: Revealing the Mysteries of Çatalhöyük. London; New York: Thames & Hudson, 2006 (hardcover, ). (The UK title of this work is Çatalhöyük: The Leopard's Tale.) Hodder, Ian; Bogaard, Amy; Engel, Claudia; Pearson, Jessica; Wolfhagen, Jesse., "Spatial autocorrelation analysis and the social organisation of crop and herd management at Çatalhöyük", Anatolian Studies, London, vol. 72, pp. 1–15, 2022 Mallett, Marla, "The Goddess from Anatolia: An Updated View of the Catak Huyuk Controversy," in Oriental Rug Review, Vol. XIII, No. 2 (December 1992/January 1993). Mellaart, James. 
Çatal Hüyük: A Neolithic Town in Anatolia. London: Thames & Hudson, 1967; New York: McGraw-Hill Book Company, 1967. Online at archive.org On the Surface: Çatalhöyük 1993–95, edited by Ian Hodder. Cambridge: McDonald Institute for Archaeological Research and British Institute of Archaeology at Ankara, 1996 (). Todd, Ian A. Çatal Hüyük in Perspective. Menlo Park, CA: Cummings Pub. Co., 1976 (; ). External links What we learned from 25 Years of Research at Catalhoyuk - Ian Hodder - Oriental Institute lecture Dec 4, 2019 Çatalhöyük — Excavations of a Neolithic Anatolian Höyük, Çatalhöyük excavation official website Çatalhöyük photos The First Cities: Why Settle Down? The Mystery of Communities, by Michael Balter, Çatalhöyük excavation official biographer Interview with Ian Hodder March 201 "Ian Hodder: Çatalhöyük, Religion & Templeton's 25%" 1958 archaeological discoveries Anatolia Archaeological discoveries in Turkey Archaeological museums in Turkey Archaeological sites in Central Anatolia Archaeological sites of prehistoric Anatolia Buildings and structures in Konya Province Chalcolithic sites of Asia Former populated places in Turkey Megasites Museums in Konya Province Neolithic settlements Neolithic sites of Asia Populated places established in the 8th millennium BC
5766
https://en.wikipedia.org/wiki/Clement%20Attlee
Clement Attlee
Clement Richard Attlee, 1st Earl Attlee (3 January 1883 – 8 October 1967), was a British statesman and Labour Party politician who served as Prime Minister of the United Kingdom from 1945 to 1951 and Leader of the Labour Party from 1935 to 1955. He was Deputy Prime Minister during the wartime coalition government under Winston Churchill, and served twice as Leader of the Opposition, from 1935 to 1940 and from 1951 to 1955. Attlee remains the longest-serving Labour leader and is widely considered by historians and members of the public, through various polls, to be one of the greatest Prime Ministers of the United Kingdom. Attlee was born into an upper-middle-class family, the son of a wealthy London solicitor. After attending Haileybury College and the University of Oxford, he practised as a barrister. The volunteer work he carried out in London's East End exposed him to poverty, and his political views shifted leftwards thereafter. He joined the Independent Labour Party, gave up his legal career, and began lecturing at the London School of Economics, his work briefly interrupted by service in the First World War. In 1919, he became mayor of Stepney and in 1922 was elected as the Member of Parliament for Limehouse. Attlee served in the first Labour minority government led by Ramsay MacDonald in 1924, and then joined the Cabinet during MacDonald's second minority (1929–1931). After retaining his seat in Labour's landslide defeat of 1931, he became the party's Deputy Leader. Elected Leader of the Labour Party in 1935, and at first advocating pacificism and opposing re-armament, he became a critic of Neville Chamberlain's policy of appeasement in the lead-up to the Second World War. Attlee took Labour into the wartime coalition government in 1940 and served under Winston Churchill, initially as Lord Privy Seal and then as Deputy Prime Minister from 1942. As the European front of WWII reached its conclusion, the war cabinet headed by Churchill was dissolved and elections were scheduled to be held. The Labour Party, led by Attlee, won a landslide victory in the 1945 general election, on their post-war recovery platform. Following the election, Attlee formed the first Labour majority government. His government's Keynesian approach to economic management aimed to maintain full employment, a mixed economy and a greatly enlarged system of social services provided by the state. To this end, it undertook the nationalisation of public utilities and major industries, and implemented wide-ranging social reforms, including the passing of the National Insurance Act 1946 and National Assistance Act 1948, the formation of the National Health Service (NHS) in 1948, and the enlargement of public subsidies for council house building. His government also reformed trade union legislation, working practices and children's services; it created the National Parks system, passed the New Towns Act 1946 and established the town and country planning system. The Attlee government proved itself to be a radical, reforming government. From 1945 to 1948, over 200 public Acts of Parliament were passed, with eight major pieces of legislation placed on the statute book in 1946 alone. In foreign policy, Attlee focused on decolonization efforts, which he delegated to Ernest Bevin, but personally oversaw the partition of India (1947), the independence of Burma and Ceylon, and the dissolution of the British mandates of Palestine and Transjordan. 
Attlee and Bevin encouraged the United States to take a vigorous role in the Cold War; unable to afford military intervention in Greece during its civil war, he called on Washington to counter the communists there. The strategy of containment was formalized between the two nations through the Truman Doctrine. He supported the Marshall Plan to rebuild Western Europe with American money and, in 1949, promoted the NATO military alliance against the Soviet bloc. After leading Labour to a narrow victory at the 1950 general election, he sent British troops to fight alongside South Korea in the Korean War. Attlee had inherited a country close to bankruptcy following the Second World War and beset by food, housing and resource shortages; despite his social reforms and economic programme, these problems persisted throughout his premiership, alongside recurrent currency crises and dependence on US aid. His party was narrowly defeated by the Conservatives in the 1951 general election, despite winning the most votes. He continued as Labour leader but retired after losing the 1955 election and was elevated to the House of Lords, where he served until his death in 1967. In public, he was modest and unassuming, but behind the scenes his depth of knowledge, quiet demeanour, objectivity and pragmatism proved decisive. He is often ranked as one of the greatest British prime ministers. In 2004, he was voted the most successful British Prime Minister of the 20th century by a poll of 139 academics. The majority of those responses singled out the Attlee government's welfare state reforms and the creation of the NHS as the key 20th century domestic policy achievements. He is also commended for continuing the 'Special Relationship' with the US and active involvement in NATO. Early life Attlee was born on 3 January 1883 in Putney, Surrey (now part of London), into an upper middle-class family, the seventh of eight children. His father was Henry Attlee (1841–1908), a solicitor, and his mother was Ellen Bravery Watson (1847–1920), daughter of Thomas Simons Watson, secretary for the Art Union of London. His parents were "committed Anglicans" who read prayers and psalms each morning at breakfast. Attlee grew up in a two-storey villa with a large garden and tennis court, staffed by three servants and a gardener. His father, a political Liberal, had inherited family interests in milling and brewing, and became a senior partner in the law firm of Druces, also serving a term as president of the Law Society of England and Wales. In 1898 he purchased an estate, Comaques, in Thorpe-le-Soken, Essex. At the age of nine, Attlee was sent to board at Northaw Place, a boys' preparatory school in Hertfordshire. In 1896 he followed his brothers to Haileybury College, where he was a middling student. He was influenced by the Darwinist views of his housemaster Frederick Webb Headley, and in 1899 he published an attack on striking London cab-drivers in the school magazine, predicting they would soon have to "beg for their fares". In 1901, Attlee went up to University College, Oxford, reading modern history. He and his brother Tom "were given a generous stipend by their father and embraced the university lifestyle—rowing, reading and socializing". He was later described by a tutor as "level-headed, industrious, dependable man with no brilliance of style ... but with excellent sound judgement". 
At university he had little interest in politics or economics, later describing his views at this time as "good old fashioned imperialist conservative". He graduated Bachelor of Arts in 1904 with second-class honours. Attlee then trained as a barrister at the Inner Temple and was called to the bar in March 1906. He worked for a time at his father's law firm Druces and Attlee but did not enjoy the work, and had no particular ambition to succeed in the legal profession. He also played football for non-League club Fleet. Attlee's father died in 1908, leaving an estate valued for probate at £75,394. Early career In 1906, he became a volunteer at Haileybury House, a charitable club for working-class boys in Stepney in the East End of London run by his old school, and from 1907 to 1909 he served as the club's manager. Until then, his political views had been more conservative. However, after his shock at the poverty and deprivation he saw while working with the slum children, he came to the view that private charity would never be sufficient to alleviate poverty and that only direct action and income redistribution by the state would have any serious effect. This sparked a process that caused him to convert to socialism. He subsequently joined the Independent Labour Party (ILP) in 1908 and became active in local politics. In 1909, he stood unsuccessfully at his first election, as an ILP candidate for Stepney Borough Council. He also worked briefly as a secretary for Beatrice Webb in 1909, before becoming a secretary for Toynbee Hall. He worked on Webb's campaign to popularise the Minority Report, being very active in Fabian Society circles, going round to visit many political societies—Liberal, Conservative and socialist—to explain and popularise its ideas, as well as recruiting lecturers deemed suitable to work on the campaign. In 1911, he was employed by the Government as an "official explainer"—touring the country to explain Chancellor of the Exchequer David Lloyd George's National Insurance Act. He spent the summer of that year touring Essex and Somerset on a bicycle, explaining the Act at public meetings. A year later, he became a lecturer at the London School of Economics, teaching social science and public administration. Military service Following the outbreak of the First World War in August 1914, Attlee applied to join the British Army. Initially his application was turned down, as his age of 31 was seen as being too old; however, he was eventually commissioned as a temporary lieutenant in the 6th (Service) Battalion, South Lancashire Regiment, on 30 September 1914. On 9 February 1915 he was promoted to captain, and on 14 March was appointed battalion adjutant. The 6th South Lancashires were part of the 38th Brigade of the 13th (Western) Division, which served in the Gallipoli campaign in Turkey. Attlee's decision to fight caused a rift between him and his older brother Tom, who, as a conscientious objector, spent much of the war in prison. After a period spent fighting in Gallipoli, Attlee collapsed after falling ill with dysentery and was put on a ship bound for England to recover. When he woke up he wanted to get back to action as soon as possible, and asked to be let off the ship in Malta, where he stayed in hospital in order to recover. His hospitalisation coincided with the Battle of Sari Bair, which saw a large number of his comrades killed. 
Upon returning to action, he was informed that his company had been chosen to hold the final lines during the evacuation of Suvla. As such, he was the penultimate man to be evacuated from Suvla Bay, the last being General Stanley Maude. The Gallipoli Campaign had been engineered by the First Lord of the Admiralty, Winston Churchill. Although it was unsuccessful, Attlee believed that it was a bold strategy which could have been successful if it had been better implemented on the ground. This led to an admiration for Churchill as a military strategist, something which would make their working relationship in later years productive. He later served in the Mesopotamian campaign in what is now Iraq, where in April 1916 he was badly wounded, being hit in the leg by shrapnel from friendly fire while storming an enemy trench during the Battle of Hanna. The battle was an unsuccessful attempt to relieve the Siege of Kut, and many of Attlee's fellow soldiers were also wounded or killed. He was sent first to India, and then back to the UK to recover. On 18 December 1916 he was transferred to the Heavy Section of the Machine Gun Corps, and on 1 March 1917 he was promoted to the temporary rank of major, leading him to be known as "Major Attlee" for much of the inter-war period. He would spend most of 1917 training soldiers at various locations in England. From 2 to 9 July 1917, he was the temporary commanding officer (CO) of the newly formed L (later 10th) Battalion, the Tank Corps at Bovington Camp, Dorset. From 9 July, he assumed command of the 30th Company of the same battalion; however, he did not deploy to France with it in December 1917, as he was transferred back to the South Lancashire Regiment on 28 November. After fully recovering from his injuries, he was sent to France in June 1918 to serve on the Western Front for the final months of the war. After being discharged from the Army in January 1919, he returned to Stepney and to his old job lecturing part-time at the London School of Economics. Early political career Local politics Attlee returned to local politics in the immediate post-war period, becoming mayor of the Metropolitan Borough of Stepney, one of London's most deprived inner-city boroughs, in 1919. During his time as mayor, the council undertook action to tackle slum landlords who charged high rents but refused to spend money on keeping their property in habitable condition. The council served and enforced legal orders on homeowners to repair their property. It also appointed health visitors and sanitary inspectors, reducing the infant mortality rate, and took action to find work for returning unemployed ex-servicemen. In 1920, while mayor, he wrote his first book, The Social Worker, which set out many of the principles that informed his political philosophy and that were to underpin the actions of his government in later years. The book attacked the idea that looking after the poor could be left to voluntary action. He wrote that: In a civilised community, although it may be composed of self-reliant individuals, there will be some persons who will be unable at some period of their lives to look after themselves, and the question of what is to happen to them may be solved in three ways – they may be neglected, they may be cared for by the organised community as of right, or they may be left to the goodwill of individuals in the community. [...] Charity is only possible without loss of dignity between equals. 
A right established by law, such as that to an old age pension, is less galling than an allowance made by a rich man to a poor one, dependent on his view of the recipient's character, and terminable at his caprice. In 1921, George Lansbury, the Labour mayor of the neighbouring borough of Poplar, and future Labour Party leader, launched the Poplar Rates Rebellion; a campaign of disobedience seeking to equalise the poor relief burden across all the London boroughs. Attlee, who was a personal friend of Lansbury, strongly supported this. However, Herbert Morrison, the Labour mayor of nearby Hackney, and one of the main figures in the London Labour Party, strongly denounced Lansbury and the rebellion. During this period, Attlee developed a lifelong dislike of Morrison. Member of Parliament At the 1922 general election, Attlee became the Member of Parliament (MP) for the constituency of Limehouse in Stepney. At the time, he admired Ramsay MacDonald and helped him get elected as Labour Party leader at the 1922 leadership election. He served as MacDonald's Parliamentary Private Secretary for the brief 1922 parliament. His first taste of ministerial office came in 1924, when he served as Under-Secretary of State for War in the short-lived first Labour government, led by MacDonald. Attlee opposed the 1926 General Strike, believing that strike action should not be used as a political weapon. However, when it happened, he did not attempt to undermine it. At the time of the strike, he was chairman of the Stepney Borough Electricity Committee. He negotiated a deal with the Electrical Trade Union so that they would continue to supply power to hospitals, but would end supplies to factories. One firm, Scammell and Nephew Ltd, took a civil action against Attlee and the other Labour members of the committee (although not against the Conservative members who had also supported this). The court found against Attlee and his fellow councillors and they were ordered to pay £300 damages. The decision was later reversed on appeal, but the financial problems caused by the episode almost forced Attlee out of politics. In 1927, he was appointed a member of the multi-party Simon Commission, a royal commission set up to examine the possibility of granting self-rule to India. Due to the time he needed to devote to the commission, and contrary to a promise MacDonald made to Attlee to induce him to serve on the commission, he was not initially offered a ministerial post in the Second Labour Government, which entered office after the 1929 general election. Attlee's service on the Commission equipped him with a thorough exposure to India and many of its political leaders. By 1933 he argued that British rule was alien to India and was unable to make the social and economic reforms necessary for India's progress. He became the British leader most sympathetic to Indian independence (as a dominion), preparing him for his role in deciding on independence in 1947. In May 1930, Labour MP Oswald Mosley left the party after its rejection of his proposals for solving the unemployment problem, and Attlee was given Mosley's post of Chancellor of the Duchy of Lancaster. In March 1931, he became Postmaster General, a post he held for five months until August, when the Labour government fell, after failing to agree on how to tackle the financial crisis of the Great Depression. That month MacDonald and a few of his allies formed a National Government with the Conservatives and Liberals, leading them to be expelled from Labour. 
MacDonald offered Attlee a job in the National Government, but he turned down the offer and opted to stay loyal to the main Labour Party. After Ramsay MacDonald formed the National Government, Labour was deeply divided. Attlee had long been close to MacDonald and now felt betrayed, as did most Labour politicians. During the course of the second Labour government, Attlee had become increasingly disillusioned with MacDonald, whom he came to regard as vain and incompetent, and of whom he later wrote scathingly in his autobiography. He would write:

In the old days I had looked up to MacDonald as a great leader. He had a fine presence and great oratorical power. The unpopular line which he took during the First World War seemed to mark him as a man of character. Despite his mishandling of the Red Letter episode, I had not appreciated his defects until he took office a second time. I then realised his reluctance to take positive action and noted with dismay his increasing vanity and snobbery, while his habit of telling me, a junior Minister, the poor opinion he had of all his Cabinet colleagues made an unpleasant impression. I had not, however, expected that he would perpetrate the greatest betrayal in the political history of this country ... The shock to the Party was very great, especially to the loyal workers of the rank-and-file who had made great sacrifices for these men.

Deputy Leader

The 1931 general election held later that year was a disaster for the Labour Party, which lost over 200 seats, returning only 52 MPs to Parliament. The vast majority of the party's senior figures, including the Leader Arthur Henderson, lost their seats. Attlee, however, narrowly retained his Limehouse seat, with his majority being slashed from 7,288 to just 551. He was one of only three Labour MPs who had experience of government to retain their seats, along with George Lansbury and Stafford Cripps. Accordingly, Lansbury was elected Leader unopposed with Attlee as his deputy. Most of the remaining Labour MPs after 1931 were elderly trade union officials who could not contribute much to debates; Lansbury was in his 70s, and Stafford Cripps, another main figure of the Labour front bench who had entered Parliament in 1931, was inexperienced. As one of the most capable and experienced of the remaining Labour MPs, Attlee therefore shouldered much of the burden of opposing the National Government in the years 1931–35. During this time he had to extend his knowledge of subjects which he had not studied in any depth before, such as finance and foreign affairs, in order to provide an effective opposition to the government. Attlee effectively served as acting leader for nine months from December 1933, after Lansbury fractured his thigh in an accident, which raised Attlee's public profile considerably. It was during this period, however, that personal financial problems almost forced Attlee to quit politics altogether. His wife had become ill, and at that time there was no separate salary for the Leader of the Opposition. On the verge of resigning from Parliament, he was persuaded to stay by Stafford Cripps, a wealthy socialist, who agreed to make a donation to party funds to pay him an additional salary until Lansbury could take over again. During 1932–33 Attlee flirted with, and then drew back from, radicalism, influenced by Stafford Cripps, who was then on the radical wing of the party.
He was briefly a member of the Socialist League, which had been formed by former Independent Labour Party (ILP) members, who opposed the ILP's disaffiliation from the main Labour Party in 1932. At one point he agreed with the proposition put forward by Cripps that gradual reform was inadequate and that a socialist government would have to pass an emergency powers act, allowing it to rule by decree to overcome any opposition by vested interests until it was safe to restore democracy. He admired Oliver Cromwell's strong-armed rule and use of major generals to control England. After looking more closely at Hitler, Mussolini, Stalin, and even his former colleague Oswald Mosley, leader of the new blackshirt fascist movement in Britain, Attlee retreated from his radicalism, and distanced himself from the League, and argued instead that the Labour Party must adhere to constitutional methods and stand forthright for democracy and against totalitarianism of either the left or right. He always supported the crown, and as Prime Minister was close to King George VI. Leader of the Opposition George Lansbury, a committed pacifist, resigned as the Leader of the Labour Party at the 1935 Party Conference on 8 October, after delegates voted in favour of sanctions against Italy for its aggression against Abyssinia. Lansbury had strongly opposed the policy, and felt unable to continue leading the party. Taking advantage of the disarray in the Labour Party, the Prime Minister Stanley Baldwin announced on 19 October that a general election would be held on 14 November. With no time for a leadership contest, the party agreed that Attlee should serve as interim leader, on the understanding that a leadership election would be held after the general election. Attlee therefore led Labour through the 1935 election, which saw the party stage a partial comeback from its disastrous 1931 performance, winning 38 per cent of the vote, the highest share Labour had won up to that point, and gaining over one hundred seats. Attlee stood in the subsequent leadership election, held soon afterward, where he was opposed by Herbert Morrison, who had just re-entered parliament in the recent election, and Arthur Greenwood: Morrison was seen as the favourite, but was distrusted by many sections of the party, especially the left wing. Arthur Greenwood meanwhile was a popular figure in the party; however, his leadership bid was severely hampered by his alcohol problem. Attlee was able to come across as a competent and unifying figure, particularly having already led the party through a general election. He went on to come first in both the first and second ballots, formally being elected Leader of the Labour Party on 3 December 1935. Throughout the 1920s and most of the 1930s, the Labour Party's official policy had been to oppose rearmament, instead supporting internationalism and collective security under the League of Nations. At the 1934 Labour Party Conference, Attlee declared that, "We have absolutely abandoned any idea of nationalist loyalty. We are deliberately putting a world order before our loyalty to our own country. We say we want to see put on the statute book something which will make our people citizens of the world before they are citizens of this country". During a debate on defence in Commons a year later, Attlee said "We are told (in the White Paper) that there is danger against which we have to guard ourselves. We do not think you can do it by national defence. 
We think you can only do it by moving forward to a new world. A world of law, the abolition of national armaments with a world force and a world economic system. I shall be told that that is quite impossible". Shortly after those comments, Adolf Hitler proclaimed that German rearmament offered no threat to world peace. Attlee responded the next day, noting that Hitler's speech, although containing unfavourable references to the Soviet Union, created "A chance to call a halt in the armaments race ... We do not think that our answer to Herr Hitler should be just rearmament. We are in an age of rearmaments, but we on this side cannot accept that position". Attlee played little part in the events leading up to the abdication of Edward VIII: despite Baldwin's threat to step down if Edward attempted to remain on the throne after marrying Wallis Simpson, Labour was widely accepted not to be a viable alternative government, owing to the National Government's overwhelming majority in the Commons. Attlee, along with Liberal leader Archibald Sinclair, was eventually consulted by Baldwin on 24 November 1936, and Attlee agreed with both Baldwin and Sinclair that Edward could not remain on the throne, firmly eliminating any prospect of an alternative government forming were Baldwin to resign. In April 1936, the Chancellor of the Exchequer, Neville Chamberlain, introduced a Budget which increased the amount spent on the armed forces. Attlee made a radio broadcast in opposition to it. In June 1936, the Conservative MP Duff Cooper called for an Anglo-French alliance against possible German aggression and called for all parties to support one. Attlee condemned this: "We say that any suggestion of an alliance of this kind—an alliance in which one country is bound to another, right or wrong, by some overwhelming necessity—is contrary to the spirit of the League of Nations, is contrary to the Covenant, is contrary to Locarno, is contrary to the obligations which this country has undertaken, and is contrary to the professed policy of this Government". At the Labour Party conference at Edinburgh in October, Attlee reiterated that "There can be no question of our supporting the Government in its rearmament policy". However, with the rising threat from Nazi Germany, and the ineffectiveness of the League of Nations, this policy eventually lost credibility. By 1937, Labour had jettisoned its pacifist position and came to support rearmament and oppose Neville Chamberlain's policy of appeasement. At the end of 1937, Attlee and a party of three Labour MPs visited Spain, where they met the British Battalion of the International Brigades fighting in the Spanish Civil War. One of the companies was named the "Major Attlee Company" in his honour. Attlee was supportive of the Republican government, and at the 1937 Labour conference moved the wider Labour Party towards opposing what he considered the "farce" of the Non-Intervention Committee organised by the British and French governments. In the House of Commons, Attlee stated "I cannot understand the delusion that if Franco wins with Italian and German aid, he will immediately become independent. I think it is a ridiculous proposition." Dalton, the Labour Party's spokesman on foreign policy, also thought that Franco would ally with Germany and Italy. However, Franco's subsequent behaviour proved it was not such a ridiculous proposition.
As Dalton later acknowledged, Franco skilfully maintained Spanish neutrality, whereas Hitler would have occupied Spain if Franco had lost the Civil War. In 1938, Attlee opposed the Munich Agreement, in which Chamberlain negotiated with Hitler to give Germany the German-speaking parts of Czechoslovakia, the Sudetenland: We all feel relief that war has not come this time... we cannot, however, feel that peace has been established, but that we have nothing but an armistice in a state of war. We have been unable to go in for care-free rejoicing. We have felt that we are in the midst of a tragedy... [and] humiliation. This has not been a victory for reason and humanity. It has been a victory for brute force. At every stage of the proceedings there have been time limits laid down... [the] terms laid down as ultimata. We have seen to-day a gallant, civilised and democratic people betrayed and handed over to a ruthless despotism... The events of these last few days constitute one of the greatest diplomatic defeats that this country and France have ever sustained. There can be no doubt that it is a tremendous victory for Herr Hitler. Without firing a shot, by the mere display of military force, he has achieved a dominating position in Europe which Germany failed to win after four years of war. He has overturned the balance of power in Europe... [and] destroyed the last fortress of democracy in Eastern Europe which stood in the way of his ambition. He has opened his way to the food, the oil and the resources which he requires in order to consolidate his military power, and he has successfully defeated and reduced to impotence the forces that might have stood against the rule of violence. [...] The cause [of the crisis which we have undergone] was not the existence of minorities in Czechoslovakia; it was not that the position of the Sudeten Germans had become intolerable. It was not the wonderful principle of self-determination. It was because Herr Hitler had decided that the time was ripe for another step forward in his design to dominate Europe... The minorities question is no new one. [...] [And] short of a drastic and entire reshuffling of these populations there is no possible solution to the problem of minorities in Europe except toleration. However, the new Czechoslovakian state did not provide equal rights to the Slovaks and Sudeten Germans, with the historian Arnold J. Toynbee already having noted that "for the Germans, Magyars and Poles, who account between them for more than one quarter of the whole population, the present regime in Czechoslovakia is not essentially different from the regimes in the surrounding countries". Anthony Eden in the Munich debate acknowledged that there had been "discrimination, even severe discrimination" against the Sudeten Germans. In 1937, Attlee wrote a book entitled The Labour Party in Perspective that sold fairly well in which he set out some of his views. He argued that there was no point in Labour compromising on its socialist principles in the belief that this would achieve electoral success. He wrote: "I find that the proposition often reduces itself to this – that if the Labour Party would drop its socialism and adopt a Liberal platform, many Liberals would be pleased to support it. I have heard it said more than once that if Labour would only drop its policy of nationalisation everyone would be pleased, and it would soon obtain a majority. I am convinced it would be fatal for the Labour Party." 
He also wrote that there was no point in "watering down Labour's socialist creed in order to attract new adherents who cannot accept the full socialist faith. On the contrary, I believe that it is only a clear and bold policy that will attract this support". In the late 1930s, Attlee sponsored a Jewish mother and her two children, enabling them to leave Germany in 1939 and move to the UK. On arriving in Britain, Attlee invited one of the children into his home in Stanmore, north-west London, where he stayed for several months. Deputy Prime Minister Attlee remained as Leader of the Opposition when the Second World War broke out in September 1939. The ensuing disastrous Norwegian campaign would result in a motion of no confidence in Neville Chamberlain. Although Chamberlain survived this, the reputation of his administration was so badly and publicly damaged that it became clear a coalition government would be necessary. Even if Attlee had personally been prepared to serve under Chamberlain in an emergency coalition government, he would never have been able to carry Labour with him. Consequently, Chamberlain tendered his resignation, and Labour and the Conservatives entered a coalition government led by Winston Churchill on 10 May 1940, with Attlee joining the Cabinet as Lord Privy Seal on 12 May. Attlee and Churchill quickly agreed that the War Cabinet would consist of three Conservatives (initially Churchill, Chamberlain and Lord Halifax) and two Labour members (initially himself and Arthur Greenwood) and that Labour should have slightly more than one third of the posts in the coalition government. Attlee and Greenwood played a vital role in supporting Churchill during a series of War Cabinet debates over whether or not to negotiate peace terms with Hitler following the Fall of France in May 1940; both supported Churchill and gave him the majority he needed in the War Cabinet to continue Britain's resistance. Only Attlee and Churchill remained in the War Cabinet from the formation of the Government of National Unity in May 1940 through to the election in May 1945. Attlee was initially the Lord Privy Seal, before becoming Britain's first ever Deputy Prime Minister in 1942, as well as becoming the Dominions Secretary and Lord President of the Council on 28 September 1943. Attlee himself played a generally low key but vital role in the wartime government, working behind the scenes and in committees to ensure the smooth operation of government. In the coalition government, three inter-connected committees effectively ran the country. Churchill chaired the first two, the War cabinet and the Defence Committee, with Attlee deputising for him in these, and answering for the government in Parliament when Churchill was absent. Attlee himself instituted, and later chaired the third body, the Lord President's Committee, which was responsible for overseeing domestic affairs. As Churchill was most concerned with overseeing the war effort, this arrangement suited both men. Attlee himself had largely been responsible for creating these arrangements with Churchill's backing, streamlining the machinery of government and abolishing many committees. He also acted as a conciliator in the government, smoothing over tensions which frequently arose between Labour and Conservative Ministers. 
Many Labour activists were baffled by the top leadership role for a man they regarded as having little charisma; Beatrice Webb wrote in her diary in early 1940:

He looked and spoke like an insignificant elderly clerk, without distinction in the voice, manner or substance of his discourse. To realise that this little nonentity is the Parliamentary Leader of the Labour Party ... and presumably the future P.M. [Prime Minister] is pitiable.

1945 election

Following the defeat of Nazi Germany and the end of the War in Europe in May 1945, Attlee and Churchill favoured the coalition government remaining in place until Japan had been defeated. However, Herbert Morrison made it clear that the Labour Party would not be willing to accept this, and Churchill was forced to tender his resignation as Prime Minister and call an immediate election. The war had set in motion profound social changes within Britain and had ultimately led to a widespread popular desire for social reform. This mood was epitomised in the Beveridge Report of 1942, written by the Liberal economist William Beveridge. The Report assumed that the maintenance of full employment would be the aim of post-war governments, and that this would provide the basis for the welfare state. Immediately upon its release, it sold hundreds of thousands of copies. All major parties committed themselves to fulfilling this aim, but most historians say that Attlee's Labour Party was seen by the electorate as the party most likely to follow it through. Labour campaigned on the theme of "Let Us Face the Future", positioning themselves as the party best placed to rebuild Britain following the war, and were widely viewed as having run a strong and positive campaign, while the Conservative campaign centred entirely around Churchill. Opinion polls indicated a strong Labour lead, but polling was then viewed as a novelty which had not proven its worth, and most commentators expected that Churchill's prestige and status as a "war hero" would ensure a comfortable Conservative victory. Before polling day, The Manchester Guardian surmised that "the chances of Labour sweeping the country and obtaining a clear majority ... are pretty remote". The News of the World predicted a working Conservative majority, while in Glasgow a pundit forecast the result as Conservatives 360, Labour 220, Others 60. Churchill, however, made some costly errors during the campaign. In particular, his suggestion during one radio broadcast that a future Labour Government would require "some form of a gestapo" to implement its policies was widely regarded as being in very bad taste and massively backfired. When the results of the election were announced on 26 July, they came as a surprise to most, including Attlee himself. Labour had won power by a huge landslide, winning 47.7 per cent of the vote to the Conservatives' 36 per cent. This gave them 393 seats in the House of Commons, a working majority of 146. This was the first time in history that the Labour Party had won a majority in Parliament. When Attlee went to see King George VI at Buckingham Palace to be appointed Prime Minister, the notoriously laconic Attlee and the famously tongue-tied King stood in silence; Attlee finally volunteered the remark, "I've won the election". The King replied "I know. I heard it on the Six O'Clock News".
Prime Minister

Domestic policy

Francis (1995) argues there was consensus both in Labour's National Executive Committee and at party conferences on a definition of socialism that stressed moral improvement as well as material improvement. The Attlee government was committed to rebuilding British society as an ethical commonwealth, using public ownership and controls to abolish extremes of wealth and poverty. Labour's ideology contrasted sharply with the contemporary Conservative Party's defence of individualism, inherited privileges, and income inequality. On 5 July 1948, Clement Attlee replied to a letter dated 22 June from James Murray and ten other MPs who raised concerns about West Indians who arrived on board the Empire Windrush. As for the prime minister himself, he was not much focused on economic policy, letting others handle the issues.

Nationalisation

Attlee's government also carried out its manifesto commitment to nationalise basic industries and public utilities. The Bank of England and civil aviation were nationalised in 1946. Coal mining, the railways, road haulage, canals and Cable and Wireless were nationalised in 1947, and electricity and gas followed in 1948. The steel industry was nationalised in 1951. By 1951 about 20 per cent of the British economy had been taken into public ownership. Nationalisation failed to provide workers with a greater say in the running of the industries in which they worked. It did, however, bring about significant material gains for workers in the form of higher wages, reduced working hours, and improvements in working conditions, especially in regard to safety. As historian Eric Shaw noted of the years following nationalisation, the electricity and gas supply companies became "impressive models of public enterprise" in terms of efficiency, and the National Coal Board was not only profitable, but working conditions for miners had significantly improved as well. Within a few years of nationalisation, a number of progressive measures had been carried out which did much to improve conditions in the mines, including better pay, a five-day working week, a national safety scheme (with proper standards at all the collieries), a ban on boys under the age of 16 going underground, the introduction of training for newcomers before going down to the coalface, and the making of pithead baths into a standard facility. The newly established National Coal Board offered sick pay and holiday pay to miners. As noted by Martin Francis:

Union leaders saw nationalisation as a means to pursue a more advantageous position within a framework of continued conflict, rather than as an opportunity to replace the old adversarial form of industrial relations. Moreover, most workers in nationalised industries exhibited an essentially instrumentalist attitude, favouring public ownership because it secured job security and improved wages rather than because it promised the creation of a new set of socialist relationships in the workplace.

Agriculture

The Attlee government placed strong emphasis on improving the quality of life in rural areas, benefiting both farmers and other consumers. Security of tenure for farmers was introduced, while consumers were protected by food subsidies and the redistributive effects of deficiency payments. Between 1945 and 1951, the quality of rural life was improved by better gas, electricity, and water services, as well as by improved leisure and public amenities.
In addition, the 1947 Transport Act improved provision of rural bus services, while the Agriculture Act 1947 established a more generous subsidy system for farmers. Legislation was also passed in 1947 and 1948 which established a permanent Agricultural Wages Board to fix minimum wages for agricultural workers. Attlee's government made it possible for farm workers to borrow up to 90 per cent of the cost of building their own houses and to receive a subsidy of £15 a year for 40 years towards that cost. Grants were also made to meet up to half the cost of supplying water to farm buildings and fields; the government met half the cost of bracken eradication and lime spreading; and grants were paid for bringing into use hill farming land that had previously been considered unfit for farming purposes. In 1946, the National Agricultural Advisory Service was set up to supply agricultural advice and information. The Hill Farming Act 1946 introduced for upland areas a system of grants for buildings, land improvement, and infrastructural improvements such as roads and electrification. The Act also continued a system of headage payments for hill sheep and cattle that had been introduced during the war. The Agricultural Holdings Act 1948 enabled (in effect) tenant farmers to have lifelong tenancies and made provision for compensation in the event of the cessation of tenancies. In addition, the Livestock Rearing Act 1951 extended the provisions of the Hill Farming Act 1946 to the upland store cattle and sheep sector. At a time of world food shortages, it was vital that farmers produced the maximum possible quantities. The government encouraged farmers via subsidies for modernisation, while the National Agricultural Advisory Service provided expertise and price guarantees. As a result of the Attlee government's initiatives in agriculture, there was a 20 per cent increase in output between 1947 and 1952, while Britain developed one of the most mechanised and efficient farming industries in the world.

Education

The Attlee government ensured that the provisions of the Education Act 1944 were fully implemented, with free secondary education becoming a right for the first time. Fees in state grammar schools were eliminated, while new, modern secondary schools were constructed. The school leaving age was raised to 15 in 1947, an accomplishment brought to fruition with the help of initiatives such as the HORSA ("Huts Operation for Raising the School-leaving Age") scheme and the S.F.O.R.S.A. (furniture) scheme. University scholarships were introduced to ensure that no one who was qualified "should be deprived of a university education for financial reasons", while a large school building programme was organised. A rapid increase in the number of trained teachers took place, and the number of new school places was increased. Increased Treasury funds were made available for education, particularly for upgrading school buildings suffering from years of neglect and war damage. Prefabricated classrooms were built, and 928 new primary schools were constructed between 1945 and 1950. The provision of free school meals was expanded, and opportunities for university entrants were increased. State scholarships to universities were increased, and the government adopted a policy of supplementing university scholarship awards to a level sufficient to cover fees plus maintenance. Many thousands of ex-servicemen who could never have contemplated it before the war were assisted to go through college.
Free milk was also made available to all schoolchildren for the first time. In addition, spending on technical education rose, and the number of nursery schools was increased. Salaries for teachers were also improved, and funds were allocated towards improving existing schools. In 1947 the Arts Council of Great Britain was set up to encourage the arts. The Ministry of Education was established under the 1944 Act, and free County Colleges were set up for the compulsory part-time instruction of teenagers between the ages of 15 and 18 who were not in full-time education. An Emergency Training Scheme was also introduced which turned out an extra 25,000 teachers in 1945–1951. In 1947, Regional Advisory Councils were set up to bring together industry and education to find out the needs of young workers "and advise on the provision required, and to secure reasonable economy of provision". That same year, thirteen Area Training Organisations were set up in England and one in Wales to coordinate teacher training. Attlee's government, however, failed to introduce the comprehensive education for which many socialists had hoped. This reform was eventually carried out by Harold Wilson's government. During its time in office, the Attlee government increased spending on education by over 50 per cent, from £6.5 billion to £10 billion. Economy The most significant problem facing Attlee and his ministers remained the economy, as the war effort had left Britain nearly bankrupt. Overseas investments had been used up to pay for the war. The transition to a peacetime economy, and the maintaining of strategic military commitments abroad led to continuous and severe problems with the balance of trade. This resulted in strict rationing of food and other essential goods continuing in the post war period to force a reduction in consumption in an effort to limit imports, boost exports, and stabilise the Pound Sterling so that Britain could trade its way out of its financial state. The abrupt end of the American Lend-Lease programme in August 1945 almost caused a crisis. Some relief was provided by the Anglo-American loan, negotiated in December 1945. The conditions attached to the loan included making the pound fully convertible to the US dollar. When this was introduced in July 1947, it led to a currency crisis and convertibility had to be suspended after just five weeks. The UK benefited from the American Marshall Aid program in 1948, and the economic situation improved significantly. Another balance of payments crisis in 1949 forced Chancellor of the Exchequer, Stafford Cripps, into devaluation of the pound. Despite these problems, one of the main achievements of Attlee's government was the maintenance of near full employment. The government maintained most of the wartime controls over the economy, including control over the allocation of materials and manpower, and unemployment rarely rose above 500,000, or 3 per cent of the total workforce. Labour shortages proved a more frequent problem. The inflation rate was also kept low during his term. The rate of unemployment rarely rose above 2 per cent during Attlee's time in office, whilst there was no hard-core of long-term unemployed. Both production and productivity rose as a result of new equipment, while the average working week was shortened. The government was less successful in housing, which was the responsibility of Aneurin Bevan. 
The government had a target to build 400,000 new houses a year to replace those which had been destroyed in the war, but shortages of materials and manpower meant that less than half this number were built. Nevertheless, millions of people were rehoused as a result of the Attlee government's housing policies. Between August 1945 and December 1951, 1,016,349 new homes were completed in England, Scotland, and Wales. When the Attlee government was voted out of office in 1951, the economy was in better shape than it had been in 1945. The period from 1946 to 1951 saw continuous full employment and steadily rising living standards, which increased by about 10 per cent each year. During that same period, the economy grew by 3 per cent a year, and by 1951 the UK had "the best economic performance in Europe, while output per person was increasing faster than in the United States". Careful planning after 1945 also ensured that demobilisation was carried out without having a negative impact upon economic recovery, and that unemployment stayed at very low levels. In addition, the number of motor cars on the roads rose from 3 million to 5 million from 1945 to 1951, and seaside holidays were taken by far more people than ever before. A Monopolies and Restrictive Practices (Inquiry and Control) Act was passed in 1948, which allowed for investigations of restrictive practices and monopolies.

Energy

1947 proved a particularly difficult year for the government; an exceptionally cold winter that year caused coal mines to freeze and cease production, creating widespread power cuts and food shortages. The Minister of Fuel and Power, Emanuel Shinwell, was widely blamed for failing to ensure adequate coal stocks, and soon resigned from his post. The Conservatives capitalised on the crisis with the slogan 'Starve with Strachey and shiver with Shinwell' (referring to the Minister of Food John Strachey). The crisis led to an unsuccessful plot by Hugh Dalton to replace Attlee as Prime Minister with Ernest Bevin. Later that year Stafford Cripps tried to persuade Attlee to stand aside for Bevin. These plots petered out after Bevin refused to cooperate. Later that year, Dalton resigned as Chancellor after inadvertently leaking details of the budget to a journalist. He was replaced by Cripps.

Foreign policy

In foreign affairs, the Attlee government was concerned with four main issues: post-war Europe, the onset of the Cold War, the establishment of the United Nations, and decolonisation. The first two were closely related, and Attlee was assisted by Foreign Secretary Ernest Bevin. Attlee also attended the later stages of the Potsdam Conference, where he negotiated with President Harry S. Truman and Joseph Stalin. In the immediate aftermath of the war, the Government faced the challenge of managing relations with Britain's former wartime ally, Stalin and the Soviet Union. Ernest Bevin was a passionate anti-communist, an attitude based largely on his experience of fighting communist influence in the trade union movement. Bevin's initial approach to the USSR as Foreign Secretary was "wary and suspicious, but not automatically hostile". Attlee himself sought warm relations with Stalin. He put his trust in the United Nations, rejected notions that the Soviet Union was bent on world conquest, and warned that treating Moscow as an enemy would turn it into one. This put Attlee at sword's point with his foreign minister, the Foreign Office, and the military, who all saw the Soviets as a growing threat to Britain's role in the Middle East.
Suddenly, in January 1947, Attlee reversed his position and agreed with Bevin on a hardline anti-Soviet policy. In an early "good-will" gesture that was later heavily criticised, the Attlee government allowed the Soviets to purchase, under the terms of a 1946 UK-USSR trade agreement, a total of 25 Rolls-Royce Nene jet engines in September 1947 and March 1948. The deal included an undertaking not to use the engines for military purposes. The price was fixed under a commercial contract; a total of 55 jet engines were sold to the USSR in 1947. However, the Cold War intensified during this period and the Soviets, who at the time were well behind the West in jet technology, reverse-engineered the Nene and installed their own version in the MiG-15 interceptor. This was used to good effect against US-UK forces in the subsequent Korean War, as well as in several later MiG models. After Stalin took political control of most of Eastern Europe, and began to subvert other governments in the Balkans, Attlee's and Bevin's worst fears of Soviet intentions were realised. The Attlee government then became instrumental in the creation of the successful NATO defence alliance to protect Western Europe against any Soviet expansion. In a crucial contribution to the economic stability of post-war Europe, Attlee's Cabinet also played a key part in promoting the American Marshall Plan for the economic recovery of Europe. He called it one of the "most bold, enlightened and good-natured acts in the history of nations". A group of Labour MPs, organised under the banner of "Keep Left", urged the government to steer a middle way between the two emerging superpowers, and advocated the creation of a "third force" of European powers to stand between the US and USSR. However, deteriorating relations between Britain and the USSR, as well as Britain's economic reliance on America following the Marshall Plan, steered policy towards supporting the US. In January 1947, fear of both Soviet and American nuclear intentions led to a secret meeting of the Cabinet, where the decision was made to press ahead with the development of Britain's independent nuclear deterrent, an issue which later caused a split in the Labour Party. Britain's first successful nuclear test, however, did not occur until 1952, one year after Attlee had left office. The London dock strike of July 1949, led by Communists, was suppressed when the Attlee government sent in 13,000 Army troops and passed special legislation to promptly end the strike. His response reflected Attlee's growing concern that Soviet expansionism, supported by the British Communist Party, was a genuine threat to national security, and that the docks were highly vulnerable to sabotage ordered by Moscow. He noted that the strike had been called not over local grievances but to help communist unions which were on strike in Canada. Attlee agreed with MI5 that he faced "a very present menace".

Decolonisation

Decolonisation was never a major election issue, but Attlee gave the matter a great deal of attention and took the lead in beginning the process of decolonisation of the British Empire.

East Asia

In August 1948, the Chinese Communists' victories caused Attlee to begin preparing for a Communist takeover of China. His government kept consulates open in Communist-controlled areas and rejected the Chinese Nationalists' requests that British citizens assist in the defence of Shanghai.
By December, the government concluded that although British property in China would likely be nationalised, British traders would benefit in the long run from a stable, industrialising Communist China. Retaining Hong Kong was especially important to Attlee; although the Chinese Communists promised not to interfere with British rule there, Britain reinforced the Hong Kong Garrison during 1949. When the victorious Chinese Communist government declared on 1 October 1949 that it would exchange diplomats with any country that ended relations with the Chinese Nationalists, Britain became the first western country to formally recognise the People's Republic of China in January 1950. In 1954, a Labour Party delegation including Attlee visited China at the invitation of then Foreign Minister Zhou Enlai. Attlee became the first high-ranking western politician to meet Mao Zedong.

South Asia

Attlee orchestrated the granting of independence to India and Pakistan in 1947. Attlee had been a member of the Indian Statutory Commission (otherwise known as the Simon Commission) in 1928–1934. He became the Labour Party expert on India and by 1934 was committed to granting India the same independent dominion status that Canada, Australia, New Zealand and South Africa had recently been given. He faced strong resistance from the die-hard Conservative imperialists, led by Churchill, who opposed both independence and efforts led by Prime Minister Stanley Baldwin to set up a system of limited local control by Indians themselves. Attlee and the Labour leadership were sympathetic to both the Congress led by Jawaharlal Nehru and the Pakistan movement led by Muhammad Ali Jinnah. During the Second World War, Attlee was in charge of Indian affairs. He set up the Cripps Mission in 1942, which tried and failed to bring the factions together. When Congress called for passive resistance in the Quit India movement of 1942–1945, the British regime ordered the widespread arrest and internment, for the duration of the war, of tens of thousands of Congress leaders as part of its efforts to crush the revolt. Labour's election manifesto in 1945 called for "the advancement of India to responsible self-government". In 1942 the British Raj tried to enlist all major political parties in support of the war effort. Congress, led by Nehru and Gandhi, demanded immediate independence and full control by Congress of all of India. That demand was rejected by the British, and Congress opposed the war effort with its "Quit India campaign". The Raj immediately responded in 1942 by imprisoning the major national, regional and local Congress leaders for the duration. Attlee did not object. By contrast, the Muslim League, led by Muhammad Ali Jinnah, strongly supported the war effort. They greatly enlarged their membership and won favour from London for their decision. Attlee retained a fondness for Congress and, until 1946, accepted its thesis that it was a non-religious party that accepted Hindus, Muslims, Sikhs, and everyone else. Nevertheless, this difference in attitude between Congress and the Muslim League towards the British war effort encouraged Attlee and his government to consider further negotiations with the Muslim League. The Muslim League insisted that it was the only true representative of all of the Muslims of India. With violence escalating in India after the war, but with British financial power at a low ebb, large-scale military involvement was impossible.
Viceroy Wavell said he needed a further seven army divisions to prevent communal violence if independence negotiations failed. No divisions were available; independence was the only option. Given the increasing demands of the Muslim League, independence implied a partition that separated heavily Muslim Pakistan from the main portion of India. After becoming Prime Minister in 1945, Attlee originally planned to give India Dominion status in 1948. Attlee suggested in his memoirs that "traditional" colonial rule in Asia was no longer viable. He said that he expected it to meet renewed opposition after the war, both from local national movements and from the United States. The prime minister's biographer John Bew says that Attlee hoped for a transition to a multilateral world order and a Commonwealth, and that the old British empire "should not be supported beyond its natural lifespan" and instead be ended "on the right note." His Chancellor of the Exchequer, Hugh Dalton, meanwhile feared that post-war Britain could no longer afford to garrison its empire. Ultimately the Labour government gave full independence to India and Pakistan in 1947 through the Indian Independence Act. This involved creating a demarcation between the two regions which was known as the Radcliffe Line. The boundary between the newly created states of Pakistan and India involved the widespread resettlement of millions of Hindus, Sikhs and Muslims. Almost immediately, extreme anti-Hindu and anti-Sikh violence ensued in Lahore, Multan and Dacca when the Punjab province and the Bengal province were split in the Partition of India. This was followed by a rapid increase in widespread anti-Muslim violence in several areas including Amritsar, Rajkot, Jaipur, Calcutta and Delhi. Historian Yasmin Khan estimates that over a million people were killed, many of them women and children. Gandhi himself was assassinated in January 1948. Attlee described Gandhi as the "greatest citizen" of India and added, "this one man has been the major factor in every consideration of the Indian problem. He had become the expression of the aspirations of the Indian people for independence". Historian Andrew Roberts says the independence of India was a "national humiliation" but it was necessitated by urgent financial, administrative, strategic and political needs. Churchill in 1940–1945 had tightened the hold on India and imprisoned the Congress leadership, with Attlee's approval. Labour had looked forward to making it a fully independent dominion like Canada or Australia. Many of the Congress leaders in India had studied in England, and were highly regarded as fellow idealistic socialists by Labour leaders. Attlee was the Labour expert on India and took special charge of decolonisation. Attlee found that Churchill's viceroy, Field Marshal Wavell, was too imperialistic, too keen on military solutions, and too neglectful of Indian political alignments. The new Viceroy was Lord Mountbatten, the dashing war hero and a cousin of the King. Attlee also sponsored the peaceful transition to independence in 1948 of Burma (Myanmar) and Ceylon (Sri Lanka).

Palestine

One of the most urgent problems facing Attlee concerned the future of the British mandate in Palestine, which had become too troublesome and expensive to handle. British policies in Palestine were perceived by the Zionist movement and the Truman administration to be pro-Arab and anti-Jewish, and Britain soon found itself unable to maintain public order in the face of a Jewish insurgency and a civil war.
During this period, 70,000 Holocaust survivors attempted to reach Palestine as part of the Aliyah Bet refugee movement. Attlee's government tried several tactics to prevent the migration. Five ships were bombed by the Secret Intelligence Service (though with no casualties), with a fake Palestinian group created to take responsibility. The navy apprehended over 50,000 refugees en route, interning them in detention camps in Cyprus. Conditions in the camps were harsh and attracted global criticism. Later, the refugee ship Exodus 1947 would be sent back to mainland Europe, instead of being taken to Cyprus. In response to the increasingly unpopular mandate, Attlee ordered the evacuation of all British military personnel and handed over the issue to the United Nations, a decision which was widely supported by the general public in Britain. With the establishment of the state of Israel in 1948, the camps in Cyprus were eventually closed, with their former occupants finally completing their journey to the new country.

Africa

The government's policies with regard to the other colonies, particularly those in Africa, focused on keeping them as strategic Cold War assets while modernising their economies. The Labour Party had long attracted aspiring leaders from Africa and had developed elaborate plans before the war. Implementing them overnight with an empty treasury proved too challenging. A major military base was built in Kenya, and the African colonies came under an unprecedented degree of direct control from London. Development schemes were implemented to help solve Britain's post-war balance of payments crisis and raise African living standards. This "new colonialism" worked slowly, and had failures such as the Tanganyika groundnut scheme.

Elections

The 1950 election gave Labour a massively reduced majority of five seats compared to the triple-digit majority of 1945. Although Labour was re-elected, Attlee saw the result as very disappointing; it was widely attributed to the effects of post-war austerity denting Labour's appeal to middle-class voters. With such a slim majority leaving him dependent on a small number of MPs to govern, Attlee's second term was much tamer than his first. Some major reforms were nevertheless passed, particularly regarding industry in urban areas and regulations to limit air and water pollution. By 1951, the Attlee government was exhausted, with several of its most senior ministers ailing or ageing, and with a lack of new ideas. Attlee's record of settling internal differences in the Labour Party failed him in April 1951, when there was a damaging split over an austerity Budget brought in by the Chancellor, Hugh Gaitskell, to pay for the cost of Britain's participation in the Korean War. Aneurin Bevan resigned to protest against the new charges for "teeth and spectacles" in the National Health Service introduced by that Budget, and was joined in this action by several senior ministers, including the future Prime Minister Harold Wilson, then the President of the Board of Trade. This escalated the battle between the left and right wings of the Party, one that continues today. Finding it increasingly impossible to govern, Attlee saw his only chance in calling a snap election in October 1951, in the hope of achieving a more workable majority and regaining authority. The gamble failed: Labour narrowly lost to the Conservative Party, despite winning considerably more votes (achieving the largest Labour vote in electoral history).
Attlee tendered his resignation as Prime Minister the following day, after six years and three months in office.

Return to opposition

Following the defeat in 1951, Attlee continued to lead the party as Leader of the Opposition. His last four years as leader were, however, widely seen as one of the Labour Party's weaker periods. The period was dominated by infighting between the Labour Party's right wing, led by Hugh Gaitskell, and its left, led by Aneurin Bevan. Many Labour MPs felt that Attlee should have retired following the 1951 election and allowed a younger man to lead the party. Bevan openly called for him to stand down in the summer of 1954. One of his main reasons for staying on as leader was to frustrate the leadership ambitions of Herbert Morrison, whom Attlee disliked for both political and personal reasons. At one time, Attlee had favoured Aneurin Bevan to succeed him as leader, but this became problematic after Bevan almost irrevocably split the party. Attlee, now aged 72, contested the 1955 general election against Anthony Eden; Labour lost 18 seats and the Conservatives increased their majority. In an interview with the News Chronicle columnist Percy Cudlipp in mid-September 1955, Attlee made clear his own thinking, together with his preference for the leadership succession. He retired as Leader of the Labour Party on 7 December 1955, having led the party for twenty years, and on 14 December Hugh Gaitskell was elected as his successor.

Global policy

He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt a Constitution for the Federation of Earth.

Retirement

He subsequently retired from the House of Commons and was elevated to the peerage as Earl Attlee and Viscount Prestwood on 16 December 1955, taking his seat in the House of Lords on 25 January. He believed Eden had been forced into taking a strong stand on the Suez Crisis by his backbenchers. In 1958, Attlee, along with numerous notables, established the Homosexual Law Reform Society: this campaigned for the decriminalisation of homosexual acts in private by consenting adults, a reform that was voted through Parliament nine years later. In May 1961, he travelled to Washington, D.C., to meet with President Kennedy. In 1962, he spoke twice in the House of Lords against the British government's application for the UK to join the European Communities ("Common Market"). In his second speech, delivered in November, Attlee claimed that Britain had a separate parliamentary tradition from the Continental European countries that comprised the EC. He also claimed that if Britain became a member, EC rules would prevent the British government from planning the economy and that Britain's traditional policy had been outward-looking rather than Continental. He attended Winston Churchill's funeral in January 1965. He was frail by that time, and had to remain seated in the freezing cold as the coffin was carried, having tired himself out by standing at the rehearsal the previous day. He lived to see the Labour Party return to power under Harold Wilson in 1964, and also to see his old constituency of Walthamstow West fall to the Conservatives in a by-election in September 1967.

Death

Attlee died peacefully in his sleep of pneumonia, at the age of 84, at Westminster Hospital on 8 October 1967.
Two thousand people attended his funeral in November, including the then-Prime Minister Harold Wilson and the Duke of Kent, representing the Queen. He was cremated and his ashes were buried at Westminster Abbey. Upon his death, the title passed to his son Martin Richard Attlee, 2nd Earl Attlee (1927–1991), who defected from Labour to the SDP in 1981. It is now held by Clement Attlee's grandson John Richard Attlee, 3rd Earl Attlee. The third earl (a member of the Conservative Party) retained his seat in the Lords as one of the hereditary peers permitted to remain under an amendment to Labour's House of Lords Act 1999. Attlee's estate was sworn for probate purposes at a value of £7,295, a relatively modest sum for so prominent a figure, and only a fraction of the £75,394 in his father's estate when he died in 1908.

Legacy

The quotation about Attlee, "A modest man, but then he has so much to be modest about", is commonly ascribed to Churchill, though Churchill denied saying it and respected Attlee's service in the War Cabinet. Attlee's modesty and quiet manner hid a great deal that has only come to light with historical reappraisal. Attlee himself is said to have responded to critics with a limerick: "There were few who thought him a starter, Many who thought themselves smarter. But he ended PM, CH and OM, an Earl and a Knight of the Garter". The journalist and broadcaster Anthony Howard called him "the greatest Prime Minister of the 20th century". His leadership style of consensual government, acting as a chairman rather than a president, won him much praise from historians and politicians alike. Christopher Soames, the British Ambassador to France during the Conservative government of Edward Heath and cabinet minister under Margaret Thatcher, remarked that "Mrs Thatcher was not really running a team. Every time you have a Prime Minister who wants to make all the decisions, it mainly leads to bad results. Attlee didn't. That's why he was so damn good". Thatcher herself wrote in her 1995 memoirs, which charted her life from her beginnings in Grantham to her victory at the 1979 general election, that she admired Attlee, writing: "Of Clement Attlee, however, I was an admirer. He was a serious man and a patriot. Quite contrary to the general tendency of politicians in the 1990s, he was all substance and no show". Attlee's government presided over the successful transition from a wartime economy to peacetime, tackling problems of demobilisation, shortages of foreign currency, and adverse deficits in trade balances and government expenditure. Further domestic policies that he brought about included the creation of the National Health Service and the post-war welfare state, which became key to the reconstruction of post-war Britain. Attlee and his ministers did much to transform the UK into a more prosperous and egalitarian society during their time in office, with reductions in poverty and a rise in the general economic security of the population. In foreign affairs, he did much to assist with the post-war economic recovery of Europe. He proved a loyal ally of the US at the onset of the Cold War. Due to his style of leadership, it was not Attlee himself but Ernest Bevin who masterminded foreign policy. It was Attlee's government that decided Britain should have an independent nuclear weapons programme, and work on it began in 1947. Bevin, Attlee's Foreign Secretary, famously stated that "We've got to have it [nuclear weapons] and it's got to have a bloody Union Jack on it".
The first operational British nuclear bomb was not detonated until October 1952, about one year after Attlee had left office. Independent British atomic research was prompted partly by the US McMahon Act, which nullified wartime expectations of postwar US–UK collaboration in nuclear research, and prohibited Americans from communicating nuclear technology even to allied countries. British atomic bomb research was kept secret even from some members of Attlee's own cabinet, whose loyalty or discretion seemed uncertain. Although a socialist, Attlee still believed in the British Empire of his youth. He thought of it as an institution that was a power for good in the world. Nevertheless, he saw that a large part of it needed to be self-governing. Using the Dominions of Canada, Australia, and New Zealand as a model, he continued the transformation of the empire into the modern-day British Commonwealth. His greatest achievement, surpassing many of these, was perhaps the establishment of a political and economic consensus about the governance of Britain that all three major parties subscribed to for three decades, fixing the arena of political discourse until the late 1970s. In 2004, he was voted the most successful British Prime Minister of the 20th century by a poll of 139 academics organised by Ipsos MORI. A blue plaque unveiled in 1979 commemorates Attlee at 17 Monkhams Avenue, in Woodford Green in the London borough of Redbridge. Attlee was elected a Fellow of the Royal Society in 1947. Attlee was awarded an Honorary Fellowship of Queen Mary College on 15 December 1948. In the 1960s a new suburb near Curepipe in British Mauritius was given the name Cité Atlee in his honour.

Statues

On 30 November 1988, a bronze statue of Clement Attlee was unveiled by Harold Wilson (the next Labour Prime Minister after Attlee) outside Limehouse Library in Attlee's former constituency. By then Wilson was the last surviving member of Attlee's cabinet, and the unveiling of the statue would be one of the last public appearances by Wilson, who was by that point in the early stages of Alzheimer's disease; he died at the age of 79 in May 1995. Limehouse Library was closed in 2003, after which the statue was vandalised. The council surrounded it with protective hoarding for four years, before eventually removing it for repair and recasting in 2009. The restored statue was unveiled by Peter Mandelson in April 2011, in its new position less than a mile away at the Queen Mary University of London's Mile End campus. There is also a statue of Clement Attlee in the Houses of Parliament, erected, instead of a bust, by parliamentary vote in 1979. The sculptor was Ivor Roberts-Jones.

Cultural depictions

Personal life

Attlee met Violet Millar while on a long trip with friends to Italy in 1921. They fell in love and were soon engaged, marrying at Christ Church, Hampstead, on 10 January 1922. It would come to be a devoted marriage, with Attlee providing protection and Violet providing a home that was an escape for Attlee from political turmoil. She died in 1964. They had four children:

Lady Janet Helen (1923–2019), who married the scientist Harold Shipton (1920–2007) at Ellesborough Parish Church in 1947.
Lady Felicity Ann (1925–2007), who married the business executive John Keith Harwood (1926–1989) at Little Hampden in 1955.
Martin Richard, Viscount Prestwood, later 2nd Earl Attlee (1927–1991), who married Anne Henderson on 16 February 1955.
Lady Alison Elizabeth (1930–2016), who married Richard Davis at Great Missenden in 1952.
Religious views Although his parents were devout Anglicans, with one of his brothers becoming a clergyman and one of his sisters a missionary, Attlee himself is usually regarded as an agnostic. In an interview he described himself as "incapable of religious feeling", saying that he believed in "the ethics of Christianity" but not "the mumbo-jumbo". When asked whether he was an agnostic, Attlee replied "I don't know". Honours and arms See also Briggs Plan Ethical socialism Information Research Department Malayan Emergency New village Postwar Britain References Citations Notes Bibliography Further reading Biographical Beckett, Francis. Clem Attlee (1998) – updated and revised and expanded edition, Clem Attlee: Labour's Great Reformer (2015) Bew, John. Citizen Clem: A Biography of Attlee, (London: 2016, British edition); Clement Attlee: The Man Who Made Modern Britain (New York: Oxford University Press, 2017, U.S. edition) Burridge, Trevor. Clement Attlee: A Political Biography (1985), scholarly Cohen, David. Churchill & Attlee: The Unlikely Allies who Won the War (Biteback Publishing, 2018), popular Crowcroft, Robert. Attlee's War: World War II and the Making of a Labour Leader (IB Tauris, 2011) Harris, Kenneth. Attlee (1982), scholarly authorised biography Howell, David. Attlee (2006) Jago, Michael. Clement Attlee: The Inevitable Prime Minister (2014) Pearce, Robert. Attlee (1997), 206pp Thomas-Symonds, Nicklaus. Attlee: A Life in Politics (IB Tauris, 2010). Whiting, R. C. "Attlee, Clement Richard, first Earl Attlee (1883–1967)", Oxford Dictionary of National Biography, 2004; online edn, Jan 2011 accessed 12 June 2013 doi:10.1093/ref:odnb/30498 Biographies of his cabinet and associates Rosen, Greg. ed. Dictionary of Labour Biography. (Politicos Publishing, 2002); Morgan, Kenneth O. Labour people: Leaders and Lieutenants, Hardie to Kinnock (1987) Scholarly studies Addison, Paul. No Turning Back: The Peaceful Revolutions of Post-War Britain (2011) excerpt and text search , detailed coverage of nationalisation, welfare state and planning. Crowcroft, Robert, and Kevin Theakston. "The Fall of the Attlee Government, 1951", in Timothy Heppell and Kevin Theakston, eds. How Labour Governments Fall (Palgrave Macmillan UK, 2013). PP 61–82. Francis, Martin. Ideas and policies under Labour, 1945–1951: building a new Britain (Manchester University Press, 1997). Golant, W. "The Emergence of CR Attlee as Leader of the Parliamentary Labour Party in 1935", Historical Journal, 13#2 (1970): 318–332. in JSTOR Jackson, Ben. "Citizen and Subject: Clement Attlee's Socialism", History Workshop Journal (2018). Vol. 86 pp 291–298. online. Jeffreys, Kevin. "The Attlee Years, 1935–1955", in Brivati, Brian, and Heffernan, Richard, eds., The Labour Party: A Centenary History, Palgrave Macmillan UK, 2000. 68–86. Kynaston, David. Austerity Britain, 1945–1951 (2008). Mioni, Michele. "The Attlee government and welfare state reforms in post-war Italian Socialism (1945–51): Between universalism and class policies", Labor History, 57#2 (2016): 277–297. DOI:10.1080/0023656X.2015.1116811 Morgan, Kenneth O. Labour in Power 1945–1951 (1984), 564 pp. Ovendale, R. ed., The foreign policy of the British Labour governments, 1945–51 (1984) · Pugh, Martin. Speak for Britain!: A New History of the Labour Party (2011) excerpt and text search Swift, John. Labour in Crisis: Clement Attlee & the Labour Party in Opposition, 1931–1940 (2001) Tomlinson, Jim. 
Democratic Socialism and Economic Policy: The Attlee Years, 1945–1951 (2002) Excerpt and text search Weiler, Peter. "British Labour and the cold war: the foreign policy of the Labour governments, 1945–1951", Journal of British Studies, 26#1 (1987): 54–82. in JSTOR Works Clement Attlee published his memoirs, As it Happened, in 1954. Francis Williams' A Prime Minister Remembers, based on interviews with Attlee, was published in 1961. Attlee's other publications The Social Worker (1920) Metropolitan Borough Councils Their Constitution, Powers and Duties – Fabian Tract No 190 (1920) The Town Councillor (1925) The Will and the Way to Socialism (1935) The Labour Party in Perspective (1937) Collective Security Under the United Nations (1958) Empire into Commonwealth (1961) External links Clement Attlee – Thanksgiving Speech 1950 – UK Parliament Living Heritage More about Clement Attlee on the Downing Street website. Annotated bibliography for Clement Attlee from the Alsos Digital Library for Nuclear Issues Drawing of Clement Attlee in the UK Parliamentary Collections 1883 births 1967 deaths 20th-century English politicians 20th-century prime ministers of the United Kingdom Academics of Ruskin College Academics of the London School of Economics Alumni of University College, Oxford Alumni of the Inns of Court School of Law Men's association football players not categorized by position British Army personnel of World War I British Empire in World War II British Secretaries of State for Dominion Affairs British Secretaries of State British socialists British Zionists British people of World War II British social democrats Burials at Westminster Abbey Chancellors of the Duchy of Lancaster Deaths from pneumonia in England Deputy Prime Ministers of the United Kingdom Clement English agnostics English men's footballers Fellows of the Royal Society Fleet Town F.C. players Foreign Office personnel of World War II Knights of the Garter Labour Party (UK) MPs for English constituencies Labour Party (UK) hereditary peers Labour Party prime ministers of the United Kingdom Leaders of the Labour Party (UK) Leaders of the Opposition (United Kingdom) Lord Presidents of the Council Lords Privy Seal Mayors of places in Greater London Members of Stepney Metropolitan Borough Council Members of the Fabian Society Members of the Order of Merit Members of the Order of the Companions of Honour Members of the Privy Council of the United Kingdom Ministers in the Attlee governments, 1945–1951 Ministers in the Churchill wartime government, 1940–1945 National Council for Civil Liberties people People educated at Haileybury and Imperial Service College People from Putney People of the Cold War Prime Ministers of the United Kingdom South Lancashire Regiment officers UK MPs 1922–1923 UK MPs 1923–1924 UK MPs 1924–1929 UK MPs 1929–1931 UK MPs 1931–1935 UK MPs 1935–1945 UK MPs 1945–1950 UK MPs 1950–1951 UK MPs 1951–1955 UK MPs 1955–1959 UK MPs who were granted peerages United Kingdom Postmasters General United Kingdom home front during World War II World War II political leaders Earls created by Elizabeth II Members of the Inner Temple British Eurosceptics Labour Party (UK) mayors World Constitutional Convention call signatories
5768
https://en.wikipedia.org/wiki/Catullus
Catullus
Gaius Valerius Catullus (84–54 BCE), often referred to simply as Catullus, was a Latin poet of the late Roman Republic who wrote chiefly in the neoteric style of poetry, focusing on personal life rather than classical heroes. His surviving works are still read widely and continue to influence poetry and other forms of art. Catullus's poems were widely appreciated by contemporary poets, significantly influencing Ovid and Virgil, among others. After his rediscovery in the Late Middle Ages, Catullus again found admirers such as Petrarch. The explicit sexual imagery which he uses in some of his poems has shocked many readers. Yet, at many levels of instruction, Catullus is considered a resource for teachers of Latin. Catullus's style is highly personal, humorous, and emotional; he frequently uses hyperbole, anaphora, alliteration, and diminutives. In 25 of his poems, he mentions his devotion to a woman he refers to as "Lesbia", who is widely believed to have been the Roman aristocrat Clodia Metelli. One of the most famous of his poems is his 5th, which is often recognized for its passionate language and opening line: "Vivamus mea Lesbia atque amemus" ("Let us live, my Lesbia, and let us love"). Life Gāius Valerius Catullus was born to a leading equestrian family of Verona, in Cisalpine Gaul. The social prominence of the Catullus family allowed the father of Gaius Valerius to entertain Julius Caesar when he was the promagistrate (proconsul) of both Gallic provinces. In a poem, Catullus describes his happy homecoming to the family villa at Sirmio, on Lake Garda, near Verona; he also owned a villa near the resort of Tibur (modern Tivoli). Catullus appears to have spent most of his young adult years in Rome. His friends there included the poets Licinius Calvus and Helvius Cinna, Quintus Hortensius (son of the orator and rival of Cicero) and the biographer Cornelius Nepos, to whom Catullus dedicated a libellus of poems, the relation of which to the extant collection remains a matter of debate. He appears to have been acquainted with the poet Marcus Furius Bibaculus. A number of prominent contemporaries appear in his poetry, including Cicero, Caesar and Pompey. According to an anecdote preserved by Suetonius, Caesar did not deny that Catullus's lampoons left an indelible stain on his reputation, but when Catullus apologized, he invited the poet for dinner the very same day. It was probably in Rome that Catullus fell deeply in love with the "Lesbia" of his poems, who is usually identified with Clodia Metelli, a sophisticated woman from the patrician house of the Claudii Pulchri, sister of the infamous Publius Clodius Pulcher, and wife of the proconsul Quintus Caecilius Metellus Celer. In his poems Catullus describes several stages of their relationship: initial euphoria, doubts, separation, and his wrenching feelings of loss. Clodia had several other partners; "From the poems one can adduce no fewer than five lovers in addition to Catullus: Egnatius (poem 37), Gellius (poem 91), Quintius (poem 82), Rufus (poem 77), and Lesbius (poem 79)." There is also some question surrounding her husband's mysterious death in 59 BCE, with some critics believing that he was poisoned within his own household. However, a sensitive and passionate Catullus could not relinquish his flame for Clodia, regardless of her obvious indifference to his desire for a deep and permanent relationship. 
In his poems, Catullus wavers between devout, sweltering love and bitter, scornful insults that he directs at her blatant infidelity (as demonstrated in poems 11 and 58). His passion for her is unrelenting, yet it is unclear when exactly the couple split up for good. Catullus's poems about the relationship display striking depth and psychological insight. He spent the provincial command year from summer 57 to summer 56 BCE in Bithynia on the staff of the commander Gaius Memmius. While in the East, he traveled to the Troad to perform rites at his brother's tomb, an event recorded in a moving poem. No ancient biography of Catullus has survived. His life has to be pieced together from scattered references to him in other ancient authors and from his poems. Thus it is uncertain when he was born and when he died. Jerome stated that he was born in 87 BCE and died in Rome in his 30th year. However, Catullus's poems include references to events of 55 and 54 BCE. Since the Roman consular fasti make it somewhat easy to confuse 87–57 BCE with 84–54 BCE, many scholars accept the dates 84–54 BCE, supposing that his latest poems and the publication of his libellus coincided with the year of his death. Other authors suggest 52 or 51 BCE as the year of the poet's death. Though upon his elder brother's death Catullus lamented that their "whole house was buried along" with the deceased, the existence (and prominence) of Valerii Catulli is attested in the following centuries. T. P. Wiseman argues that after the brother's death Catullus could have married, and that, in this case, the later Valerii Catulli may have been his descendants. Poetry Sources and organization Catullus's poems have been preserved in an anthology of 116 carmina (the actual number of poems varies slightly between editions), which can be divided into three parts according to their form: sixty short poems in varying meters, called polymetra, eight longer poems, and forty-eight epigrams. There is no scholarly consensus on whether Catullus himself arranged the order of the poems. The longer poems differ from the polymetra and the epigrams not only in length but also in their subjects: there are seven hymns and one mini-epic, or epyllion, the most highly prized form for the "new poets". The polymetra and the epigrams can be divided into four major thematic groups (ignoring a rather large number of poems that elude such categorization):
poems to and about his friends (e.g., an invitation like poem 13);
erotic poems: some of them (e.g., 50, 9, 99) are about his attraction toward other men, but others are about women, especially about one he calls "Lesbia" (a false name for his married lover, Clodia, the source and inspiration of many of his poems); in modern terms he would likely be called bisexual, though the Romans had no such labels;
invectives: often rude and sometimes downright obscene poems targeted at friends-turned-traitors (e.g., poem 16), other lovers of Lesbia, well-known poets, politicians (e.g., Julius Caesar) and rhetors, including Cicero;
condolences: some poems of Catullus are solemn in nature; poem 96 comforts a friend on the death of a loved one, while several others, most famously poem 101, lament the death of his brother.
All these poems describe the lifestyle of Catullus and his friends, who, despite Catullus's temporary political post in Bithynia, lived their lives withdrawn from politics. They were interested mainly in poetry and love. 
Above all other qualities, Catullus seems to have valued venustas, or charm, in his acquaintances, a theme which he explores in a number of his poems. The ancient Roman concept of virtus (i.e., of virtue that had to be proved by a political or military career), which Cicero suggested as the solution to the societal problems of the late Republic, meant little to them. However Catullus does not reject traditional notions, but rather their particular application to the vita activa of politics and war. Indeed, he tries to reinvent these notions from a personal point of view and to introduce them into human relationships. For example, he applies the word fides, which traditionally meant faithfulness towards one's political allies, to his relationship with Lesbia and reinterprets it as unconditional faithfulness in love. So, despite the seeming frivolity of his lifestyle, Catullus measured himself and his friends by quite ambitious standards. Intellectual influences Catullus's poetry was influenced by the innovative poetry of the Hellenistic Age, and especially by Callimachus and the Alexandrian school, which had propagated a new style of poetry that deliberately turned away from the classical epic poetry in the tradition of Homer. Cicero called these local innovators neoteroi () or "moderns" (in Latin poetae novi or 'new poets'), in that they cast off the heroic model handed down from Ennius in order to strike new ground and ring a contemporary note. Catullus and Callimachus did not describe the feats of ancient heroes and gods (except perhaps in re-evaluating and predominantly artistic circumstances, e.g. poems 63 and 64), focusing instead on small-scale personal themes. Although these poems sometimes seem quite superficial and their subjects often are mere everyday concerns, they are accomplished works of art. Catullus described his work as expolitum, or polished, to show that the language he used was very carefully and artistically composed. Catullus was also an admirer of Sappho, a female poet of the seventh century BCE. Catullus 51 partly translates, partly imitates, and transforms Sappho 31. Some hypothesize that 61 and 62 were perhaps inspired by lost works of Sappho but this is purely speculative. Both of the latter are epithalamia, a form of laudatory or erotic wedding-poetry that Sappho was famous for. Catullus twice used a meter that Sappho was known for, called the Sapphic stanza, in poems 11 and 51, perhaps prompting his successor Horace's interest in the form. Catullus, as was common to his era, was greatly influenced by stories from Greek and Roman myth. His longer poems—such as 63, 64, 65, 66, and 68—allude to mythology in various ways. Some stories he refers to are the wedding of Peleus and Thetis, the departure of the Argonauts, Theseus and the Minotaur, Ariadne's abandonment, Tereus and Procne, as well as Protesilaus and Laodamia. Style Catullus wrote in many different meters including hendecasyllabic verse and elegiac couplets (common in love poetry). A great part of his poetry shows strong and occasionally wild emotions, especially in regard to Lesbia (e.g., poems 5 and 7). His love poems are very emotional and ardent, and are relatable to this day. Catullus describes his Lesbia as having multiple suitors and often showing little affection towards him. He also demonstrates a great sense of humour such as in Catullus 13. Musical settings The Hungarian born British composer Matyas Seiber set poem 31 for unaccompanied mixed chorus Sirmio in 1957. 
The American composer Ned Rorem set Catullus 101 to music for voice and piano; the song, "Catullus: On the Burial of His Brother", was originally published in 1969. Catullus Dreams (2011) is a song cycle by David Glaser set to texts of Catullus, scored for soprano and seven instruments; it was premiered at Symphony Space in New York by the soprano Linda Larson and the Sequitur Ensemble. "Carmina Catulli" is a song cycle arranged from 17 of Catullus's poems by the American composer Michael Linton. The cycle was recorded in December 2013 and premiered at Carnegie Hall's Weill Recital Hall in March 2014 by the French baritone Edwin Crossley-Mercer and the pianist Jason Paul Peterson. Thomas Campion also wrote a lute-song using his own translation of the first six lines of Catullus 5, followed by two verses of his own; the translation by Richard Crashaw was set to music in a four-part glee by Samuel Webbe Jr. It was also set to music in a three-part glee by John Stafford Smith. Catullus 5, the love poem "Vivamus mea Lesbia atque amemus", in the translation by Ben Jonson, was set to music as a lute-accompanied song by Alfonso Ferrabosco the younger. The Dutch composer Bertha Tideman-Wijers used Catullus's text for her composition Variations on Valerius, "Where that one already turns or turns". The Icelandic composer Jóhann Jóhannsson set Catullus 85 to music; the song, entitled "Odi Et Amo", appears on Jóhannsson's album Englabörn and is sung through a vocoder, accompanied by string quartet and piano. Catulli Carmina is a cantata by Carl Orff set to the texts of Catullus. The Finnish jazz singer Reine Rimón has recorded poems of Catullus set to standard jazz tunes. Cultural depictions The 1888 play Lesbia by Richard Davey depicts the relationship between Catullus and Lesbia, based on incidents from his poems. Catullus was the main protagonist of the historical novel Farewell, Catullus (1953) by Pierson Dixon. The novel shows the corruption of Roman society. Vladimir Nabokov's novel Lolita makes multiple explicit and implicit allusions to Catullus's work. W. G. Hardy's novel The City of Libertines (1957) tells the fictionalized story of Catullus and a love affair during the time of Julius Caesar. The Financial Post described the book as "an authentic story of an absorbing era". A poem by Catullus is recited to Cleopatra in the eponymous 1963 film when Julius Caesar comes to visit her; they talk about him (Cleopatra: 'Catullus doesn't approve of you. Why haven't you had him killed?' Caesar: 'Because I approve of him.') and Caesar then recites other poems by him. In 1969 the American poet Louis Zukofsky wrote a set of homophonic translations of Catullus that attempted to replicate in English the sound of the originals as the primary emphasis, rather than the more common emphasis on their sense (although the relationship between sound and sense there is often misrepresented and has been clarified by careful study); his Catullus versions have had extensive influence on contemporary innovative poetry and homophonic translation, including the work of the poets Robert Duncan, Robert Kelly, and Charles Bernstein. Catullus is the protagonist of Tom Holland's 1995 novel Attis. Catullus appears in Steven Saylor's novel The Venus Throw as the embittered ex-lover of Clodia (the woman he calls Lesbia), sister of Publius Clodius Pulcher. See also Codex Vaticanus Ottobonianus Latinus 1829 Poetry of Catullus Prosody (Latin) References Further reading Calinski, T. (2021). . Darmstadt: WBG Academic Claes, P. (2002). 
Concatenatio Catulliana, A New Reading of the Carmina. Amsterdam: J.C. Gieben Hild, Christian (2013). . St. Ingbert: Röhrig. . Kaggelaris, N. (2015), "Wedding Cry: Sappho (Fr. 109 LP, Fr. 104(a) LP)- Catullus (c. 62. 20-5)- modern greek folk songs" [in Greek] in Avdikos, E.- Koziou-Kolofotia, B. (ed.) Modern Greek folk songs and history, Karditsa, pp. 260–70 Radici Colace, P., , 1985, pp. 53–71. Radici Colace, P., , 1987, 39-57. Radici Colace, P., , Reggio Calabria 1989, 137-142. Radici Colace, P., , in AA.VV., , Pisa 1992, 1-13. Radici Colace, P., , Messana n.s.15, 1993, 23-44. Radici Colace, P., , (Napoli 9 maggio 1995) ―A.I.O.N.‖ XVIII, 1996, 155-167. Radici Colace, P., , in ―Paideia‖ LXIV, 2009, 553-561 External links Works by Catullus at Perseus Digital Library Catullus translations: Catullus's work in Latin and multiple (ten or more) modern languages, including scanned versions of every poem Catullus in Latin and English Catullus translated exclusively in English Translated by A. S. Kline Catullus Online: searchable Latin text, repertory of conjectures, and images of the most important manuscripts Catullus: Latin text, concordances and frequency list Catullus purified: a brief history of Carmen 16 by Thomas Nelson Winter SORGLL: Catullus 5, read by Robert Sonkowsky 1st-century BC Romans 1st-century BC Roman poets Elegiac poets Golden Age Latin writers Iambic poets 80s BC births 54 BC deaths Writers from Verona Bisexual male writers Bisexual poets Italian bisexual people Italian LGBT poets LGBT history in Italy Valerii Ancient Roman LGBT people
5769
https://en.wikipedia.org/wiki/C.%20S.%20Forester
C. S. Forester
Cecil Louis Troughton Smith (27 August 1899 – 2 April 1966), known by his pen name Cecil Scott "C. S." Forester, was an English novelist known for writing tales of naval warfare, such as the 12-book Horatio Hornblower series depicting a Royal Navy officer during the Napoleonic Wars. The Hornblower novels A Ship of the Line and Flying Colours were jointly awarded the James Tait Black Memorial Prize for fiction in 1938. His other works include The African Queen (1935; turned into a 1951 film by John Huston) and The Good Shepherd (1955; turned into a 2020 film, Greyhound, adapted by and starring Tom Hanks). Early years Forester was born in Cairo. After the family broke up when he was still at an early age his mother took him with her to London, where he was educated at Alleyn's School and Dulwich College. He began to study medicine at Guy's Hospital, but left without completing his degree. He was of good height and somewhat athletic, but wore glasses and had a slender physique, so he failed his Army physical and was told that there was no chance that he would be accepted. He began writing seriously, using his pen name, in around 1921. Second World War During the Second World War Forester moved to the United States, where he worked for the British Ministry of Information and wrote propaganda to encourage the U.S. to join the Allies. He eventually settled in Berkeley, California. In 1942, while he was living in Washington, D.C., he met Roald Dahl and encouraged him to write about his experiences in the RAF. According to Dahl's autobiography, Lucky Break, Forester asked him about his experiences as a fighter pilot, and this prompted Dahl to write his first story, "A Piece of Cake". Literary career Forester wrote many novels, but he is best known for the 12-book Horatio Hornblower series about an officer in the Royal Navy during the Napoleonic Wars. He began the series with Hornblower fairly high in rank in the first novel, which was published in 1937, but demand for more stories led him to fill in Hornblower's life story, and he wrote novels detailing his rise from the rank of midshipman. The last completed novel was published in 1962. Hornblower's fictional adventures were based on real events, but Forester wrote the body of the works carefully to avoid entanglements with real world history, so that Hornblower is always off on another mission when a great naval battle occurs during the Napoleonic Wars. Forester's other novels include The African Queen (1935) and The General (1936); two novels about the Peninsular War, Death to the French (published in the United States as Rifleman Dodd) and The Gun (filmed as The Pride and the Passion in 1957); and seafaring stories that do not involve Hornblower, such as Brown on Resolution (1929), The Captain from Connecticut (1941), The Ship (1943), and Hunting the Bismarck (1959), which was used as the basis of the screenplay for the film Sink the Bismarck! (1960). Several of his novels have been filmed, including The African Queen (1951), directed by John Huston. Forester is also credited as story writer on several films not based on his published novels, including Commandos Strike at Dawn (1942). Forester also wrote several volumes of short stories set during the Second World War. Those in The Nightmare (1954) were based on events in Nazi Germany, ending at the Nuremberg trials. 
The linked stories in The Man in the Yellow Raft (1969) follow the career of the destroyer USS Boon, while many of the stories in Gold from Crete (1971) follow the destroyer HMS Apache. The last of the stories in Gold from Crete is "If Hitler Had Invaded England", which offers an imagined sequence of events starting with Hitler's attempt to implement Operation Sea Lion and culminating in the early military defeat of Nazi Germany in the summer of 1941. His non-fiction works about seafaring include The Age of Fighting Sail (1956), an account of the sea battles between Great Britain and the United States in the War of 1812. Forester also published the crime novels Payment Deferred (1926) and Plain Murder (1930), as well as two children's books. Poo-Poo and the Dragons (1942) was created as a series of stories told to his son George to encourage him to finish his meals; George had mild food allergies and needed encouragement to eat. The Barbary Pirates (1953) is a children's history of early 19th-century pirates. Forester appeared as a contestant on the television quiz programme You Bet Your Life, hosted by Groucho Marx, in an episode broadcast on 1 November 1956. A previously unknown novel of Forester's, The Pursued, was discovered in 2003 and published by Penguin Classics on 3 November 2011. Personal life Forester married Kathleen Belcher in 1926. They had two sons, John, born in 1929, and George, born in 1933. The couple divorced in 1945. In 1947 he married Dorothy Foster. Kathleen Belcher's great-uncle was Capt. Edward Belcher, RN, who achieved renown as a hydrographer and explorer. After his retirement, Belcher devoted much of his time to writing. After penning biographical material, he turned his hand to naval fiction, inventing a character called Horatio Howard Brenton and attributing great feats and adventures to him. It is possible that Forester found some inspiration in these stories for his own Horatio Hornblower. Forester died in Fullerton, California, on 2 April 1966. John Forester wrote a two-volume biography of his father, including many elements of Forester's life which became clear to his son only after his father's death. Bibliography Horatio Hornblower 1950 Mr Midshipman Hornblower. Michael Joseph. 1941 "The Hand of Destiny". Collier's 1950 "Hornblower and the Widow McCool" ("Hornblower's Temptation" / "Hornblower and the Big Decision"). The Saturday Evening Post 1952 Lieutenant Hornblower. Michael Joseph. 1962 Hornblower and the Hotspur. Michael Joseph. 1967 Hornblower and the Crisis, an unfinished novel. Michael Joseph. Published in the US as Hornblower During the Crisis (posthumous) 1953 Hornblower and the Atropos. Michael Joseph. 1937 The Happy Return. Michael Joseph. Published in the US as Beat to Quarters 1938 A Ship of the Line. Michael Joseph. 1941 "Hornblower's Charitable Offering". Argosy 1938 Flying Colours. Michael Joseph. 1941 "Hornblower and His Majesty". Collier's 1945 The Commodore. Michael Joseph. Published in the US as Commodore Hornblower 1946 Lord Hornblower. Michael Joseph. 1958 Hornblower in the West Indies. Michael Joseph. Published in the US as Admiral Hornblower in the West Indies 1967 "The Last Encounter". Sunday Mirror, 8 May 1966 (posthumous). 1964 The Hornblower Companion. Michael Joseph. (Supplementary book comprising another short story, "The Point and the Edge" only as an outline, "The Hornblower Atlas" and "Some Personal Notes") Omnibus 1964 The Young Hornblower (a compilation of books 1, 2 & 3). Michael Joseph. 
1965 Captain Hornblower (a compilation of books 5, 6 & 7). Michael Joseph. 1968 Admiral Hornblower (a compilation of books 8, 9, 10 & 11). Michael Joseph. 2011 Hornblower Addendum – Five Short Stories (originally published in magazines) Other novels 1924 A Pawn among Kings. Methuen. 1924 The Paid Piper. Methuen. 1926 Payment Deferred. Methuen. 1927 Love Lies Dreaming. John Lane. 1927 The Wonderful Week. John Lane. 1928 The Daughter of the Hawk. John Lane. 1929 Brown on Resolution. John Lane. 1930 Plain Murder. John Lane. 1931 Two-and-Twenty. John Lane. 1932 Death to the French. John Lane. Published in the U.S. as Rifleman Dodd. Little Brown. 1933 The Gun. John Lane. 1934 The Peacemaker. Heinemann. 1935 The African Queen. Heinemann. 1935 The Pursued (a lost novel rediscovered in 1999 and published by Penguin Classics in 2011) 1936 The General. Michael Joseph. First published as a serial in the News Chronicle 14–18 January 1935 1940 The Earthly Paradise. Michael Joseph. Published in the U.S. as To the Indies. 1941 The Captain from Connecticut. Michael Joseph. 1942 Poo-Poo and the Dragons. Michael Joseph. 1943 The Ship. Michael Joseph. 1948 The Sky and the Forest. Michael Joseph. 1951 Randall and the River of Time. Michael Joseph. 1955 The Good Shepherd. Michael Joseph. Short stories "The Wandering Gentile", Liverpool Echo, 1955 Posthumous 1967 Long before Forty (autobiographical). Michael Joseph. 1971 Gold from Crete (short stories). Michael Joseph. 2011 The Pursued (novel). Penguin. Collections 1944 The Bedchamber Mystery; to which is added the story of The Eleven Deckchairs and Modernity and Maternity. S. J. Reginald Saunders. Published in the US as Three Matronly Mysteries. eNet Press 1954 The Nightmare. Michael Joseph 1969 The Man in the Yellow Raft. Michael Joseph (posthumous) Plays in three acts; John Lane 1931 U 97 1933 Nurse Cavell. (with C. E. Bechhofer Roberts) Non-fiction 1922 Victor Emmanuel II. Methuen (?) 1927 Victor Emmanuel II and the Union of Italy. Methuen. 1924 Napoleon and his Court. Methuen. 1925 Josephine, Napoleon’s Empress. Methuen. 1928 Louis XIV, King of France and Navarre. Methuen. 1929 Lord Nelson. John Lane. 1929 The Voyage of the Annie Marble. John Lane. 1930 The Annie Marble in Germany. John Lane. 1936 Marionettes at Home. Michael Joseph Ltd. 1953 The Adventures of John Wetherell. Doubleday & Company, Inc. 1953 The Barbary Pirates. Landmark Books, Random House. Published in the UK in 1956 by Macdonald & Co. 1957 The Naval War of 1812. Michael Joseph. Published in the US as The Age of Fighting Sail 1959 Hunting the Bismarck. Michael Joseph. Published in the US as The Last Nine Days of the Bismark and Sink the Bismarck Non-fiction short pieces "Calmness under Air Raids in Franco Territory". Western Mail, 28 April 1937 "Who Is Financing Franco?". Aberdeen Press & Journal, 5 May 1937 ”Sabotage". Sunday Graphic, 11 September 1938 "Saga of the Submarines". Falkirk Herald, 1 August 1945 "Hollywood Coincidence". Leicester Chronicle, 3 September 1955 Film adaptations In addition to providing the source material for numerous adaptations (not all of which are listed below), Forester was also credited as "adapted for the screen by" for Captain Horatio Hornblower. 
Payment Deferred (1932), based on a 1931 play which was in turn based on Forester's novel of the same name Brown on Resolution (1935), based on the novel of the same name Eagle Squadron (1942), story Commandos Strike at Dawn (1942), short story "The Commandos" Forever and a Day (1943), story Captain Horatio Hornblower (1951), based on the novels The Happy Return, A Ship of the Line and Flying Colours The African Queen (1951), the novel of the same name Sailor of the King (1953), the novel Brown on Resolution The Pride and the Passion (1957), the novel The Gun Sink the Bismarck! (1960), the novel The Last Nine Days of the Bismarck Hornblower (1998–2003 series of made-for-television movies), based on the novels Mr. Midshipman Hornblower, Lieutenant Hornblower and Hornblower and the Hotspur Greyhound (2020), the novel The Good Shepherd See also Honor Harrington – a fictional space captain and admiral in the Honorverse novels by David Weber, inspired by Horatio Hornblower (see dedication in On Basilisk Station) Patrick O'Brian – author of the Aubrey–Maturin series Dudley Pope – author of the Ramage series Richard Woodman – author of the Nathaniel Drinkwater series Douglas Reeman (writing as Alexander Kent) – The Bolitho novels References Further reading Sternlicht, Sanford V., C.S. Forester and the Hornblower saga (Syracuse University Press, 1999) Van der Kiste, John, C.S. Forester's Crime Noir: A view of the murder stories (KDP, 2018) External links C. S. Forester Collection at the Harry Ransom Center C. S. Forester Society, which publishes the e-journal Reflections C. S. Forester on You Bet Your Life in 1956 1899 births 1966 deaths 20th-century English novelists 20th-century English male writers 20th-century pseudonymous writers Alumni of King's College London English historical novelists English male novelists James Tait Black Memorial Prize recipients Nautical historical novelists People educated at Alleyn's School People educated at Dulwich College Writers about the Age of Sail Writers from London Writers of historical fiction set in the modern age
5770
https://en.wikipedia.org/wiki/List%20of%20country%20calling%20codes
List of country calling codes
Country calling codes, country dial-in codes, international subscriber dialing (ISD) codes, or most commonly, telephone country codes are telephone number prefixes for reaching telephone subscribers in foreign countries or areas via international telecommunication networks. Country codes are defined by the International Telecommunication Union (ITU) in ITU-T standards E.123 and E.164. The prefixes enable international direct dialing (IDD). Country codes constitute the international telephone numbering plan. They are used only when dialing a telephone number in a country or world region other than the caller's. Country codes are dialed before the national telephone number, but require at least one additional prefix, the international call prefix which is an exit code from the national numbering plan to the international one. In most countries, this prefix is 00, an ITU recommendation; it is 011 in the countries of the North American Numbering Plan while a minority of countries use other prefixes. Overview This table lists in its first column the initial digits of the country code shared by each country in each row, which is arranged in columns for the last digit. When three-digit codes share a common leading pair, the two-digit code is unassigned, being ambiguous (denoted by "ambig."). Unassigned codes are denoted by a dash (—). Countries are identified by ISO 3166-1 alpha-2 country codes; codes for non-geographic services are denoted by two asterisks (**). Ordered by world zone World zones are organized principally, but only approximately, by geographic location. Exceptions exist for political and historical alignments. Zone 1: North American Numbering Plan (NANP) NANP members are assigned three-digit numbering plan area (NPA) codes under the common country prefix 1, shown in the format 1 (NPA). 1 North American Numbering Plan 1 – , including United States territories: 1 (340) – 1 (670) – 1 (671) – 1 (684) – 1 (787, 939) – 1 – Caribbean nations, Dutch and British Overseas Territories: 1 (242) – 1 (246) – 1 (264) – 1 (268) – 1 (284) – 1 (345) – 1 (441) – 1 (473) – 1 (649) – 1 (658, 876) – 1 (664) – 1 (721) – 1 (758) – 1 (767) – 1 (784) – 1 (809, 829, 849) – 1 (868) – 1 (869) – Zone 2: Mostly Africa (but also Aruba, Faroe Islands, Greenland and British Indian Ocean Territory) 20 – 210 – unassigned 211 – 212 – (including Western Sahara) 213 – 214 – unassigned 215 – unassigned 216 – 217 – unassigned 218 – 219 – unassigned 220 – 221 – 222 – 223 – 224 – 225 – 226 – 227 – 228 – 229 – 230 – 231 – 232 – 233 – 234 – 235 – 236 – 237 – 238 – 239 – 240 – 241 – 242 – 243 – 244 – 245 – 246 – 247 – 248 – 249 – 250 – 251 – 252 – (including ) 253 – 254 – 255 – 255 (24) – , in place of never-implemented 259 256 – 257 – 258 – 259 – unassigned (was intended for People's Republic of Zanzibar but never implemented – see 255 Tanzania) 260 – 261 – 262 – 262 (269,639) – (formerly at 269 Comoros) 263 – 264 – (formerly 27 (6x) as South West Africa) 265 – 266 – 267 – 268 – 269 – (formerly assigned to Mayotte, now at 262) 27 – 28x – unassigned (reserved for country code expansion) 290 – 290 (8) – 291 – 292 – unassigned 293 – unassigned 294 – unassigned 295 – unassigned (formerly assigned to San Marino, now at 378) 296 – unassigned 297 – 298 – 299 – Zones 3–4: Europe Some of the larger countries were assigned two-digit codes to compensate for their usually longer domestic numbers. Small countries were assigned three-digit codes, which also has been the practice since the 1980s. 
30 – 31 – 32 – 33 – 34 – 350 – 351 – 351 (291) – (landlines only) 351 (292) – (landlines only, Horta, Azores area) 351 (295) – (landlines only, Angra do Heroísmo area) 351 (296) – (landlines only, Ponta Delgada and São Miguel Island area) 352 – 353 – 354 – 355 – 356 – 357 – (including ) 358 – 358 (18) – 359 – 36 – (formerly assigned to Turkey, now at 90) 37 – unassigned (formerly assigned to East Germany until its reunification with West Germany, now part of 49 Germany) 370 – (formerly 7/012 as Lithuanian SSR) 371 – (formerly 7/013 as Latvian SSR) 372 – (formerly 7/014 as Estonian SSR) 373 – (formerly 7/042 as Moldavian SSR) 374 – (formerly 7/885 as Armenian SSR) 374 (47) – (landlines, formerly 7/893) 374 (97) – (mobile phones) 375 – 376 – (formerly 33 628) 377 – (formerly 33 93) 378 – (interchangeably with 39 0549; earlier was allocated 295 but never used) 379 – (assigned but uses 39 06698). 38 – unassigned (formerly assigned to Yugoslavia until its break-up in 1991) 380 – 381 – 382 – 383 – 384 – unassigned 385 – 386 – 387 – 388 – unassigned (formerly assigned to the European Telephony Numbering Space) 389 – 39 – 39 (0549) – (interchangeably with 378) 39 (06 698) – (assigned 379 but not in use) 40 – 41 – 41 (91) – Campione d'Italia, an Italian enclave 42 – unassigned (formerly assigned to Czechoslovakia until its breakup in 1993) 420 – 421 – 422 – unassigned 423 – (formerly at 41 (75)) 424 – unassigned 425 – unassigned 426 – unassigned 427 – unassigned 428 – unassigned 429 – unassigned 43 – 44 – 44 (1481) – 44 (1534) – 44 (1624) – 45 – 46 – 47 – 47 (79) – 48 – 49 – Zone 5: South and Central Americas 500 – 500 – 501 – 502 – 503 – 504 – 505 – 506 – 507 – 508 – 509 – 51 – 52 – 53 – 54 – 55 – 56 – 57 – 58 – 590 – (including Saint Barthélemy, Saint Martin) 591 – 592 – 593 – 594 – 595 – 596 – (formerly assigned to Peru, now 51) 597 – 598 – 599 – Former , now grouped as follows: 599 3 – 599 4 – 599 5 – unassigned (formerly assigned to Sint Maarten, now included in NANP as 1 (721)) 599 7 – 599 8 – unassigned (formerly assigned to Aruba, now at 297) 599 9 – Zone 6: Southeast Asia and Oceania 60 – 61 – (see also 672 below) 61 (8 9162) – 61 (8 9164) – 62 – 63 – 64 – 64 – 65 – 66 – 670 – (formerly 62/39 during the Indonesian occupation; formerly assigned to Northern Mariana Islands, now part of NANP as 1 (670)) 671 – unassigned (formerly assigned to Guam, now part of NANP as 1 (671)) 672 – Australian External Territories (see also 61 Australia above); formerly assigned to Portuguese Timor (see 670) 672 (1x) – Australian Antarctic Territory 672 (3) – 673 – 674 – 675 – 676 – 677 – 678 – 679 – 680 – 681 – 682 – 683 – 684 – unassigned (formerly assigned to American Samoa, now part of NANP as 1 (684)) 685 – 686 – 687 – 688 – 689 – 690 – 691 – 692 – 693 – unassigned 694 – unassigned 695 – unassigned 696 – unassigned 697 – unassigned 698 – unassigned 699 – unassigned Zone 7: Russia and neighboring regions Formerly assigned to the Soviet Union until its dissolution in 1991. 
7 (1–5, 8, 9) – 7 (840, 940) – (interchangeably with 995 (44)) 7 (850, 929) – (interchangeably with 995 (34)) 7 (6, 7) – (assigned 997 but unused) Zone 8: East Asia and special services 800 – Universal International Freephone Service (UIFN) 801 – unassigned 802 – unassigned 803 – unassigned 804 – unassigned 805 – unassigned 806 – unassigned 807 – unassigned 808 – Universal International Shared Cost Numbers 809 – unassigned 81 – 82 – 83x – unassigned (reserved for country code expansion) 84 – 850 – 851 – unassigned 852 – 853 – 854 – unassigned 855 – 856 – 857 – unassigned (formerly assigned to ANAC satellite service) 858 – unassigned (formerly assigned to ANAC satellite service) 859 – unassigned 86 – 870 – Inmarsat 871 – unassigned (formerly assigned to Inmarsat Atlantic East, discontinued in 2008) 872 – unassigned (formerly assigned to Inmarsat Pacific, discontinued in 2008) 873 – unassigned (formerly assigned to Inmarsat Indian, discontinued in 2008) 874 – unassigned (formerly assigned to Inmarsat Atlantic West, discontinued in 2008) 875 – unassigned (reserved for future maritime mobile service) 876 – unassigned (reserved for future maritime mobile service) 877 – unassigned (reserved for future maritime mobile service) 878 – unassigned (formerly used for Universal Personal Telecommunications Service, discontinued in 2022) 879 – unassigned (reserved for national non-commercial purposes) 880 – 881 – Global Mobile Satellite System 882 – International Networks 883 – International Networks 884 – unassigned 885 – unassigned 886 – 887 – unassigned 888 – unassigned (formerly assigned to OCHA for Telecommunications for Disaster Relief service) 889 – unassigned 89x – unassigned (reserved for country code expansion) Zone 9: Mostly Middle East, West Asia, Central Asia, parts of South Asia and Eastern Europe 90 – 90 (392) – 91 – 92 – 92 (581) – 92 (582) – 93 – 94 – 95 – 960 – 961 – 962 – 963 – 964 – 965 – 966 – 967 – 968 – 969 – unassigned (formerly assigned to South Yemen until its unification with North Yemen, now part of 967 Yemen) 970 – 971 – 972 – 973 – 974 – 975 – 976 – 977 – 978 – unassigned (formerly assigned to Dubai, now part of 971 United Arab Emirates) 979 – Universal International Premium Rate Service (UIPRS); (formerly assigned to Abu Dhabi, now part of 971 United Arab Emirates) 98 – 990 – unassigned 991 – unassigned (formerly used for International Telecommunications Public Correspondence Service) 992 – 993 – 994 – 995 – 995 (34) – (interchangeably with 7 (850, 929)) 995 (44) – (interchangeably with 7 (840, 940)) 996 – 997 – (assigned but uses 7 (6xx, 7xx)) 998 – 999 – unassigned (reserved for future global service) Alphabetical order Locations with no country code In Antarctica, telecommunication services are provided by the parent country of each base: Other places with no country codes in use, although a code may be reserved: See also List of mobile telephone prefixes by country National conventions for writing telephone numbers References External links Communication-related lists International telecommunications Lists of country codes Telecommunications lists
5771
https://en.wikipedia.org/wiki/Christopher%20Marlowe
Christopher Marlowe
Christopher Marlowe, also known as Kit Marlowe (baptised 26 February 1564 – 30 May 1593), was an English playwright, poet and translator of the Elizabethan era. Marlowe is among the most famous of the Elizabethan playwrights. Based upon the "many imitations" of his play Tamburlaine, modern scholars consider him to have been the foremost dramatist in London in the years just before his mysterious early death. Some scholars also believe that he greatly influenced William Shakespeare, who was baptised in the same year as Marlowe and later succeeded him as the pre-eminent Elizabethan playwright. Marlowe was the first to achieve critical reputation for his use of blank verse, which became the standard for the era. His plays are distinguished by their overreaching protagonists. Themes in Marlowe's literary works have been described as humanistic, with realistic emotions, which some scholars find difficult to reconcile with Marlowe's "anti-intellectualism" and his catering to the prurient tastes of his Elizabethan audiences for generous displays of extreme physical violence, cruelty, and bloodshed. Events in Marlowe's life were sometimes as extreme as those found in his plays. Differing sensational reports of Marlowe's death in 1593 abounded after the event and are contested by scholars today owing to a lack of good documentation. There have been many conjectures as to the nature and reason for his death, including a vicious bar-room fight, blasphemous libel against the church, homosexual intrigue, betrayal by another playwright, and espionage from the highest level: the Privy Council of Elizabeth I. An official coroner's account of Marlowe's death was discovered only in 1925, and it did little to persuade all scholars that it told the whole story, nor did it eliminate the uncertainties present in his biography. Early life Christopher Marlowe, the second of nine children, and the oldest child after the death of his sister Mary in 1568, was born to Canterbury shoemaker John Marlowe and his wife Katherine, daughter of William Arthur of Dover. He was baptised at St George's Church, Canterbury, on 26 February 1564 (1563 in the old style dates in use at the time, which placed the new year on 25 March). Marlowe's birth was likely to have been a few days before, making him about two months older than William Shakespeare, who was baptised on 26 April 1564 in Stratford-upon-Avon. By age 14, Marlowe was a pupil at The King's School, Canterbury, on a scholarship, and two years later he became a student at Corpus Christi College, Cambridge, where he also studied on a scholarship, with the expectation that he would become an Anglican clergyman. Instead, he received his Bachelor of Arts degree in 1584. Marlowe mastered Latin during his schooling, reading and translating the works of Ovid. In 1587, the university hesitated to award his Master of Arts degree because of a rumour that he intended to go to the English seminary at Rheims in northern France, presumably to prepare for ordination as a Roman Catholic priest. If true, such an action on his part would have been a direct violation of a royal edict issued by Queen Elizabeth I in 1585 criminalising any attempt by an English citizen to be ordained in the Roman Catholic Church. Large-scale violence between Protestants and Catholics on the European continent has been cited by scholars as the impetus for the Protestant English Queen's defensive anti-Catholic laws issued from 1581 until her death in 1603. 
Despite the dire implications for Marlowe, his degree was awarded on schedule when the Privy Council intervened on his behalf, commending him for his "faithful dealing" and "good service" to the Queen. The nature of Marlowe's service was not specified by the council, but its letter to the Cambridge authorities has provoked much speculation by modern scholars, notably the theory that Marlowe was operating as a secret agent for Privy Council member Sir Francis Walsingham. The only surviving evidence of the Privy Council's correspondence is found in their minutes, the letter being lost. There is no mention of espionage in the minutes, but its summation of the lost Privy Council letter is vague in meaning, stating that "it was not Her Majesties pleasure" that persons employed as Marlowe had been "in matters touching the benefit of his country should be defamed by those who are ignorant in th'affaires he went about." Scholars agree the vague wording was typically used to protect government agents, but they continue to debate what the "matters touching the benefit of his country" actually were in Marlowe's case and how they affected the 23-year-old writer as he launched his literary career in 1587. Adult life and legend Little is known about Marlowe's adult life. All available evidence, other than what can be deduced from his literary works, is found in legal records and other official documents. Writers of fiction and non-fiction have speculated about his professional activities, private life, and character. Marlowe has been described as a spy, a brawler, and a heretic, as well as a "magician", "duellist", "tobacco-user", "counterfeiter" and "rakehell". While J. A. Downie and Constance Kuriyama have argued against the more lurid speculations, it is the usually circumspect J. B. Steane who remarked, "it seems absurd to dismiss all of these Elizabethan rumours and accusations as 'the Marlowe myth. Much has been written on his brief adult life, including speculation of: his involvement in royally-sanctioned espionage; his vocal declaration as an atheist; his (possibly same-sex) sexual interests; and the puzzling circumstances surrounding his death. Spying Marlowe is alleged to have been a government spy. Park Honan and Charles Nicholl speculate that this was the case and suggest that Marlowe's recruitment took place when he was at Cambridge. In 1587, when the Privy Council ordered the University of Cambridge to award Marlowe his degree as Master of Arts, it denied rumours that he intended to go to the English Catholic college in Rheims, saying instead that he had been engaged in unspecified "affaires" on "matters touching the benefit of his country". Surviving college records from the period also indicate that, in the academic year 1584–1585, Marlowe had had a series of unusually lengthy absences from the university which violated university regulations. Surviving college buttery accounts, which record student purchases for personal provisions, show that Marlowe began spending lavishly on food and drink during the periods he was in attendance; the amount was more than he could have afforded on his known scholarship income. It has been speculated that Marlowe was the "Morley" who was tutor to Arbella Stuart in 1589. This possibility was first raised in a Times Literary Supplement letter by E. 
St John Brooks in 1937; in a letter to Notes and Queries, John Baker has added that only Marlowe could have been Arbella's tutor owing to the absence of any other known "Morley" from the period with an MA and not otherwise occupied. If Marlowe was Arbella's tutor, it might indicate that he was there as a spy, since Arbella, niece of Mary, Queen of Scots, and cousin of James VI of Scotland, later James I of England, was at the time a strong candidate for the succession to Elizabeth's throne. Frederick S. Boas dismisses the possibility of this identification, based on surviving legal records which document Marlowe's "residence in London between September and December 1589". Marlowe had been party to a fatal quarrel involving his neighbours and the poet Thomas Watson in Norton Folgate and was held in Newgate Prison for a fortnight. In fact, the quarrel and his arrest occurred on 18 September, he was released on bail on 1 October and he had to attend court, where he was acquitted on 3 December, but there is no record of where he was for the intervening two months. In 1592 Marlowe was arrested in the English garrison town of Flushing (Vlissingen) in the Netherlands, for alleged involvement in the counterfeiting of coins, presumably related to the activities of seditious Catholics. He was sent to the Lord Treasurer (Burghley), but no charge or imprisonment resulted. This arrest may have disrupted another of Marlowe's spying missions, perhaps by giving the resulting coinage to the Catholic cause. He was to infiltrate the followers of the active Catholic plotter William Stanley and report back to Burghley. Philosophy Marlowe was reputed to be an atheist, which held the dangerous implication of being an enemy of God and the state, by association. With the rise of public fears concerning The School of Night, or "School of Atheism" in the late 16th century, accusations of atheism were closely associated with disloyalty to the Protestant monarchy of England. Some modern historians consider that Marlowe's professed atheism, as with his supposed Catholicism, may have been no more than a sham to further his work as a government spy. Contemporary evidence comes from Marlowe's accuser in Flushing, an informer called Richard Baines. The governor of Flushing had reported that each of the men had "of malice" accused the other of instigating the counterfeiting and of intending to go over to the Catholic "enemy"; such an action was considered atheistic by the Church of England. Following Marlowe's arrest in 1593, Baines submitted to the authorities a "note containing the opinion of one Christopher Marly concerning his damnable judgment of religion, and scorn of God's word". Baines attributes to Marlowe a total of eighteen items which "scoff at the pretensions of the Old and New Testament" such as, "Christ was a bastard and his mother dishonest [unchaste]", "the woman of Samaria and her sister were whores and that Christ knew them dishonestly", "St John the Evangelist was bedfellow to Christ and leaned always in his bosom" (cf. John 13:23–25) and "that he used him as the sinners of Sodom". He also implied that Marlowe had Catholic sympathies. Other passages are merely sceptical in tone: "he persuades men to atheism, willing them not to be afraid of bugbears and hobgoblins". 
The final paragraph of Baines's document reads: Similar examples of Marlowe's statements were given by Thomas Kyd after his imprisonment and possible torture (see above); Kyd and Baines connect Marlowe with mathematician Thomas Harriot's and Sir Walter Raleigh's circle. Another document claimed about that time that "one Marlowe is able to show more sound reasons for Atheism than any divine in England is able to give to prove divinity, and that ... he hath read the Atheist lecture to Sir Walter Raleigh and others". Some critics believe that Marlowe sought to disseminate these views in his work and that he identified with his rebellious and iconoclastic protagonists. Plays had to be approved by the Master of the Revels before they could be performed and the censorship of publications was under the control of the Archbishop of Canterbury. Presumably these authorities did not consider any of Marlowe's works to be unacceptable other than the Amores. Sexuality It has been claimed that Marlowe was homosexual. Some scholars argue that the identification of an Elizabethan as gay or homosexual in the modern sense is "anachronistic," claiming that for the Elizabethans the terms were more likely to have been applied to homoerotic affections or sexual acts rather than to what we currently understand as a settled sexual orientation or personal role identity. Other scholars argue that the evidence is inconclusive and that the reports of Marlowe's homosexuality may be rumours produced after his death. Richard Baines reported Marlowe as saying: "all they that love not Tobacco & Boies were fools". David Bevington and Eric C. Rasmussen describe Baines's evidence as "unreliable testimony" and "[t]hese and other testimonials need to be discounted for their exaggeration and for their having been produced under legal circumstances we would now regard as a witch-hunt". J. B. Steane considered there to be "no evidence for Marlowe's homosexuality at all". Other scholars point to the frequency with which Marlowe explores homosexual themes in his writing: in Hero and Leander, Marlowe writes of the male youth Leander: "in his looks were all that men desire..." Edward the Second contains the following passage enumerating homosexual relationships: Marlowe wrote the only play about the life of Edward II up to his time, taking the humanist literary discussion of male sexuality much further than his contemporaries. The play was extremely bold, dealing with a star-crossed love story between Edward II and Piers Gaveston. Though it was a common practice at the time to reveal characters as homosexual to give audiences reason to suspect them as culprits in a crime, Christopher Marlowe's Edward II is portrayed as a sympathetic character. The decision to start the play Dido, Queen of Carthage with a homoerotic scene between Jupiter and Ganymede that bears no connection to the subsequent plot has long puzzled scholars. Arrest and death In early May 1593, several bills were posted about London threatening the Protestant refugees from France and the Netherlands who had settled in the city. One of these, the "Dutch church libel", written in rhymed iambic pentameter, contained allusions to several of Marlowe's plays and was signed, "Tamburlaine". On 11 May the Privy Council ordered the arrest of those responsible for the libels. The next day, Marlowe's colleague Thomas Kyd was arrested, his lodgings were searched and a three-page fragment of a heretical tract was found. 
In a letter to Sir John Puckering, Kyd asserted that it had belonged to Marlowe, with whom he had been writing "in one chamber" some two years earlier. In a second letter, Kyd described Marlowe as blasphemous, disorderly, holding treasonous opinions, being an irreligious reprobate and "intemperate & of a cruel hart". They had both been working for an aristocratic patron, probably Ferdinando Stanley, Lord Strange. A warrant for Marlowe's arrest was issued on 18 May, when the Privy Council apparently knew that he might be found staying with Thomas Walsingham, whose father was a first cousin of the late Sir Francis Walsingham, Elizabeth's principal secretary in the 1580s and a man more deeply involved in state espionage than any other member of the Privy Council. Marlowe duly presented himself on 20 May but there apparently being no Privy Council meeting on that day, was instructed to "give his daily attendance on their Lordships, until he shall be licensed to the contrary". On Wednesday, 30 May, Marlowe was killed. Various accounts of Marlowe's death were current over the next few years. In his Palladis Tamia, published in 1598, Francis Meres says Marlowe was "stabbed to death by a bawdy serving-man, a rival of his in his lewd love" as punishment for his "epicurism and atheism". In 1917, in the Dictionary of National Biography, Sir Sidney Lee wrote, on slender evidence, that Marlowe was killed in a drunken fight. His claim was not much at variance with the official account, which came to light only in 1925, when the scholar Leslie Hotson discovered the coroner's report of the inquest on Marlowe's death, held two days later on Friday 1 June 1593, by the Coroner of the Queen's Household, William Danby. Marlowe had spent all day in a house in Deptford, owned by the widow Eleanor Bull, with three men: Ingram Frizer, Nicholas Skeres and Robert Poley. All three had been employed by one or other of the Walsinghams. Skeres and Poley had helped snare the conspirators in the Babington plot and Frizer was a servant to Thomas Walsingham probably in the role of a financial or business agent, as he was for Walsingham's wife Audrey a few years later. These witnesses testified that Frizer and Marlowe had argued over payment of the bill (now famously known as the 'Reckoning') exchanging "divers malicious words" while Frizer was sitting at a table between the other two and Marlowe was lying behind him on a couch. Marlowe snatched Frizer's dagger and wounded him on the head. In the ensuing struggle, according to the coroner's report, Marlowe was stabbed above the right eye, killing him instantly. The jury concluded that Frizer acted in self-defence and within a month he was pardoned. Marlowe was buried in an unmarked grave in the churchyard of St. Nicholas, Deptford immediately after the inquest, on 1 June 1593. The complete text of the inquest report was published by Leslie Hotson in his book, The Death of Christopher Marlowe, in the introduction to which Prof. George Kittredge said, "The mystery of Marlowe's death, heretofore involved in a cloud of contradictory gossip and irresponsible guess-work, is now cleared up for good and all on the authority of public records of complete authenticity and gratifying fullness" but this confidence proved fairly short-lived. Hotson had considered the possibility that the witnesses had "concocted a lying account of Marlowe's behaviour, to which they swore at the inquest, and with which they deceived the jury" but came down against that scenario. 
Others began to suspect that this scenario was indeed the case. Writing to the Times Literary Supplement shortly after the book's publication, Eugénie de Kalb disputed that the struggle and outcome as described were even possible and Samuel A. Tannenbaum insisted the following year that such a wound could not have possibly resulted in instant death, as had been claimed. Even Marlowe's biographer John Bakeless acknowledged that "some scholars have been inclined to question the truthfulness of the coroner's report. There is something queer about the whole episode" and said that Hotson's discovery "raises almost as many questions as it answers". It has also been discovered more recently that the apparent absence of a local county coroner to accompany the Coroner of the Queen's Household would, if noticed, have made the inquest null and void. One of the main reasons for doubting the truth of the inquest concerns the reliability of Marlowe's companions as witnesses. As an agent provocateur for the late Sir Francis Walsingham, Robert Poley was a consummate liar, the "very genius of the Elizabethan underworld" and is on record as saying "I will swear and forswear myself, rather than I will accuse myself to do me any harm". The other witness, Nicholas Skeres, had for many years acted as a confidence trickster, drawing young men into the clutches of people in the money-lending racket, including Marlowe's apparent killer, Ingram Frizer, with whom he was engaged in such a swindle. Despite their being referred to as generosi (gentlemen) in the inquest report, the witnesses were professional liars. Some biographers, such as Kuriyama and Downie, take the inquest to be a true account of what occurred, but in trying to explain what really happened if the account was not true, others have come up with a variety of murder theories: Jealous of her husband Thomas's relationship with Marlowe, Audrey Walsingham arranged for the playwright to be murdered. Sir Walter Raleigh arranged the murder, fearing that under torture Marlowe might incriminate him. With Skeres the main player, the murder resulted from attempts by the Earl of Essex to use Marlowe to incriminate Sir Walter Raleigh. He was killed on the orders of father and son Lord Burghley and Sir Robert Cecil, who thought that his plays contained Catholic propaganda. He was accidentally killed while Frizer and Skeres were pressuring him to pay back money he owed them. Marlowe was murdered at the behest of several members of the Privy Council, who feared that he might reveal them to be atheists. The Queen ordered his assassination because of his subversive atheistic behaviour. Frizer murdered him because he envied Marlowe's close relationship with his master Thomas Walsingham and feared the effect that Marlowe's behaviour might have on Walsingham's reputation. Marlowe's death was faked to save him from trial and execution for subversive atheism. Since there are only written documents on which to base any conclusions and since it is probable that the most crucial information about his death was never committed to paper, it is unlikely that the full circumstances of Marlowe's death will ever be known. Reputation among contemporary writers For his contemporaries in the literary world, Marlowe was above all an admired and influential artist. 
Within weeks of his death, George Peele remembered him as "Marley, the Muses' darling"; Michael Drayton noted that he "Had in him those brave translunary things / That the first poets had" and Ben Jonson even wrote of "Marlowe's mighty line". Thomas Nashe wrote warmly of his friend, "poor deceased Kit Marlowe," as did the publisher Edward Blount in his dedication of Hero and Leander to Sir Thomas Walsingham. Among the few contemporary dramatists to say anything negative about Marlowe was the anonymous author of the Cambridge University play The Return from Parnassus (1598) who wrote, "Pity it is that wit so ill should dwell, / Wit lent from heaven, but vices sent from hell". The most famous tribute to Marlowe was paid by Shakespeare in As You Like It, where he not only quotes a line from Hero and Leander ("Dead Shepherd, now I find thy saw of might, 'Who ever lov'd that lov'd not at first sight?'") but also gives to the clown Touchstone the words "When a man's verses cannot be understood, nor a man's good wit seconded with the forward child, understanding, it strikes a man more dead than a great reckoning in a little room." This appears to be a reference to Marlowe's murder, which involved a fight over the "reckoning," the bill, as well as to a line in Marlowe's Jew of Malta, "Infinite riches in a little room." Shakespeare was much influenced by Marlowe in his work, as can be seen in the use of Marlovian themes in Antony and Cleopatra, The Merchant of Venice, Richard II and Macbeth (Dido, Jew of Malta, Edward II and Doctor Faustus, respectively). In Hamlet, after meeting with the travelling actors, Hamlet requests the Player perform a speech about the Trojan War, which at 2.2.429–432 has an echo of Marlowe's Dido, Queen of Carthage. In Love's Labour's Lost Shakespeare brings on a character "Marcade" (three syllables) in conscious acknowledgement of Marlowe's character "Mercury", also attending the King of Navarre, in Massacre at Paris. The significance, to those of Shakespeare's audience who were familiar with Hero and Leander, was Marlowe's identification of himself with the god Mercury. Shakespeare authorship theory It has been argued that Marlowe faked his death and then continued to write under the assumed name of William Shakespeare. Academic consensus rejects alternative candidates for authorship of Shakespeare's plays and sonnets, including Marlowe. Literary career Plays Six dramas have been attributed to Christopher Marlowe, either alone or in collaboration with other writers, with varying degrees of evidence. The writing sequence or chronology of these plays is mostly unknown and is offered here with any dates and evidence known. From the little information available, Dido is believed to be the first of Marlowe's plays to be performed, while Tamburlaine was the first to be staged on a regular commercial stage in London, in 1587. Believed by many scholars to be Marlowe's greatest success, Tamburlaine was among the first English plays written in blank verse and, with Thomas Kyd's The Spanish Tragedy, is generally considered the beginning of the mature phase of the Elizabethan theatre. The play Lust's Dominion was attributed to Marlowe upon its initial publication in 1657, though scholars and critics have almost unanimously rejected the attribution. He may also have written or co-written Arden of Faversham.
Poetry and translations Publication and responses to the poetry and translations credited to Marlowe primarily occurred posthumously, including: Amores, first book of Latin elegiac couplets by Ovid with translation by Marlowe (c. 1580s); copies publicly burned as offensive in 1599. The Passionate Shepherd to His Love, by Marlowe (c. 1587–1588); a popular lyric of the time. Hero and Leander, by Marlowe (c. 1593, unfinished; completed by George Chapman, 1598; printed 1598). Pharsalia, Book One, by Lucan with translation by Marlowe (c. 1593; printed 1600). Collaborations Modern scholars still look for evidence of collaborations between Marlowe and other writers. In 2016, one publisher was the first to endorse the scholarly claim of a collaboration between Marlowe and the playwright William Shakespeare: Henry VI by William Shakespeare is now credited as a collaboration with Marlowe in the New Oxford Shakespeare series, published in 2016. Marlowe appears as co-author of the three Henry VI plays, though some scholars doubt any actual collaboration. Contemporary reception Marlowe's plays were enormously successful, possibly because of the imposing stage presence of his lead actor, Edward Alleyn. Alleyn was unusually tall for the time and the haughty roles of Tamburlaine, Faustus and Barabas were probably written for him. Marlowe's plays were the foundation of the repertoire of Alleyn's company, the Admiral's Men, throughout the 1590s. One of Marlowe's poetry translations did not fare as well. In 1599, Marlowe's translation of Ovid was banned and copies were publicly burned as part of Archbishop Whitgift's crackdown on offensive material. Chronology of dramatic works (Patrick Cheney's 2004 Cambridge Companion to Christopher Marlowe presents an alternative timeline based upon printing dates.) Dido, Queen of Carthage (by 1587) First official record 1594 First published 1594; posthumously First recorded performance between 1587 and 1593 by the Children of the Chapel, a company of boy actors in London. Significance This play is believed by many scholars to be the first play by Christopher Marlowe to be performed. Attribution The title page attributes the play to Marlowe and Thomas Nashe, yet some scholars question how much of a contribution Nashe made to the play. Evidence No manuscripts by Marlowe exist for this play. Tamburlaine, Part I (by 1587); Part II (by 1588) First official record 1587, Part I First published 1590, Parts I and II in one octavo, London. No author named. First recorded performance 1587, Part I, by the Admiral's Men, London. Significance Tamburlaine is among the earliest examples of blank verse in the dramatic literature of the Early Modern English theatre, and it established the form on the popular stage. Attribution Author name is missing from first printing in 1590. Attribution of this work by scholars to Marlowe is based upon comparison to his other verified works. Passages and character development in Tamburlaine are similar to many other Marlowe works. Evidence No manuscripts by Marlowe exist for this play. Parts I and II were entered into the Stationers' Register on 14 August 1590. The two parts were published together by the London printer, Richard Jones, in 1590; a second edition in 1592, and a third in 1597. The 1597 edition of the two parts was later republished separately in quarto by Edward White: Part I in 1605 and Part II in 1606. The Jew of Malta (by 1590) First official record 1592 First published 1592; earliest extant edition, 1633 First recorded performance 26 February 1592, by Lord Strange's acting company.
Significance The performances of the play were a success and it remained popular for the next fifty years. This play helps to establish the strong theme of "anti-authoritarianism" that is found throughout Marlowe's works. Evidence No manuscripts by Marlowe exist for this play. The play was entered in the Stationers' Register on 17 May 1594 but the earliest surviving printed edition is from 1633. Doctor Faustus (by 1592) First official record 1594–1597 First published 1601, no extant copy; first extant copy, 1604 (A text) quarto; 1616 (B text) quarto. First recorded performance 1594–1597; 24 revival performances occurred between these years by the Lord Admiral's Company, Rose Theatre, London; earlier performances probably occurred around 1589 by the same company. Significance This is the first dramatised version of the Faust legend of a scholar's dealing with the devil. Marlowe deviates from earlier versions of "The Devil's Pact" significantly: Marlowe's protagonist is unable to "burn his books" or repent to a merciful God to have his contract annulled at the end of the play; he is carried off by demons; and, in the 1616 quarto, his mangled corpse is found by the scholar characters. Attribution The 'B text' was highly edited and censored, owing in part to the shifting theatre laws regarding religious words onstage during the seventeenth century. Because it contains several additional scenes believed to be the additions of other playwrights, particularly Samuel Rowley and William Bird (alias Borne), a recent edition attributes the authorship of both versions to "Christopher Marlowe and his collaborator and revisers." This recent edition has tried to establish that the 'A text' was assembled from the work of Marlowe and another writer, with the 'B text' as a later revision. Evidence No manuscripts by Marlowe exist for this play. The two earliest-printed extant versions of the play, A and B, form a textual problem for scholars. Both were published after Marlowe's death and scholars disagree which text is more representative of Marlowe's original. Some editions are based on a combination of the two texts. Late-twentieth-century scholarly consensus identifies 'A text' as more representative because it contains irregular character names and idiosyncratic spelling, which are believed to reflect the author's handwritten manuscript or "foul papers". In comparison, 'B text' is highly edited with several additional scenes possibly written by other playwrights. Edward the Second (by 1592) First official record 1593 First published 1594; earliest extant edition 1594 octavo First recorded performance 1592, performed by the Earl of Pembroke's Men. Significance Considered by recent scholars as Marlowe's "most modern play" because of its probing treatment of the private life of a king and unflattering depiction of the power politics of the time. The 1594 editions of Edward II and of Dido are the first published plays with Marlowe's name appearing as the author. Attribution Earliest extant edition of 1594. Evidence The play was entered into the Stationers' Register on 6 July 1593, five weeks after Marlowe's death. The Massacre at Paris (by 1593) First official record c. 1593, an alleged foul sheet by Marlowe of "Scene 19"; although authorship by Marlowe is contested by recent scholars, the manuscript is believed written while the play was first performed and with an unknown purpose.
First published in an undated octavo, London; while this is the most complete surviving text, it is nearly half the length of Marlowe's other works and possibly a reconstruction. The printer and publisher credit, "E.A. for Edward White," also appears on the 1605/06 printing of Marlowe's Tamburlaine. First recorded performance 26 Jan 1593, by Lord Strange's Men, at Henslowe's Rose Theatre, London, under the title The Tragedy of the Guise; 1594, in the repertory of the Admiral's Men. Significance The Massacre at Paris is considered Marlowe's most dangerous play, as agitators in London seized on its theme to advocate the murders of refugees from the Low Countries of the Spanish Netherlands, and it warns Elizabeth I of this possibility in its last scene. It features the silent "English Agent", whom tradition has identified with Marlowe and his connexions to the secret service. It was the highest-grossing play for Lord Strange's Men in 1593. Attribution A 1593 loose manuscript sheet of the play, called a foul sheet, is alleged to be by Marlowe and has been claimed by some scholars as the only extant play manuscript by the author. It could also provide an approximate date of composition for the play. When compared with the extant printed text and his other work, other scholars reject the attribution to Marlowe. The only surviving printed text of this play is possibly a reconstruction from memory of Marlowe's original performance text. Current scholarship notes that there are only 1,147 lines in the play, about half the length of a typical play of the 1590s. Other evidence that the extant published text may not be Marlowe's original is the uneven style throughout, with two-dimensional characterisations, deteriorating verbal quality and repetitions of content. Evidence Never appeared in the Stationers' Register. Memorials The Muse of Poetry, a bronze sculpture by Edward Onslow Ford, references Marlowe and his work. It was erected on Buttermarket, Canterbury in 1891, and now stands outside the Marlowe Theatre in the city. In July 2002, a memorial window to Marlowe was unveiled by the Marlowe Society at Poets' Corner in Westminster Abbey. Controversially, a question mark was added to his generally accepted date of death. On 25 October 2011 a letter from Paul Edmondson and Stanley Wells was published by The Times newspaper, in which they called on the Dean and Chapter to remove the question mark on the grounds that it "flew in the face of a mass of unimpugnable evidence". In 2012, they renewed this call in their e-book Shakespeare Bites Back, adding that it "denies history", and again the following year in their book Shakespeare Beyond Doubt. The Marlowe Theatre in Canterbury, Kent, UK, was named for Marlowe in 1949. Marlowe in fiction Marlowe has been used as a character in books, theatre, film, television, games and radio. Modern compendia Modern scholarly collected works of Marlowe include: The Complete Works of Christopher Marlowe (edited by Roma Gill in 1986; Clarendon Press published in partnership with Oxford University Press) The Complete Plays of Christopher Marlowe (edited by J. B. Steane in 1969; edited by Frank Romany and Robert Lindsey, Revised Edition, 2004, Penguin) Works of Marlowe in performance Radio BBC Radio broadcast adaptations of Marlowe's six plays from May to October 1993. Royal Shakespeare Company Dido, Queen of Carthage, directed by Kimberly Sykes, with Chipo Chung as Dido. Swan Theatre, 2017.
Tamburlaine the Great, directed by Terry Hands, with Antony Sher as Tamburlaine. Swan Theatre, 1992; Barbican Theatre, 1993. Tamburlaine the Great, directed by Michael Boyd, with Jude Owusu as Tamburlaine. Swan Theatre, 2018. The Jew of Malta, directed by Barry Kyle, with Jasper Britton as Barabas. Swan Theatre, 1987; People's Theatre, and Barbican Theatre, 1988. The Jew of Malta, directed by Justin Audibert, with Jasper Britton as Barabas. Swan Theatre, 2015. Edward II, directed by Gerard Murphy, with Simon Russell Beale as Edward. Swan Theatre, 1990. Doctor Faustus, directed by John Barton, with Ian McKellen as Faustus. Nottingham Playhouse and Aldwych Theatre, 1974, and Royal Shakespeare Theatre, 1975. Doctor Faustus, directed by Barry Kyle, with Gerard Murphy as Faustus. Swan Theatre and Pit Theatre, 1989. Doctor Faustus, directed by Maria Aberg, with Sandy Grierson and Oliver Ryan sharing the roles of Faustus and Mephistophilis. Swan Theatre and Barbican Theatre, 2016. Royal National Theatre Tamburlaine, directed by Peter Hall, with Albert Finney as Tamburlaine. Olivier Theatre, 1976. Dido, Queen of Carthage, directed by James McDonald with Anastasia Hille as Dido. Cottesloe Theatre, 2009. Edward II, directed by Joe Hill-Gibbins, with John Heffernan as Edward. Olivier Theatre, 2013. Shakespeare's Globe Dido, Queen of Carthage, directed by Tim Carroll, with Rakie Ayola as Dido, 2003. Edward II, directed by Timothy Walker, with Liam Brennan as Edward, 2003. Malthouse Theatre The Marlowe Sessions Dido, Queen of Carthage, Directed/Produced by Ray Mia, Performance direction by Stephen Unwin, with Thalissa Teixeira as Dido, 2022. Tamburlaine The Great, Part 1, Directed/Produced by Ray Mia, Performance direction by Phillip Breen, with Alan Cox as Tamburlaine, 2022. The Jew Of Malta, Directed/Produced by Ray Mia, Performance direction by Stephen Unwin, with Adrian Schiller as Barabas, 2022. Tamburlaine The Great, Part 2, Directed/Produced by Ray Mia, Performance direction by Phillip Breen, with Alan Cox as Tamburlaine, 2022. Edward The Second, Directed/Produced by Ray Mia, Performance direction by Abigail Rokison, with Jack Holden as Edward II, 2022. The Massacre At Paris, Directed/Produced by Ray Mia, Performance direction by Abigail Rokison, with Michael Maloney as Guise, 2022. Dr Faustus, Directed/Produced by Ray Mia, Performance direction by Phillip Breen, with Dominic West as Faustus and Talulah Riley as Mephistopheles, 2022. The Poetry of Christopher Marlowe, Directed/Produced by Ray Mia, Performance direction by Philip Bird, read by Jack Holden, Fisayo Akinade and Philip Bird, 2022. Other stage Tamburlaine. Yale University, 1919. Tamburlaine, directed by Tyrone Guthrie, with Donald Wolfit as Tamburlaine. The Old Vic, 1951. Doctor Faustus, co-directed by Orson Welles and John Houseman, with Welles as Faustus and Jack Carter as Mephistopheles. Maxine Elliott's Theatre, 1937. Doctor Faustus, directed by Adrian Noble. Royal Exchange, 1981. Edward II, directed by Toby Robertson, with John Barton as Edward. Cambridge, 1951. Edward II, directed by Toby Robertson, with Derek Jacobi as Edward. Cambridge, 1958. Edward II, directed by Toby Robertson, with Ian McKellen as Edward. Assembly Rooms, 1969. Edward II, directed by Jim Stone, Washington Stage Company, 1993; Edward II, directed by Jozsef Ruszt. Budapest, 1998; Edward II, directed by Michael Grandage, with Joseph Fiennes as Edward. Crucible Theatre, 2001.
The Massacre at Paris, directed by Patrice Chéreau. France, 1972. Stage adaptations Edward II, Phoenix Society, London, 1923. Leben Eduards des Zweiten von England, by Bertolt Brecht (the first play he directed). Munich Chamber Theatre, Germany, 1924. The Life of Edward II of England, by Marlowe and Bertolt Brecht, directed by Frank Dunlop. National Theatre, 1968. Edward II, adapted as a ballet, choreographed by David Bintley. Stuttgart Ballet, 1995. Doctor Faustus, additional text by Colin Teevan, directed by Jamie Lloyd, with Kit Harington as Faustus. Duke of York's Theatre, 2016. Faustus, That Damned Woman by Chris Bush, directed by Caroline Byrne. Lyric Theatre, 2020. Film Doctor Faustus, based on Nevill Coghill's 1965 production, adapted for Richard Burton and Elizabeth Taylor, 1967. Edward II, directed by Derek Jarman, 1991. Faust, with some Marlowe dialogue, directed by Jan Švankmajer, 1994. Further reading Bevington, David, and Eric Rasmussen, eds. Doctor Faustus and Other Plays. Oxford English Drama. Oxford University Press, 1998. Conrad, B. Der wahre Shakespeare: Christopher Marlowe. (German non-fiction book) 5th Edition, 2016. Cornelius, R. M. Christopher Marlowe's Use of the Bible. New York: P. Lang, 1984. Marlowe, Christopher. Complete Works. Vol. 3: Edward II, ed. R. Rowland. Oxford: Clarendon Press, 1994. (pp. xxii–xxiii) Oz, Avraham, ed. Marlowe. New Casebooks. Houndmills, Basingstoke and London: Palgrave/Macmillan, 2003. Parker, John. The Aesthetics of Antichrist: From Christian Drama to Christopher Marlowe. Ithaca: Cornell University Press, 2007. Shepard, Alan. Marlowe's Soldiers: Rhetorics of Masculinity in the Age of the Armada. Ashgate, 2002. Sims, James H. Dramatic Uses of Biblical Allusions in Marlowe and Shakespeare. Gainesville: University of Florida Press, 1966. Wraight, A. D.; Stern, Virginia F. In Search of Christopher Marlowe: A Pictorial Biography. London: Macdonald, 1965. External links The Marlowe Society The works of Marlowe at Perseus Project The complete works, with modernised spelling, on Peter Farey's Marlowe page. BBC audio file: In Our Time, a Radio 4 discussion programme on Marlowe and his work. The Marlowe Bibliography Online is an initiative of the Marlowe Society of America and the University of Melbourne. Its purpose is to facilitate scholarship on the works of Christopher Marlowe by providing a searchable annotated bibliography of relevant scholarship.
5772
https://en.wikipedia.org/wiki/Cricket%20%28disambiguation%29
Cricket (disambiguation)
Cricket is a bat-and-ball sport contested by two teams. Cricket also commonly refers to: Cricket (insect) Cricket(s) or The Cricket(s) may also refer to: Film and television The Cricket (1917 film), a silent American drama film The Cricket (1980 film), an erotic drama film Crickets (film), a 2006 Japanese drama film Christine Blair or Cricket, a character in The Young and the Restless Cricket, a character in To Have and Have Not Matthew "Rickety Cricket" Mara, a character in It's Always Sunny in Philadelphia Cricket, a character in Big City Greens "Cricket", 5th episode of Servant (TV Series) Literature Cricket (magazine), an American literary magazine for children The Cricket (magazine), a 1960s music magazine "Chrząszcz" or "Cricket", a poem by Jan Brzechwa Cricket, a character in Fire on the Mountain Music The Crickets, a rock and roll band formed by Buddy Holly Cricket (musical), a musical by Andrew Lloyd Webber and Tim Rice Crickets (album), by Joe Nichols, 2013 Cricket (producer), Kosovo-Albanian record producer Crickets, a video album by Dredg released alongside their 2002 album El Cielo "Crickets", a song by Drop City Yacht Club "Cricket", a song by the Kinks from Preservation Act 1 Vehicles Cricket (1914 automobile), an early American automobile Plymouth Cricket (disambiguation), two automobiles Cricket-class coastal destroyer, a 1906 class of Royal Navy ships HMS Cricket (1915), an Insect-class gunboat HMS Cricket (shore establishment), Hampshire, 1943-1946 Other uses Cricket (darts), a game using the standard 20-number dartboard Cricket (roofing), a ridge structure designed to divert water on a roof Cricket (series), a series of cricket video games Cricket (warning sound), an audible warning in the cockpits of commercial aircraft Cricket dolls, a talking doll released by Playmates Toys in 1986 Cricket, North Carolina Cricket Wireless, wireless service provider, a subsidiary of AT&T Inc. Programmable Cricket, robotic toys Clicker or cricket, a noisemaker Cricket, a variation of the float breakdancing technique Cricket, a data collection software on top of RRDtool See also Colomban Cri-cri, a light plane Cricket House and Cricket Park, in Cricket St Thomas, England HMS Cricket, a list of Royal Navy ships Shturcite, a Bulgarian band, translated as The Crickets, or an album by them Tettigoniidae, known as katydids or bush-crickets Cricut, a cutting machine for home crafters
5776
https://en.wikipedia.org/wiki/Caving
Caving
Caving – also known as spelunking in the United States and Canada and potholing in the United Kingdom and Ireland – is the recreational pastime of exploring wild cave systems (as distinguished from show caves). In contrast, speleology is the scientific study of caves and the cave environment. The challenges involved in caving vary according to the cave being visited; in addition to the total absence of light beyond the entrance, negotiating pitches, squeezes, and water hazards can be difficult. Cave diving is a distinct, and more hazardous, sub-speciality undertaken by a small minority of technically proficient cavers. In an area of overlap between recreational pursuit and scientific study, the most devoted and serious-minded cavers become accomplished at the surveying and mapping of caves and the formal publication of their efforts. These are usually published freely and publicly, especially in the UK and other European countries, although in the US, these are generally private. Sometimes categorized as an "extreme sport", it is not commonly considered as such by longtime enthusiasts, who may dislike the term for its connotation of disregard for safety. Many caving skills overlap with those involved in canyoning and mine and urban exploration. Motivation Caving is often undertaken for the enjoyment of the outdoor activity or for physical exercise, as well as original exploration, similar to mountaineering or diving. Physical or biological science is also an important goal for some cavers, while others are engaged in cave photography. Virgin cave systems comprise some of the last unexplored regions on Earth and much effort is put into trying to locate, enter and survey them. In well-explored regions (such as most developed nations), the most accessible caves have already been explored, and gaining access to new caves often requires cave digging or cave diving. One old technique used by hill people in the United States to find caves worth exploring was to yell into a hole, whatever its size, and listen for an echo. If there was none, the hole was just a hole. If there was an echo, the size of the cave could be judged from the length and strength of the echoes. This method is simple, cheap, and effective, and the explorer could then enlarge the hole to make an entrance. Meriwether Lewis, of the Lewis and Clark Expedition, used the yelling technique to find caves in Kentucky when he was a boy. Since caves were dark and flashlights had not been invented, Lewis and other explorers made torches out of knots of pine-tree branches. Such torches burned a long time and cast a bright light. Caving, in certain areas, has also been utilized as a form of eco- and adventure tourism, for example in New Zealand. Tour companies have established an industry of leading and guiding tours into and through caves. Depending on the type of cave and the type of tour, the experience could be adventure-based or ecologically based. There are tours led through lava tubes by guiding services (for example at Lava River Cave and on the oceanic islands of Tenerife, Iceland and Hawaii). Caving has also been described as an "individualist's team sport" by some, as cavers can often make a trip without direct physical assistance from others but will generally go in a group for companionship or to provide emergency help if needed. Some, however, consider the assistance cavers give each other to be typical of a team sport.
Etymology The term potholing refers to the act of exploring potholes, a term originating in the north of England for predominantly vertical caves. Clay Perry, an American caver of the 1940s, wrote about a group of men and boys who explored and studied caves throughout New England. This group referred to themselves as spelunkers, a term derived from the Latin spēlunca ("cave, cavern, den"), itself from the Greek spēlynks ("cave"). This is regarded as the first use of the word in the Americas. Throughout the 1950s, spelunking was the general term used for exploring caves in US English. It was used freely, without any positive or negative connotations, although only rarely outside the US. In the 1960s, the terms spelunking and spelunker began to be considered déclassé among experienced enthusiasts. In 1985, Steve Knutson – editor of the National Speleological Society (NSS) publication American Caving Accidents – drew a distinction between the two terms, reserving "caver" for those trained in current exploration techniques and "spelunker" for the untrained. This sentiment is exemplified by bumper stickers and T-shirts displayed by some cavers: "Cavers rescue spelunkers". Nevertheless, outside the caving community, "spelunking" and "spelunkers" predominantly remain neutral terms referring to the practice and practitioners, without any respect to skill level. History In the mid-nineteenth century, John Birkbeck explored potholes in England, notably Gaping Gill in 1842 and Alum Pot in 1847–8, returning there in the 1870s. In the mid-1880s, Herbert E. Balch began exploring Wookey Hole Caves and in the 1890s, Balch was introduced to the caves of the Mendip Hills. One of the oldest established caving clubs, Yorkshire Ramblers' Club, was founded in 1892. Caving as a specialized pursuit was pioneered by Édouard-Alfred Martel (1859–1938), who first achieved the descent and exploration of the Gouffre de Padirac, in France, as early as 1889 and the first complete descent of a 110-metre wet vertical shaft at Gaping Gill in 1895. He developed his own techniques based on ropes and metallic ladders. Martel visited Kentucky and notably Mammoth Cave National Park in October 1912. In the 1920s famous US caver Floyd Collins made important explorations in the area and in the 1930s, as caving became increasingly popular, small exploration teams both in the Alps and in the karstic high plateaus of southwest France (Causses and Pyrenees) transformed cave exploration into both a scientific and recreational activity. Robert de Joly, Guy de Lavaur and Norbert Casteret were prominent figures of that time, surveying mostly caves in Southwest France. During World War II, an alpine team composed of Pierre Chevalier, Fernand Petzl, Charles Petit-Didier and others explored the Dent de Crolles cave system near Grenoble, which became the deepest explored system in the world (−658 m) at that time. The lack of available equipment during the war forced Pierre Chevalier and the rest of the team to develop their own equipment, leading to technical innovation. The scaling-pole (1940), nylon ropes (1942), use of explosives in caves (1947) and mechanical rope-ascenders (Henri Brenot's "monkeys", first used by Chevalier and Brenot in a cave in 1934) can be directly associated with the exploration of the Dent de Crolles cave system. In 1941, American cavers organized themselves into the National Speleological Society (NSS) to advance the exploration, conservation, study and understanding of caves in the United States. American caver Bill Cuddington, known as "Vertical Bill", further developed the single-rope technique (SRT) in the late 1950s.
In 1958, two Swiss alpinists, Juesi and Marti teamed together, creating the first rope ascender known as the Jumar. In 1968 Bruno Dressler asked Fernand Petzl, who worked as a metals machinist, to build a rope-ascending tool, today known as the Petzl Croll, that he had developed by adapting the Jumar to vertical caving. Pursuing these developments, Petzl started in the 1970s a caving equipment manufacturing company named Petzl. The development of the rappel rack and the evolution of mechanical ascension systems extended the practice and safety of vertical exploration to a wider range of cavers. Practice and equipment Hard hats are worn to protect the head from bumps and falling rocks. The caver's primary light source is usually mounted on the helmet in order to keep the hands free. Electric LED lights are most common. Many cavers carry two or more sources of light – one as primary and the others as backup in case the first fails. More often than not, a second light will be mounted to the helmet for quick transition if the primary fails. Carbide lamp systems are an older form of illumination, inspired by miner's equipment, and are still used by some cavers, particularly on remote expeditions where electric charging facilities are not available. The type of clothes worn underground varies according to the environment of the cave being explored, and the local culture. In cold caves, the caver may wear a warm base layer that retains its insulating properties when wet, such as a fleece ("furry") suit or polypropylene underwear, and an oversuit of hard-wearing (e.g., cordura) or waterproof (e.g., PVC) material. Lighter clothing may be worn in warm caves, particularly if the cave is dry, and in tropical caves thin polypropylene clothing is used, to provide some abrasion protection while remaining as cool as possible. Wetsuits may be worn if the cave is particularly wet or involves stream passages. On the feet boots are worn – hiking-style boots in drier caves, or rubber boots (such as wellies) often with neoprene socks ("wetsocks") in wetter caves. Knee-pads (and sometimes elbow-pads) are popular for protecting joints during crawls. Depending on the nature of the cave, gloves are sometimes worn to protect the hands against abrasion or cold. In pristine areas and for restoration, clean oversuits and powder-free, non-latex surgical gloves are used to protect the cave itself from contaminants. Ropes are used for descending or ascending pitches (single rope technique or SRT) or for protection. Knots commonly used in caving are the figure-of-eight- (or figure-of-nine-) loop, bowline, alpine butterfly, and Italian hitch. Ropes are usually rigged using bolts, slings, and carabiners. In some cases cavers may choose to bring and use a flexible metal ladder. In addition to the equipment already described, cavers frequently carry packs containing first-aid kits, emergency equipment, and food. Containers for securely transporting urine are also commonly carried. On longer trips, containers for securely transporting feces out of the cave are carried. During very long trips, it may be necessary to camp in the cave – some cavers have stayed underground for many days, or in particularly extreme cases, for weeks at a time. This is particularly the case when exploring or mapping extensive cave systems, where it would be impractical to retrace the route back to the surface regularly. Such long trips necessitate the cavers carrying provisions, sleeping, and cooking equipment. 
Safety Caves can be dangerous places; hypothermia, falling, flooding, falling rocks and physical exhaustion are the main risks. Rescuing people from underground is difficult and time-consuming, and requires special skills, training, and equipment. Full-scale cave rescues often involve the efforts of dozens of rescue workers (often other long-time cavers who have participated in specialized courses, as normal rescue staff are not sufficiently experienced in cave environments), who may themselves be put in jeopardy in effecting the rescue. This said, caving is not necessarily a high-risk sport (especially if it does not involve difficult climbs or diving). As in all physical sports, knowing one's limitations is key. Caving in warmer climates carries the risk of contracting histoplasmosis, a fungal infection acquired from bird or bat droppings. It can cause pneumonia and can disseminate in the body to cause continued infections. In many parts of the world, leptospirosis ("a type of bacterial infection spread by animals" including rats) is a distinct threat due to the presence of rat urine in rainwater or precipitation that enters a cave's water system. Complications are uncommon, but can be serious. Safety risks while caving can be minimized by using a number of techniques: Checking that there is no danger of flooding during the expedition. Rainwater funneled underground can flood a cave very quickly, trapping people in cut-off passages and drowning them. In the UK, drowning accounts for almost half of all caving fatalities (see List of UK caving fatalities). Using teams of several cavers, preferably at least four. If an injury occurs, one caver stays with the injured person while the other two go out for help, providing assistance to each other on their way out. Notifying people outside the cave as to the intended return time. After an appropriate delay without a return, they will then organize a search party (usually made up of other cavers trained in cave rescues, as even professional emergency personnel are unlikely to have the skills to effect a rescue in difficult conditions). Use of helmet-mounted lights (hands-free) with extra batteries. American cavers recommend a minimum of three independent sources of light per person, but carrying two lights is common practice among European cavers. Sturdy clothing and footwear, as well as a helmet, are necessary to reduce the impact of abrasions, falls, and falling objects. Synthetic fibers and woolens, which dry quickly, shed water, and are warm when wet, are vastly preferred to cotton materials, which retain water and increase the risk of hypothermia. It is also helpful to have several layers of clothing, which can be shed (and stored in the pack) or added as needed. In watery cave passages, polypropylene thermal underwear or wetsuits may be required to avoid hypothermia. Cave passages look different from different directions. In long or complex caves, even experienced cavers can become lost. To reduce the risk of becoming lost, it is necessary to memorize the appearance of key navigational points in the cave as they are passed by the exploring party. Each member of a cave party shares responsibility for being able to remember the route out of the cave. In some caves it may be acceptable to mark a small number of key junctions with small stacks or "cairns" of rocks, or to leave a non-permanent mark such as high-visibility flagging tape tied to a projection.
Vertical caving uses ladders or single rope technique (SRT) to avoid the need for climbing passages that are too difficult. SRT is a complex skill and requires proper training and well-maintained equipment. Some drops that are abseiled down may be as deep as several hundred meters (for example Harwoods Hole). Cave conservation Many cave environments are very fragile. Many speleothems can be damaged by even the slightest touch and some by impacts as slight as a breath. Research suggests that increased carbon dioxide levels can lead to "a higher equilibrium concentration of calcium within the drip waters feeding the speleothems, and hence causes dissolution of existing features." In 2008, researchers found evidence that respiration from cave visitors may generate elevated carbon dioxide concentrations in caves, leading to increased temperatures of up to 3 °C and a dissolution of existing features. Pollution is also of concern. Since water that flows through a cave eventually comes out in streams and rivers, any pollution may ultimately end up in someone's drinking water, and can even seriously affect the surface environment, as well. Even minor pollution such as dropping organic material can have a dramatic effect on the cave biota. Cave-dwelling species are also very fragile, and often, a particular species found in a cave may live within that cave alone, and be found nowhere else in the world, such as Alabama cave shrimp. Cave-dwelling species are accustomed to a near-constant climate of temperature and humidity, and any disturbance can be disruptive to the species' life cycles. Though cave wildlife may not always be immediately visible, it is typically nonetheless present in most caves. Bats are one such fragile species of cave-dwelling animal. Bats which hibernate are most vulnerable during the winter season, when no food supply exists on the surface to replenish the bat's store of energy should it be awakened from hibernation. Bats which migrate are most sensitive during the summer months when they are raising their young. For these reasons, visiting caves inhabited by hibernating bats is discouraged during cold months; and visiting caves inhabited by migratory bats is discouraged during the warmer months when they are most sensitive and vulnerable. Due to an affliction affecting bats in the northeastern US known as white nose syndrome (WNS), the US Fish & Wildlife Service has called for a moratorium effective March 26, 2009, on caving activity in states known to have hibernacula (MD, NY, VT, NH, MA, CT, NJ, PA, VA, and WV) affected by WNS, as well as adjoining states. Some cave passages may be marked with flagging tape or other indicators to show biologically, aesthetically, or archaeologically sensitive areas. Marked paths may show ways around notably fragile areas such as a pristine floor of sand or silt which may be thousands of years old, dating from the last time water flowed through the cave. Such deposits may easily be spoiled forever by a single misplaced step. Active formations such as flowstone can be similarly marred with a muddy footprint or handprint, and ancient human artifacts, such as fiber products, may even crumble to dust under all but the most gentle touch. In 1988, concerned that cave resources were becoming increasingly damaged through unregulated use, Congress enacted the Federal Cave Resources Protection Act, giving land management agencies in the United States expanded authority to manage cave conservation on public land. 
Caving organizations Cavers in many countries have created organizations for the administration and oversight of caving activities within their nations. The oldest of these is the French Federation of Speleology (originally Société de spéléologie) founded by Édouard-Alfred Martel in 1895, which produced the first periodical journal in speleology, Spelunca. The first university-based speleological institute in the world was founded in 1920 in Cluj-Napoca, Romania, by Emil Racoviță, a Romanian biologist, zoologist, speleologist and explorer of Antarctica. The British Speleological Association was established in 1935 and the National Speleological Society in the US was founded in 1941 (originally formed as the Speleological Society of the District of Columbia on May 6, 1939). An international speleological congress was proposed at a meeting in Valence-sur-Rhône, France in 1949 and first held in 1953 in Paris. The International Union of Speleology (UIS) was founded in 1965.
5778
https://en.wikipedia.org/wiki/Cave
Cave
A cave or cavern is a natural void in the ground, specifically a space large enough for a human to enter. Caves often form by the weathering of rock and often extend deep underground. The word cave can also refer to smaller openings such as sea caves, rock shelters, and grottos, which extend a relatively short distance into the rock; these are called exogene caves. Caves which extend further underground than the opening is wide are called endogene caves. Speleology is the science of exploration and study of all aspects of caves and the cave environment. Visiting or exploring caves for recreation may be called caving, potholing, or spelunking. Formation types The formation and development of caves is known as speleogenesis; it can occur over the course of millions of years. Caves can range widely in size, and are formed by various geological processes. These may involve a combination of chemical processes, erosion by water, tectonic forces, microorganisms, pressure, and atmospheric influences. Isotopic dating techniques can be applied to cave sediments, to determine the timescale of the geological events which formed and shaped present-day caves. Because of the pressure of overlying rocks, there is an estimated limit to how far vertically beneath the surface a cave can form. This does not, however, impose a maximum depth for a cave as measured from its highest entrance to its lowest point, since the amount of rock above the lowest point depends on the topography of the landscape above it. For karst caves the maximum depth is determined on the basis of the lower limit of karst-forming processes, coinciding with the base of the soluble carbonate rocks. Most caves are formed in limestone by dissolution. Caves can be classified in various other ways as well, including a contrast between active and relict: active caves have water flowing through them; relict caves do not, though water may be retained in them. Types of active caves include inflow caves ("into which a stream sinks"), outflow caves ("from which a stream emerges"), and through caves ("traversed by a stream"). Solutional Solutional caves or karst caves are the most frequently occurring caves. Such caves form in rock that is soluble; most occur in limestone, but they can also form in other rocks including chalk, dolomite, marble, salt, and gypsum. Except for salt caves, solutional caves result when rock is dissolved by natural acid in groundwater that seeps through bedding planes, faults, joints, and comparable features. Over time cracks enlarge to become caves and cave systems. The largest and most abundant solutional caves are located in limestone. Limestone dissolves under the action of rainwater and groundwater charged with H2CO3 (carbonic acid) and naturally occurring organic acids. The dissolution process produces a distinctive landform known as karst, characterized by sinkholes and underground drainage. Limestone caves are often adorned with calcium carbonate formations produced through slow precipitation. These include flowstones, stalactites, stalagmites, helictites, soda straws and columns. These secondary mineral deposits in caves are called speleothems. The portions of a solutional cave that are below the water table or the local level of the groundwater will be flooded. Lechuguilla Cave in New Mexico and nearby Carlsbad Cavern are now believed to be examples of another type of solutional cave. They were formed by H2S (hydrogen sulfide) gas rising from below, where reservoirs of oil give off sulfurous fumes.
This gas mixes with groundwater and forms H2SO4 (sulfuric acid). The acid then dissolves the limestone from below, rather than from above by acidic water percolating down from the surface. Primary Caves formed at the same time as the surrounding rock are called primary caves. Lava tubes are formed through volcanic activity and are the most common primary caves. As lava flows downhill, its surface cools and solidifies. Hot liquid lava continues to flow under that crust, and if most of it flows out, a hollow tube remains. Such caves can be found in the Canary Islands, Jeju-do, the basaltic plains of Eastern Idaho, and in other places. Kazumura Cave near Hilo, Hawaii is a remarkably long and deep lava tube, the longest and deepest known in the world. Lava caves include but are not limited to lava tubes. Other caves formed through volcanic activity include rift caves, lava mold caves, open vertical conduits, and inflationary caves, among others. Sea or littoral Sea caves are found along coasts around the world. A special case is littoral caves, which are formed by wave action in zones of weakness in sea cliffs. Often these weaknesses are faults, but they may also be dykes or bedding-plane contacts. Some wave-cut caves are now above sea level because of later uplift. Elsewhere, in places such as Thailand's Phang Nga Bay, solutional caves have been flooded by the sea and are now subject to littoral erosion. Sea caves are generally short in length, but some extend for hundreds of metres. Corrasional or erosional Corrasional or erosional caves are those that form entirely by erosion by flowing streams carrying rocks and other sediments. These can form in any type of rock, including hard rocks such as granite. Generally there must be some zone of weakness to guide the water, such as a fault or joint. A subtype of the erosional cave is the wind or aeolian cave, carved by wind-borne sediments. Many caves formed initially by solutional processes undergo a subsequent phase of erosional or vadose enlargement where active streams or rivers pass through them. Glacier Glacier caves are formed by melting ice and flowing water within and under glaciers. The cavities are influenced by the very slow flow of the ice, which tends to collapse the caves again. Glacier caves are sometimes misidentified as "ice caves", though this latter term is properly reserved for bedrock caves that contain year-round ice formations. Fracture Fracture caves are formed when layers of more soluble minerals, such as gypsum, dissolve out from between layers of less soluble rock. These rocks fracture and collapse in blocks of stone. Talus Talus caves are formed by the openings among large boulders that have fallen down into a random heap, often at the bases of cliffs. These unstable deposits are called talus or scree, and may be subject to frequent rockfalls and landslides. Anchialine Anchialine caves are caves, usually coastal, containing a mixture of freshwater and saline water (usually sea water). They occur in many parts of the world, and often contain highly specialized and endemic fauna. Physical patterns Branchwork caves resemble surface dendritic stream patterns; they are made up of passages that join downstream as tributaries. Branchwork caves are the most common of cave patterns and are formed near sinkholes where groundwater recharge occurs. Each passage or branch is fed by a separate recharge source and converges into other higher-order branches downstream. Angular network caves form from intersecting fissures of carbonate rock that have had fractures widened by chemical erosion.
These fractures form high, narrow, straight passages that persist in widespread closed loops. Anastomotic caves largely resemble surface braided streams with their passages separating and then meeting further down drainage. They usually form along one bed or structure, and only rarely cross into upper or lower beds. Spongework caves are formed when solution cavities are joined by mixing of chemically diverse water. The cavities form a pattern that is three-dimensional and random, resembling a sponge. Ramiform caves form as irregular large rooms, galleries, and passages. These randomized three-dimensional rooms form from a rising water table that erodes the carbonate rock with hydrogen-sulfide-enriched water. Pit caves (vertical caves, potholes, or simply "pits") consist of a vertical shaft rather than a horizontal cave passage. They may or may not be associated with one of the above structural patterns. Geographic distribution Caves are found throughout the world, although the distribution of documented cave systems is heavily skewed towards those countries where caving has been popular for many years (such as France, Italy, Australia, the UK, the United States, etc.). As a result, explored caves are found widely in Europe, Asia, North America and Oceania, but are sparse in South America, Africa, and Antarctica. This is a rough generalization, as large expanses of North America and Asia contain no documented caves, whereas areas such as the Madagascar dry deciduous forests and parts of Brazil contain many documented caves. As the world's expanses of soluble bedrock are researched by cavers, the distribution of documented caves is likely to shift. For example, China, despite containing around half the world's exposed limestone, has relatively few documented caves. Records and superlatives The cave system with the greatest total length of surveyed passage is Mammoth Cave in Kentucky, US. The longest surveyed underwater cave, and the second longest overall, is Sistema Ox Bel Ha in Yucatán, Mexico. The deepest known cave, measured from its highest entrance to its lowest point, is Veryovkina Cave in Abkhazia, Georgia. (The first cave to be explored to a depth of more than a kilometre was the Gouffre Berger in France.) The Sarma and Illyuzia-Mezhonnogo-Snezhnaya caves in Georgia are the current second- and third-deepest known caves. The deepest known cave outside Georgia is Lamprechtsofen Vogelschacht Weg Schacht in Austria. The deepest vertical shaft in a cave is in Vrtoglavica Cave in Slovenia; the second deepest is in Ghar-e-Ghala in the Parau massif near Kermanshah in Iran. The deepest underwater cave bottomed by a remotely operated underwater vehicle is the Hranice Abyss in the Czech Republic. The Miao Room is the world's largest known cave chamber by volume. The largest known chamber by surface area is Sarawak Chamber, in the Gunung Mulu National Park (Miri, Sarawak, Borneo, Malaysia), a sloping, boulder-strewn chamber. The largest room in a show cave is the Salle de la Verna in the French Pyrenees. The largest passage ever discovered is in the Son Doong Cave in Phong Nha-Kẻ Bàng National Park in Quảng Bình Province, Vietnam; the passage remains exceptionally high and wide over most of its length, and is larger still for part of it.
Five longest surveyed Mammoth Cave, Kentucky, US Sistema Ox Bel Ha, Mexico Sistema Sac Actun/Sistema Dos Ojos, Mexico Jewel Cave, South Dakota, US Shuanghedong Cave Network, China Ecology Cave-inhabiting animals are often categorized as troglobites (cave-limited species), troglophiles (species that can live their entire lives in caves, but also occur in other environments), trogloxenes (species that use caves, but cannot complete their life cycle fully in caves) and accidentals (animals not in one of the previous categories). Some authors use separate terminology for aquatic forms (for example, stygobites, stygophiles, and stygoxenes). Of these animals, the troglobites are perhaps the most unusual organisms. Troglobitic species often show a number of characteristics, termed troglomorphic, associated with their adaptation to subterranean life. These characteristics may include a loss of pigment (often resulting in a pale or white coloration), a loss of eyes (or at least of optical functionality), an elongation of appendages, and an enhancement of other senses (such as the ability to sense vibrations in water). Aquatic troglobites (or stygobites), such as the endangered Alabama cave shrimp, live in bodies of water found in caves and get nutrients from detritus washed into their caves and from the feces of bats and other cave inhabitants. Other aquatic troglobites include cave fish, and cave salamanders such as the olm and the Texas blind salamander. Cave insects such as Oligaphorura (formerly Archaphorura) schoetti are troglophiles, reaching in length. They have extensive distribution and have been studied fairly widely. Most specimens are female, but a male specimen was collected from St Cuthberts Swallet in 1969. Bats, such as the gray bat and Mexican free-tailed bat, are trogloxenes and are often found in caves; they forage outside of the caves. Some species of cave crickets are classified as trogloxenes, because they roost in caves by day and forage above ground at night. Because of the fragility of cave ecosystems, and the fact that cave regions tend to be isolated from one another, caves harbor a number of endangered species, such as the Tooth cave spider, liphistius trapdoor spider, and the gray bat. Caves are visited by many surface-living animals, including humans. These are usually relatively short-lived incursions, due to the lack of light and sustenance. Cave entrances often have typical florae. For instance, in the eastern temperate United States, cave entrances are most frequently (and often densely) populated by the bulblet fern, Cystopteris bulbifera. Archaeological and cultural importance Throughout history, primitive peoples have made use of caves. The earliest human fossils found in caves come from a series of caves near Krugersdorp and Mokopane in South Africa. The cave sites of Sterkfontein, Swartkrans, Kromdraai B, Drimolen, Malapa, Cooper's D, Gladysvale, Gondolin and Makapansgat have yielded a range of early human species dating back to between three and one million years ago, including Australopithecus africanus, Australopithecus sediba and Paranthropus robustus. However, it is not generally thought that these early humans were living in the caves, but that they were brought into the caves by carnivores that had killed them. The first early hominid ever found in Africa, the Taung Child in 1924, was also thought for many years to come from a cave, where it had been deposited after being predated on by an eagle. However, this is now debated (Hopley et al., 2013; Am. J. 
Phys. Anthrop.). Caves do form in the dolomite of the Ghaap Plateau, including the Early, Middle and Later Stone Age site of Wonderwerk Cave; however, the caves that form along the escarpment's edge, like that hypothesised for the Taung Child, are formed within a secondary limestone deposit called tufa. There is extensive evidence of other early human species inhabiting caves from at least one million years ago in different parts of the world, including Homo erectus in China at Zhoukoudian, Homo rhodesiensis in South Africa at the Cave of Hearths (Makapansgat), Homo neanderthalensis and Homo heidelbergensis in Europe at Archaeological Site of Atapuerca, Homo floresiensis in Indonesia, and the Denisovans in southern Siberia. In southern Africa, early modern humans regularly used sea caves as shelter starting about 180,000 years ago when they learned to exploit the sea for the first time. The oldest known site is PP13B at Pinnacle Point. This may have allowed rapid expansion of humans out of Africa and colonization of areas of the world such as Australia by 60–50,000 years ago. Throughout southern Africa, Australia, and Europe, early modern humans used caves and rock shelters as sites for rock art, such as those at Giant's Castle. Caves such as the yaodong in China were used for shelter; other caves were used for burials (such as rock-cut tombs), or as religious sites (such as Buddhist caves). Among the known sacred caves are China's Cave of a Thousand Buddhas and the sacred caves of Crete. Caves and acoustics The importance of sound in caves predates a modern understanding of acoustics. Archaeologists have uncovered relationships between paintings of dots and lines, in specific areas of resonance, within the caves of Spain and France, as well as instruments depicting paleolithic motifs, indicators of musical events and rituals. Clusters of paintings were often found in areas with notable acoustics, sometimes even replicating the sounds of the animals depicted on the walls. The human voice was also theorized to be used as an echolocation device to navigate darker areas of the caves where torches were less useful. Dots of red ochre are often found in spaces with the highest resonance, where the production of paintings was too difficult. Caves continue to be used by modern-day explorers of acoustics. Today, Cumberland Caverns provides one of the best examples of the modern musical use of caves. Caves are used not only for their reverberation, but also for the dampening qualities of their irregular surfaces. The irregularities in the walls of the Cumberland Caverns diffuse sounds bouncing off the walls and give the space an almost recording-studio-like quality. During the 20th century, musicians such as Dinah Shore, Roy Acuff, and Benny Goodman began to explore the possibility of using caves as venues for clubs and concert halls. Unlike today, these early performances were typically held in the mouths of the caves, as the lack of technology made the depths of the interior inaccessible to musical equipment. In Luray Caverns, Virginia, a functioning organ has been developed that generates sound by mallets striking stalactites, each with a different pitch. See also References Erosion landforms Fluvial landforms
5781
https://en.wikipedia.org/wiki/Chinese%20numerals
Chinese numerals
Chinese numerals are words and characters used to denote numbers in written Chinese. Today, speakers of Chinese languages use three written numeral systems: the system of Arabic numerals used worldwide, and two indigenous systems. The more familiar indigenous system is based on Chinese characters that correspond to numerals in the spoken language. These may be shared with other languages of the Chinese cultural sphere such as Korean, Japanese, and Vietnamese. Most people and institutions in China primarily use the Arabic or mixed Arabic-Chinese systems for convenience, with traditional Chinese numerals used in finance, mainly for writing amounts on cheques, banknotes, some ceremonial occasions, some boxes, and on commercials. The other indigenous system consists of the Suzhou numerals, or huama, a positional system, the only surviving form of the rod numerals. These were once used by Chinese mathematicians, and later by merchants in Chinese markets, such as those in Hong Kong until the 1990s, but were gradually supplanted by Arabic numerals. Characters used as numerals The Chinese character numeral system consists of the Chinese characters used by the Chinese written language to write spoken numerals. Similar to spelling out numbers in English (e.g., "one thousand nine hundred forty-five"), it is not an independent system per se. Since it reflects spoken language, it does not use the positional system as in Arabic numerals, in the same way that spelling out numbers in English does not. Ordinary numerals There are characters representing the numbers zero through nine, and other characters representing larger numbers such as tens, hundreds, thousands, ten thousands and hundred millions. There are two sets of characters for Chinese numerals: one for everyday writing, known as (), and one for use in commercial, accounting or financial contexts, known as (). The latter arose because the characters used for writing numerals are geometrically simple, so simply using those numerals cannot prevent forgeries in the same way spelling numbers out in English would. A forger could easily change the everyday characters 三十 (30) to 五千 (5000) just by adding a few strokes. That would not be possible when writing using the financial characters 參拾 (30) and 伍仟 (5000). They are also referred to as "banker's numerals", "anti-fraud numerals", or "banker's anti-fraud numerals". For the same reason, rod numerals were never used in commercial records. Characters with regional usage Large numbers For numbers larger than 10,000, similarly to the long and short scales in the West, there have been four systems in ancient and modern usage. The original one, with unique names for all powers of ten up to the 14th, is ascribed to the Yellow Emperor in the 6th century book by Zhen Luan, . In modern Chinese, only the second system is used, in which the same ancient names are used, but each represents a myriad (萬, 10,000) times the previous: In practice, this situation does not lead to ambiguity, with the exception of , which means 10^12 according to the system in common usage throughout the Chinese communities as well as in Japan and Korea, but has also been used for 10^6 in recent years (especially in mainland China for megabyte). To avoid problems arising from the ambiguity, the PRC government never uses this character in official documents, but uses (wànyì) or instead. Partly due to this, combinations of and are often used instead of the larger units of the traditional system as well, for example instead of . 
The ROC government in Taiwan uses to mean 10^12 in official documents. Large numbers from Buddhism Numerals beyond zǎi come from Buddhist texts in Sanskrit, but are mostly found in ancient texts. Some of the following words are still being used today, but may have transferred meanings. Small numbers The following are characters used to denote small order of magnitude in Chinese historically. With the introduction of SI units, some of them have been incorporated as SI prefixes, while the rest have fallen into disuse. Small numbers from Buddhism SI prefixes In the People's Republic of China, the early translation for the SI prefixes in 1981 was different from those used today. The larger (, , , , ) and smaller Chinese numerals (, , , , ) were defined as translations for the SI prefixes mega, giga, tera, peta, exa, micro, nano, pico, femto, and atto, resulting in the creation of yet more values for each numeral. The Republic of China (Taiwan) defined as the translation for mega and as the translation for tera. This translation is widely used in official documents, academic communities, informational industries, etc. However, the civil broadcasting industries sometimes use 兆赫 to represent "megahertz". Today, the governments of both China and Taiwan use phonetic transliterations for the SI prefixes. However, the governments have each chosen different Chinese characters for certain prefixes. The following table lists the two different standards together with the early translation. Reading and transcribing numbers Whole numbers Multiple-digit numbers are constructed using a multiplicative principle; first the digit itself (from 1 to 9), then the place (such as 10 or 100), then the next digit. In Mandarin, the multiplier (liǎng) is often used rather than (èr) for all numbers 200 and greater with the "2" numeral (although as noted earlier this varies from dialect to dialect and person to person). Either (liǎng) or (èr) is acceptable for the number 200. When writing in the Cantonese dialect, (yi6) is used to represent the "2" numeral for all numbers. In the southern Min dialect of Chaozhou (Teochew), (no6) is used to represent the "2" numeral in all numbers from 200 onwards. Thus: For the numbers 11 through 19, the leading "one" () is usually omitted. In some dialects, like Shanghainese, when there are only two significant digits in the number, the leading "one" and the trailing zeroes are omitted. Sometimes, the one before "ten" in the middle of a number, such as 213, is omitted. Thus: Notes: Nothing is ever omitted in large and more complicated numbers such as this. In certain older texts like the Protestant Bible or in poetic usage, numbers such as 114 may be written as [100] [10] [4] (). Outside of Taiwan, digits are sometimes grouped by myriads instead of thousands. Hence it is more convenient to think of numbers here as in groups of four, thus 1,234,567,890 is regrouped here as 12,3456,7890. Larger than a myriad, each number is therefore four zeroes longer than the one before it, thus 10000 × () = (). If one of the numbers is between 10 and 19, the leading "one" is omitted as per the above point. Hence (numbers in parentheses indicate that the number has been written as one number rather than expanded): In Taiwan, pure Arabic numerals are officially always and only grouped by thousands. Unofficially, they are often not grouped, particularly for numbers below 100,000. Mixed Arabic-Chinese numerals are often used in order to denote myriads. 
This is used both officially and unofficially, and come in a variety of styles: Interior zeroes before the unit position (as in 1002) must be spelt explicitly. The reason for this is that trailing zeroes (as in 1200) are often omitted as shorthand, so ambiguity occurs. One zero is sufficient to resolve the ambiguity. Where the zero is before a digit other than the units digit, the explicit zero is not ambiguous and is therefore optional, but preferred. Thus: Fractional values To construct a fraction, the denominator is written first, followed by , then the literary possessive particle , and lastly the numerator. This is the opposite of how fractions are read in English, which is numerator first. Each half of the fraction is written the same as a whole number. For example, to express "two thirds", the structure "three parts of-this two" is used. Mixed numbers are written with the whole-number part first, followed by , then the fractional part. Percentages are constructed similarly, using as the denominator. (The number 100 is typically expressed as , like the English "one hundred". However, for percentages, is used on its own.) Because percentages and other fractions are formulated the same, Chinese are more likely than not to express 10%, 20% etc. as "parts of 10" (or 1/10, 2/10, etc. i.e. ; shí fēnzhī yī, ; shí fēnzhī èr, etc.) rather than "parts of 100" (or 10/100, 20/100, etc. i.e. ; bǎi fēnzhī shí, ; bǎi fēnzhī èrshí, etc.) In Taiwan, the most common formation of percentages in the spoken language is the number per hundred followed by the word , a contraction of the Japanese ; pāsento, itself taken from the English "percent". Thus 25% is ; èrshíwǔ pā. Decimal numbers are constructed by first writing the whole number part, then inserting a point (), and finally the fractional part. The fractional part is expressed using only the numbers for 0 to 9, similarly to English. functions as a number and therefore requires a measure word. For example: . Ordinal numbers Ordinal numbers are formed by adding ("sequence") before the number. The Heavenly Stems are a traditional Chinese ordinal system. Negative numbers Negative numbers are formed by adding fù () before the number. Usage Chinese grammar requires the use of classifiers (measure words) when a numeral is used together with a noun to express a quantity. For example, "three people" is expressed as , "three ( particle) person", where / is a classifier. There exist many different classifiers, for use with different sets of nouns, although / is the most common, and may be used informally in place of other classifiers. Chinese uses cardinal numbers in certain situations in which English would use ordinals. For example, (literally "three story/storey") means "third floor" ("second floor" in British ). Likewise, (literally "twenty-one century") is used for "21st century". Numbers of years are commonly spoken as a sequence of digits, as in ("two zero zero one") for the year 2001. Names of months and days (in the Western system) are also expressed using numbers: ("one month") for January, etc.; and ("week one") for Monday, etc. There is only one exception: Sunday is , or informally , both literally "week day". When meaning "week", "" and "" are interchangeable. "" or "" means "day of worship". Chinese Catholics call Sunday "" , "Lord's day". 
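The multiplicative construction of whole numbers and the denominator-first order of fractions described above can be sketched in a few lines of code. The following C program is purely illustrative and not part of the article: it uses pinyin transliterations instead of characters, the helper names spell_group and spell_fraction are invented, and it deliberately ignores the liǎng/èr choice, the omission of a leading "one", and the interior-zero rules.

#include <stdio.h>

/* Pinyin readings for the digits 0-9 and the places 10 (shí), 100 (bǎi),
   and 1000 (qiān); wàn marks the myriad (10,000) group. */
static const char *digits[] = {"líng", "yī", "èr", "sān", "sì",
                               "wǔ", "liù", "qī", "bā", "jiǔ"};
static const char *places[] = {"", "shí", "bǎi", "qiān"};

/* Spell one group of up to four digits: each digit followed by its place. */
static void spell_group(int n)
{
    static const int pow10[] = {1, 10, 100, 1000};
    for (int p = 3; p >= 0; p--) {
        int d = (n / pow10[p]) % 10;
        if (d != 0) {
            printf("%s", digits[d]);
            if (p > 0)
                printf(" %s", places[p]);
            printf(" ");
        }
    }
}

/* Fractions are read denominator first, then fēnzhī, then numerator.
   Both parts are kept to a single digit (1-9) in this sketch. */
static void spell_fraction(int numerator, int denominator)
{
    printf("%s fēnzhī %s\n", digits[denominator], digits[numerator]);
}

int main(void)
{
    int n = 57263;              /* grouped by myriads: 5,7263     */
    spell_group(n / 10000);     /* the myriad group: "wǔ"         */
    printf("wàn ");
    spell_group(n % 10000);     /* "qī qiān èr bǎi liù shí sān"   */
    printf("\n");

    spell_fraction(2, 3);       /* "two thirds"                   */
    return 0;
}

For 57263 this prints wǔ wàn qī qiān èr bǎi liù shí sān, and for 2/3 it prints sān fēnzhī èr.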
Full dates are usually written in the format 2001年1月20日 for January 20, 2001 (using "year", "month", and "day") – all the numbers are read as cardinals, not ordinals, with no leading zeroes, and the year is read as a sequence of digits. For brevity the , and may be dropped to give a date composed of just numbers. For example "6-4" in Chinese is "six-four", short for "month six, day four" i.e. June Fourth, a common Chinese shorthand for the 1989 Tiananmen Square protests (because of the violence that occurred on June 4). For another example 67, in Chinese is sixty seven, short for year nineteen sixty seven, a common Chinese shorthand for the Hong Kong 1967 leftist riots. Counting rod and Suzhou numerals In the same way that Roman numerals were standard in ancient and medieval Europe for mathematics and commerce, the Chinese formerly used the rod numerals, which is a positional system. The Suzhou numerals () system is a variation of the Southern Song rod numerals. Nowadays, the huāmǎ system is only used for displaying prices in Chinese markets or on traditional handwritten invoices. Hand gestures There is a common method of using of one hand to signify the numbers one to ten. While the five digits on one hand can easily express the numbers one to five, six to ten have special signs that can be used in commerce or day-to-day communication. Historical use of numerals in China Most Chinese numerals of later periods were descendants of the Shang dynasty oracle numerals of the 14th century BC. The oracle bone script numerals were found on tortoise shell and animal bones. In early civilizations, the Shang were able to express any numbers, however large, with only nine symbols and a counting board though it was still not positional . Some of the bronze script numerals such as 1, 2, 3, 4, 10, 11, 12, and 13 became part of the system of rod numerals. In this system, horizontal rod numbers are used for the tens, thousands, hundred thousands etc. It is written in Sunzi Suanjing that "one is vertical, ten is horizontal". The counting rod numerals system has place value and decimal numerals for computation, and was used widely by Chinese merchants, mathematicians and astronomers from the Han dynasty to the 16th century. In 690 AD, Empress Wǔ promulgated Zetian characters, one of which was "〇". The word is now used as a synonym for the number zero. Alexander Wylie, Christian missionary to China, in 1853 already refuted the notion that "the Chinese numbers were written in words at length", and stated that in ancient China, calculation was carried out by means of counting rods, and "the written character is evidently a rude presentation of these". After being introduced to the rod numerals, he said "Having thus obtained a simple but effective system of figures, we find the Chinese in actual use of a method of notation depending on the theory of local value [i.e. place-value], several centuries before such theory was understood in Europe, and while yet the science of numbers had scarcely dawned among the Arabs." During the Ming and Qing dynasties (after Arabic numerals were introduced into China), some Chinese mathematicians used Chinese numeral characters as positional system digits. After the Qing period, both the Chinese numeral characters and the Suzhou numerals were replaced by Arabic numerals in mathematical writings. Cultural influences Traditional Chinese numeric characters are also used in Japan and Korea and were used in Vietnam before the 20th century. 
In vertical text (that is, read top to bottom), using characters for numbers is the norm, while in horizontal text, Arabic numerals are most common. Chinese numeric characters are also used in much the same formal or decorative fashion that Roman numerals are in Western cultures. Chinese numerals may appear together with Arabic numbers on the same sign or document. See also Chinese number gestures Numbers in Chinese culture Chinese units of measurement Chinese classifier Chinese grammar Japanese numerals Korean numerals Vietnamese numerals Celestial stem List of numbers in Sinitic languages Notes References Numerals Chinese characters Chinese mathematics
5783
https://en.wikipedia.org/wiki/Computer%20program
Computer program
A computer program is a sequence or set of instructions in a programming language for a computer to execute. It is one component of software, which also includes documentation and other intangible components. A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using the language's compiler. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within the language's interpreter. If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction. If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer. Example computer program The "Hello, World!" program is used to illustrate a language's basic syntax. The syntax of the language BASIC (1964) was intentionally limited to make the language easy to learn. For example, variables are not declared before being used. Also, variables are automatically initialized to zero. Here is an example computer program, in Basic, to average a list of numbers: 10 INPUT "How many numbers to average?", A 20 FOR I = 1 TO A 30 INPUT "Enter number:", B 40 LET C = C + B 50 NEXT I 60 LET D = C/A 70 PRINT "The average is", D 80 END Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems. History Improvements in software development are the result of improvements in computer hardware. At each stage in hardware's history, the task of computer programming changed dramatically. Analytical Engine In 1837, Jacquard's loom inspired Charles Babbage to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a "store" which consisted of memory to hold 1,000 numbers of 50 decimal digits each. Numbers from the "store" were transferred to the "mill" for processing. It was programmed using two sets of perforated cards. One set directed the operation and the other set inputted the variables. However, the thousands of cogged wheels and gears never fully worked together, even after Babbage spent more than £17,000 of government money. Ada Lovelace worked for Charles Babbage to create a description of the Analytical Engine (1843). The description contained Note G which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first computer program. Other historians consider Babbage himself wrote the first computer program for the Analytical Engine. It listed a sequence of operations to compute the solution for a system of two linear equations. 
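For comparison with the BASIC averaging example above, a rough C equivalent is sketched below; it is illustrative only and not part of the original text. Unlike BASIC, C requires variables to be declared before use and does not initialize them to zero automatically.

#include <stdio.h>

/* Averages a list of numbers, mirroring the BASIC example above. */
int main(void)
{
    int count, i;
    double number, sum = 0.0;   /* must be declared and initialized explicitly */

    printf("How many numbers to average? ");
    if (scanf("%d", &count) != 1 || count <= 0)
        return 1;

    for (i = 0; i < count; i++) {
        printf("Enter number: ");
        if (scanf("%lf", &number) != 1)
            return 1;
        sum = sum + number;
    }

    printf("The average is %f\n", sum / count);
    return 0;
}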
Universal Turing machine In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation. It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. All present-day computers are Turing complete. ENIAC The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied , and consumed $650 per hour (in 1940s currency) in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels by plugging heavy black cables into plugboards. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. It ran from 1947 until 1955 at Aberdeen Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns. Stored-program computers Instead of plugging in cords and turning switches, a stored-program computer loads its instructions into memory just like it loads its data into memory. As a result, the computer could be programmed quickly and perform calculations at very fast speeds. Presper Eckert and John Mauchly built the ENIAC. The two engineers introduced the stored-program concept in a three-page memo dated February 1944. Later, in September 1944, Dr. John von Neumann began working on the ENIAC project. On June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC, which equated the structures of the computer with the structures of the human brain. The design became known as the von Neumann architecture. The architecture was simultaneously deployed in the constructions of the EDVAC and EDSAC computers in 1949. The IBM System/360 (1964) was a family of computers, each having the same instruction set architecture. The Model 20 was the smallest and least expensive. Customers could upgrade and retain the same application software. The Model 195 was the most premium. Each System/360 model featured multiprogramming—having multiple processes in memory at once. When one process was waiting for input/output, another could compute. IBM planned for each model to be programmed using PL/1. A committee was formed that included COBOL, Fortran and ALGOL programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace Cobol and Fortran. The result was a large and complex language that took a long time to compile. Computers manufactured until the 1970s had front-panel switches for manual programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs also were automatically inputted via paper tape, punched cards or magnetic-tape. 
After the medium was loaded, the starting address was set via switches, and the execute button was pressed. Very Large Scale Integration A major milestone in software development was the invention of the Very Large Scale Integration (VLSI) circuit (1964). Following World War II, tube-based technology was replaced with point-contact transistors (1947) and bipolar junction transistors (late 1950s) mounted on a circuit board. During the 1960s, the aerospace industry replaced the circuit board with an integrated circuit chip. Robert Noyce, co-founder of Fairchild Semiconductor (1957) and Intel (1968), achieved a technological improvement to refine the production of field-effect transistors (1963). The goal is to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally occurring silicate minerals are converted into polysilicon rods using the Siemens process. The Czochralski process then converts the rods into a monocrystalline silicon, boule crystal. The crystal is then thinly sliced to form a wafer substrate. The planar process of photolithography then integrates unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a matrix of metal–oxide–semiconductor (MOS) transistors. The MOS transistor is the primary component in integrated circuit chips. Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses. The process to embed instructions onto the matrix was to burn out the unneeded connections. There were so many connections, firmware programmers wrote a computer program on another chip to oversee the burning. The technology became known as Programmable ROM. In 1971, Intel installed the computer program onto the chip and named it the Intel 4004 microprocessor. The terms microprocessor and central processing unit (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU made from circuit boards containing discrete components on ceramic substrates. Sac State 8008 The Intel 4004 (1971) was a 4-bit microprocessor designed to run the Busicom calculator. Five months after its release, Intel released the Intel 8008, an 8-bit microprocessor. Bill Pentz led a team at Sacramento State to build the first microcomputer using the Intel 8008: the Sac State 8008 (1972). Its purpose was to store patient medical records. The computer supported a disk operating system to run a Memorex, 3-megabyte, hard disk drive. It had a color display and keyboard that was packaged in a single console. The disk operating system was programmed using IBM's Basic Assembly Language (BAL). The medical records application was programmed using a BASIC interpreter. However, the computer was an evolutionary dead-end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose. Nonetheless, the project contributed to the development of the Intel 8080 (1974) instruction set. x86 series In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088. IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. 
The succession of development is known as the x86 series. The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new application software. The major categories of instructions are: Memory instructions to set and access numbers and strings in random-access memory. Integer arithmetic logic unit (ALU) instructions to perform the primary arithmetic operations on integers. Floating point ALU instructions to perform the primary arithmetic operations on real numbers. Call stack instructions to push and pop words needed to allocate memory and interface with functions. Single instruction, multiple data (SIMD) instructions to increase speed when multiple processors are available to perform the same algorithm on an array of data. Changing programming environment VLSI circuits enabled the programming environment to advance from a computer terminal (until the 1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a single shell running in a command-line environment. During the 1970s, full-screen source code editing became possible through a text-based user interface. Regardless of the technology available, the goal is to program in a programming language. Programming paradigms and languages Programming language features exist to provide building blocks to be combined to express programming ideals. Ideally, a programming language should: express ideas directly in the code. express independent ideas independently. express relationships among ideas directly in the code. combine ideas freely. combine ideas only where combinations make sense. express simple ideas simply. The programming style of a programming language to provide these building blocks may be categorized into programming paradigms. For example, different paradigms may differentiate: procedural languages, functional languages, and logical languages. different levels of data abstraction. different levels of class hierarchy. different levels of input datatypes, as in container types and generic programming. Each of these programming styles has contributed to the synthesis of different programming languages. A programming language is a set of keywords, symbols, identifiers, and rules by which programmers can communicate instructions to the computer. They follow a set of rules called a syntax. Keywords are reserved words to form declarations and statements. Symbols are characters to form operations, assignments, control flow, and delimiters. Identifiers are words created by programmers to form constants, variable names, structure names, and function names. Syntax Rules are defined in the Backus–Naur form. Programming languages get their basis from formal languages. The purpose of defining a solution in terms of its formal language is to generate an algorithm to solve the underlining problem. An algorithm is a sequence of simple instructions that solve a problem. Generations of programming language The evolution of programming language began when the EDSAC (1949) used the first stored computer program in its von Neumann architecture. Programming the EDSAC was in the first generation of programming language. The first generation of programming language is machine language. Machine language requires the programmer to enter instructions using instruction numbers called machine code. 
For example, the ADD operation on the PDP-11 has instruction number 24576. The second generation of programming language is assembly language. Assembly language allows the programmer to use mnemonic instructions instead of remembering instruction numbers. An assembler translates each assembly language mnemonic into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD in the source code. The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV. Computers also have instructions like DW (Define Word) to reserve memory cells. Then the MOV instruction can copy integers between registers and memory. The basic structure of an assembly language statement is a label, operation, operand, and comment. Labels allow the programmer to work with variable names. The assembler will later translate labels into physical memory addresses. Operations allow the programmer to work with mnemonics. The assembler will later translate mnemonics into instruction numbers. Operands tell the assembler which data the operation will process. Comments allow the programmer to articulate a narrative because the instructions alone are vague. The key characteristic of an assembly language program is it forms a one-to-one mapping to its corresponding machine language target. The third generation of programming language uses compilers and interpreters to execute computer programs. The distinguishing feature of a third generation language is its independence from particular hardware. Early languages include Fortran (1958), COBOL (1959), ALGOL (1960), and BASIC (1964). In 1973, the C programming language emerged as a high-level language that produced efficient machine language instructions. Whereas third-generation languages historically generated many machine instructions for each statement, C has statements that may generate a single machine instruction. Moreover, an optimizing compiler might overrule the programmer and produce fewer machine instructions than statements. Today, an entire paradigm of languages fill the imperative, third generation spectrum. The fourth generation of programming language emphasizes what output results are desired, rather than how programming statements should be constructed. Declarative languages attempt to limit side effects and allow programmers to write code with relatively few errors. One popular fourth generation language is called Structured Query Language (SQL). Database developers no longer need to process each database record one at a time. Also, a simple statement can generate output records without having to understand how they are retrieved. Imperative languages Imperative languages specify a sequential algorithm using declarations, expressions, and statements: A declaration introduces a variable name to the computer program and assigns it to a datatype – for example: var x: integer; An expression yields a value – for example: 2 + 2 yields 4 A statement might assign an expression to a variable or use the value of a variable to alter the program's control flow – for example: x := 2 + 2; if x = 4 then do_something(); Fortran FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system." It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported: arrays. subroutines. "do" loops. It succeeded because: programming and debugging costs were below computer running costs. it was supported by IBM. 
applications at the time were scientific. However, non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler. The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 supports: records. pointers to arrays. COBOL COBOL (1959) stands for "COmmon Business Oriented Language." Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so strings were introduced. The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal. COBOL's development was tightly controlled, so dialects did not emerge to require ANSI standards. As a consequence, it was not changed for 15 years until 1974. The 1990s version did make consequential changes, like object-oriented programming. Algol ALGOL (1960) stands for "ALGOrithmic Language." It had a profound influence on programming language design. Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. Algol was first to define its syntax using the Backus–Naur form. This led to syntax-directed compilers. It added features like: block structure, where variables were local to their block. arrays with variable bounds. "for" loops. functions. recursion. Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch the descendants include C, C++ and Java. Basic BASIC (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code." It was developed at Dartmouth College for all of their students to learn. If a student did not go on to a more powerful language, the student would still remember Basic. A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language. Basic pioneered the interactive session. It offered operating system commands within its environment: The 'new' command created an empty slate. Statements evaluated immediately. Statements could be programmed by preceding them with a line number. The 'list' command displayed the program. The 'run' command executed the program. However, the Basic syntax was too simple for large programs. Recent dialects added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface. C C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version "C." Its purpose was to write the UNIX operating system. C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s. Its growth also was because it has the facilities of assembly language, but uses a high-level syntax. It added advanced features like: inline assembler. arithmetic on pointers. pointers to functions. bit operations. freely combining complex operators. C allows the programmer to control which region of memory data is to be stored. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function. 
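A few of the C features listed above, namely arithmetic on pointers, pointers to functions, and bit operations, can be seen in a short sketch. The example below is hypothetical and not taken from the original text.

#include <stdio.h>

static int square(int n) { return n * n; }

int main(void)
{
    int values[3] = {10, 20, 30};
    int *p = values;

    p = p + 2;                     /* pointer arithmetic: p now points at 30 */
    printf("%d\n", *p);            /* prints 30 */

    int (*fn)(int) = square;       /* pointer to a function */
    printf("%d\n", fn(6));         /* prints 36 */

    unsigned flags = 0x0F & 0x3;   /* bitwise AND yields 0x3 */
    printf("%u\n", flags << 1);    /* shift left: prints 6 */

    return 0;
}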
The global and static data region is located just above the program region. (The program region is technically called the text region. It's where machine instructions are stored.) The global and static data region is technically two regions. One region is called the initialized data segment, where variables declared with default values are stored. The other region is called the block started by segment, where variables declared without default values are stored. Variables stored in the global and static data region have their addresses set at compile-time. They retain their values throughout the life of the process. The global and static region stores the global variables that are declared on top of (outside) the main() function. Global variables are visible to main() and every other function in the source code. On the other hand, variable declarations inside of main(), other functions, or within { } block delimiters are local variables. Local variables also include formal parameter variables. Parameter variables are enclosed within the parenthesis of function definitions. They provide an interface to the function. Local variables declared using the static prefix are also stored in the global and static data region. Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function int increment_counter(){static int counter = 0; counter++; return counter;} The stack region is a contiguous block of memory located near the top memory address. Variables placed in the stack are populated from top to bottom. A stack pointer is a special-purpose register that keeps track of the last memory address populated. Variables are placed into the stack via the assembly language PUSH instruction. Therefore, the addresses of these variables are set during runtime. The method for stack variables to lose their scope is via the POP instruction. Local variables declared without the static prefix, including formal parameter variables, are called automatic variables and are stored in the stack. They are visible inside the function or block and lose their scope upon exiting the function or block. The heap region is located below the stack. It is populated from the bottom to the top. The operating system manages the heap using a heap pointer and a list of allocated memory blocks. Like the stack, the addresses of heap variables are set during runtime. An out of memory error occurs when the heap pointer and the stack pointer meet. C provides the malloc() library function to allocate heap memory. Populating the heap with data is an additional copy function. Variables stored in the heap are economically passed to functions using pointers. Without pointers, the entire block of data would have to be passed to the function via the stack. C++ In the 1970s, software engineers needed language support to break large projects down into modules. One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract datatypes. At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Abstract datatypes are structures of concrete datatypes, with a new name assigned. For example, a list of integers could be called integer_list. In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. 
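The storage regions described above can be demonstrated with a small, hypothetical example (not part of the original text); the variable names are invented, and where each object resides follows the description above rather than any particular compiler's layout.

#include <stdio.h>
#include <stdlib.h>

int global_total = 0;            /* global and static data region */

int increment_counter(void)
{
    static int counter = 0;      /* also kept in the global and static data region */
    counter++;
    return counter;
}

int main(void)
{
    int automatic = 5;                            /* stack region            */
    int *heap_value = malloc(sizeof *heap_value); /* heap region, via malloc() */

    if (heap_value == NULL)
        return 1;

    *heap_value = automatic + increment_counter();   /* 5 + 1 */
    printf("%d %d %d\n", global_total, *heap_value, increment_counter());
    /* prints: 0 6 2; counter retained its value between calls */

    free(heap_value);            /* return the heap memory */
    return 0;
}                                /* automatic loses its scope here */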
When memory is allocated to a class and bound to an identifier, it's called an object. Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming. A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects. Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other people do not have. Object-oriented languages model subset/superset relationships using inheritance. Object-oriented programming became the dominant language paradigm by the late 1990s. C++ (1985) was originally called "C with Classes." It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula. An object-oriented module is composed of two files. The definitions file is called the header file. Here is a C++ header file for the GRADE class in a simple school application: // grade.h // ------- // Used to allow multiple source files to include // this header file without duplication errors. // ---------------------------------------------- #ifndef GRADE_H #define GRADE_H class GRADE { public: // This is the constructor operation. // ---------------------------------- GRADE ( const char letter ); // This is a class variable. // ------------------------- char letter; // This is a member operation. // --------------------------- int grade_numeric( const char letter ); // This is a class variable. // ------------------------- int numeric; }; #endif A constructor operation is a function with the same name as the class name. It is executed when the calling operation executes the new statement. A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple school application: // grade.cpp // --------- #include "grade.h" GRADE::GRADE( const char letter ) { // Reference the object using the keyword 'this'. // ---------------------------------------------- this->letter = letter; // This is Temporal Cohesion // ------------------------- this->numeric = grade_numeric( letter ); } int GRADE::grade_numeric( const char letter ) { if ( ( letter == 'A' || letter == 'a' ) ) return 4; else if ( ( letter == 'B' || letter == 'b' ) ) return 3; else if ( ( letter == 'C' || letter == 'c' ) ) return 2; else if ( ( letter == 'D' || letter == 'd' ) ) return 1; else if ( ( letter == 'F' || letter == 'f' ) ) return 0; else return -1; } Here is a C++ header file for the PERSON class in a simple school application: // person.h // -------- #ifndef PERSON_H #define PERSON_H class PERSON { public: PERSON ( const char *name ); const char *name; }; #endif Here is a C++ source file for the PERSON class in a simple school application: // person.cpp // ---------- #include "person.h" PERSON::PERSON ( const char *name ) { this->name = name; } Here is a C++ header file for the STUDENT class in a simple school application: // student.h // --------- #ifndef STUDENT_H #define STUDENT_H #include "person.h" #include "grade.h" // A STUDENT is a subset of PERSON. 
// -------------------------------- class STUDENT : public PERSON{ public: STUDENT ( const char *name ); GRADE *grade; }; #endif Here is a C++ source file for the STUDENT class in a simple school application: // student.cpp // ----------- #include "student.h" #include "person.h" STUDENT::STUDENT ( const char *name ): // Execute the constructor of the PERSON superclass. // ------------------------------------------------- PERSON( name ) { // Nothing else to do. // ------------------- } Here is a driver program for demonstration: // student_dvr.cpp // --------------- #include <iostream> #include "student.h" int main( void ) { STUDENT *student = new STUDENT( "The Student" ); student->grade = new GRADE( 'a' ); std::cout // Notice student inherits PERSON's name << student->name << ": Numeric grade = " << student->grade->numeric << "\n"; return 0; } Here is a makefile to compile everything: # makefile # -------- all: student_dvr clean: rm student_dvr *.o student_dvr: student_dvr.cpp grade.o student.o person.o c++ student_dvr.cpp grade.o student.o person.o -o student_dvr grade.o: grade.cpp grade.h c++ -c grade.cpp student.o: student.cpp student.h c++ -c student.cpp person.o: person.cpp person.h c++ -c person.cpp Declarative languages Imperative languages have one major criticism: assigning an expression to a non-local variable may produce an unintended side effect. Declarative languages generally omit the assignment statement and the control flow. They describe what computation should be performed and not how to compute it. Two broad categories of declarative languages are functional languages and logical languages. The principle behind a functional language is to use lambda calculus as a guide for a well defined semantic. In mathematics, a function is a rule that maps elements from an expression to a range of values. Consider the function: times_10(x) = 10 * x The expression 10 * x is mapped by the function times_10() to a range of values. One value happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as: times_10(2) = 20 A functional language compiler will not store this value in a variable. Instead, it will push the value onto the computer's stack before setting the program counter back to the calling function. The calling function will then pop the value from the stack. Imperative languages do support functions. Therefore, functional programming can be achieved in an imperative language, if the programmer uses discipline. However, a functional language will force this discipline onto the programmer through its syntax. Functional languages have a syntax tailored to emphasize the what. A functional program is developed with a set of primitive functions followed by a single driver function. Consider the snippet: function max(a,b){/* code omitted */} function min(a,b){/* code omitted */} function difference_between_largest_and_smallest(a,b,c) { return max(a,max(b,c)) - min(a, min(b,c)); } The primitives are max() and min(). The driver function is difference_between_largest_and_smallest(). Executing: put(difference_between_largest_and_smallest(10,4,7)); will output 6. Functional languages are used in computer science research to explore new language features. Moreover, their lack of side-effects have made them popular in parallel programming and concurrent programming. However, application developers prefer the object-oriented features of imperative languages. Lisp Lisp (1958) stands for "LISt Processor." It is tailored to process lists. 
A full structure of the data is formed by building lists of lists. In memory, a tree data structure is built. Internally, the tree structure lends nicely for recursive functions. The syntax to build a tree is to enclose the space-separated elements within parenthesis. The following is a list of three elements. The first two elements are themselves lists of two elements: ((A B) (HELLO WORLD) 94) Lisp has functions to extract and reconstruct elements. The function head() returns a list containing the first element in the list. The function tail() returns a list containing everything but the first element. The function cons() returns a list that is the concatenation of other lists. Therefore, the following expression will return the list x: cons(head(x), tail(x)) One drawback of Lisp is when many functions are nested, the parentheses may look confusing. Modern Lisp environments help ensure parenthesis match. As an aside, Lisp does support the imperative language operations of the assignment statement and goto loops. Also, Lisp is not concerned with the datatype of the elements at compile time. Instead, it assigns (and may reassign) the datatypes at runtime. Assigning the datatype at runtime is called dynamic binding. Whereas dynamic binding increases the language's flexibility, programming errors may linger until late in the software development process. Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent imperative language program. Lisp is widely used in artificial intelligence. However, its usage has been accepted only because it has imperative language operations, making unintended side-effects possible. ML ML (1973) stands for "Meta Language." ML checks to make sure only data of the same type are compared with one another. For example, this function has one input parameter (an integer) and returns an integer: ML is not parenthesis-eccentric like Lisp. The following is an application of times_10(): times_10 2 It returns "20 : int". (Both the results and the datatype are returned.) Like Lisp, ML is tailored to process lists. Unlike Lisp, each element is the same datatype. Moreover, ML assigns the datatype of an element at compile-time. Assigning the datatype at compile-time is called static binding. Static binding increases reliability because the compiler checks the context of variables before they are used. Prolog Prolog (1972) stands for "PROgramming in LOGic." It is a logic programming language, based on formal logic. The language was developed by Alain Colmerauer and Philippe Roussel in Marseille, France. It is an implementation of Selective Linear Definite clause resolution, pioneered by Robert Kowalski and others at the University of Edinburgh. The building blocks of a Prolog program are facts and rules. Here is a simple example: cat(tom). % tom is a cat mouse(jerry). % jerry is a mouse animal(X) :- cat(X). % each cat is an animal animal(X) :- mouse(X). % each mouse is an animal big(X) :- cat(X). % each cat is big small(X) :- mouse(X). % each mouse is small eat(X,Y) :- mouse(X), cheese(Y). % each mouse eats each cheese eat(X,Y) :- big(X), small(Y). % each big animal eats each small animal After all the facts and rules are entered, then a question can be asked: Will Tom eat Jerry? ?- eat(tom,jerry). true The following example shows how Prolog will convert a letter grade to its numeric value: numeric_grade('A', 4). numeric_grade('B', 3). numeric_grade('C', 2). numeric_grade('D', 1). 
numeric_grade('F', 0). numeric_grade(X, -1) :- not X = 'A', not X = 'B', not X = 'C', not X = 'D', not X = 'F'. grade('The Student', 'A'). ?- grade('The Student', X), numeric_grade(X, Y). X = 'A', Y = 4 Here is a comprehensive example: 1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon: billows_fire(X) :- is_a_dragon(X). 2) A creature billows fire if one of its parents billows fire: billows_fire(X) :- is_a_creature(X), is_a_parent_of(Y,X), billows_fire(Y). 3) A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y: is_a_parent_of(X, Y):- is_the_mother_of(X, Y). is_a_parent_of(X, Y):- is_the_father_of(X, Y). 4) A thing is a creature if the thing is a dragon: is_a_creature(X) :- is_a_dragon(X). 5) Norberta is a dragon, and Puff is a creature. Norberta is the mother of Puff. is_a_dragon(norberta). is_a_creature(puff). is_the_mother_of(norberta, puff). Rule (2) is a recursive (inductive) definition. It can be understood declaratively, without the need to understand how it is executed. Rule (3) shows how functions are represented by using relations. Here, the mother and father functions ensure that every individual has only one mother and only one father. Prolog is an untyped language. Nonetheless, inheritance can be represented by using predicates. Rule (4) asserts that a creature is a superclass of a dragon. Questions are answered using backward reasoning. Given the question: ?- billows_fire(X). Prolog generates two answers : X = norberta X = puff Practical applications for Prolog are knowledge representation and problem solving in artificial intelligence. Object-oriented programming Object-oriented programming is a programming method to execute operations (functions) on objects. The basic idea is to group the characteristics of a phenomenon into an object container and give the container a name. The operations on the phenomenon are also grouped into the container. Object-oriented programming developed by combining the need for containers and the need for safe functional programming. This programming method need not be confined to an object-oriented language. In an object-oriented language, an object container is called a class. In a non-object-oriented language, a data structure (which is also known as a record) may become an object container. To turn a data structure into an object container, operations need to be written specifically for the structure. The resulting structure is called an abstract datatype. However, inheritance will be missing. Nonetheless, this shortcoming can be overcome. Here is a C programming language header file for the GRADE abstract datatype in a simple school application: /* grade.h */ /* ------- */ /* Used to allow multiple source files to include */ /* this header file without duplication errors. */ /* ---------------------------------------------- */ #ifndef GRADE_H #define GRADE_H typedef struct { char letter; } GRADE; /* Constructor */ /* ----------- */ GRADE *grade_new( char letter ); int grade_numeric( char letter ); #endif The grade_new() function performs the same algorithm as the C++ constructor operation. Here is a C programming language source file for the GRADE abstract datatype in a simple school application: /* grade.c */ /* ------- */ #include "grade.h" GRADE *grade_new( char letter ) { GRADE *grade; /* Allocate heap memory */ /* -------------------- */ if ( ! 
( grade = calloc( 1, sizeof ( GRADE ) ) ) ) { fprintf(stderr, "ERROR in %s/%s/%d: calloc() returned empty.\n", __FILE__, __func__, __LINE__ ); exit( 1 ); } grade->letter = letter; return grade; } int grade_numeric( char letter ) { if ( ( letter == 'A' || letter == 'a' ) ) return 4; else if ( ( letter == 'B' || letter == 'b' ) ) return 3; else if ( ( letter == 'C' || letter == 'c' ) ) return 2; else if ( ( letter == 'D' || letter == 'd' ) ) return 1; else if ( ( letter == 'F' || letter == 'f' ) ) return 0; else return -1; } In the constructor, the function calloc() is used instead of malloc() because each memory cell will be set to zero. Here is a C programming language header file for the PERSON abstract datatype in a simple school application: /* person.h */ /* -------- */ #ifndef PERSON_H #define PERSON_H typedef struct { char *name; } PERSON; /* Constructor */ /* ----------- */ PERSON *person_new( char *name ); #endif Here is a C programming language source file for the PERSON abstract datatype in a simple school application: /* person.c */ /* -------- */ #include "person.h" PERSON *person_new( char *name ) { PERSON *person; if ( ! ( person = calloc( 1, sizeof ( PERSON ) ) ) ) { fprintf(stderr, "ERROR in %s/%s/%d: calloc() returned empty.\n", __FILE__, __func__, __LINE__ ); exit( 1 ); } person->name = name; return person; } Here is a C programming language header file for the STUDENT abstract datatype in a simple school application: /* student.h */ /* --------- */ #ifndef STUDENT_H #define STUDENT_H #include "person.h" #include "grade.h" typedef struct { /* A STUDENT is a subset of PERSON. */ /* -------------------------------- */ PERSON *person; GRADE *grade; } STUDENT; /* Constructor */ /* ----------- */ STUDENT *student_new( char *name ); #endif Here is a C programming language source file for the STUDENT abstract datatype in a simple school application: /* student.c */ /* --------- */ #include "student.h" #include "person.h" STUDENT *student_new( char *name ) { STUDENT *student; if ( ! ( student = calloc( 1, sizeof ( STUDENT ) ) ) ) { fprintf(stderr, "ERROR in %s/%s/%d: calloc() returned empty.\n", __FILE__, __func__, __LINE__ ); exit( 1 ); } /* Execute the constructor of the PERSON superclass. */ /* ------------------------------------------------- */ student->person = person_new( name ); return student; } Here is a driver program for demonstration: /* student_dvr.c */ /* ------------- */ #include <stdio.h> #include "student.h" int main( void ) { STUDENT *student = student_new( "The Student" ); student->grade = grade_new( 'a' ); printf( "%s: Numeric grade = %d\n", /* Whereas a subset exists, inheritance does not. */ student->person->name, /* Functional programming is executing functions just-in-time (JIT) */ grade_numeric( student->grade->letter ) ); return 0; } Here is a makefile to compile everything: # makefile # -------- all: student_dvr clean: rm student_dvr *.o student_dvr: student_dvr.c grade.o student.o person.o gcc student_dvr.c grade.o student.o person.o -o student_dvr grade.o: grade.c grade.h gcc -c grade.c student.o: student.c student.h gcc -c student.c person.o: person.c person.h gcc -c person.c The formal strategy to build object-oriented objects is to: Identify the objects. Most likely these will be nouns. Identify each object's attributes. What helps to describe the object? Identify each object's actions. Most likely these will be verbs. Identify the relationships from object to object. Most likely these will be verbs. For example: A person is a human identified by a name. A grade is an achievement identified by a letter. 
A student is a person who earns a grade. Syntax and semantics The syntax of a programming language is a list of production rules which govern its form. A programming language's form is the correct placement of its declarations, expressions, and statements. Complementing the syntax of a language are its semantics. The semantics describe the meanings attached to various syntactic constructs. A syntactic construct may need a semantic description because a form may have an invalid interpretation. Also, different languages might have the same syntax; however, their behaviors may be different. The syntax of a language is formally described by listing the production rules. Whereas the syntax of a natural language is extremely complicated, a subset of the English language can have this production rule listing: a sentence is made up of a noun-phrase followed by a verb-phrase; a noun-phrase is made up of an article followed by an adjective followed by a noun; a verb-phrase is made up of a verb followed by a noun-phrase; an article is 'the'; an adjective is 'big' or an adjective is 'small'; a noun is 'cat' or a noun is 'mouse'; a verb is 'eats'; The words in bold-face are known as "non-terminals". The words in 'single quotes' are known as "terminals". From this production rule listing, complete sentences may be formed using a series of replacements. The process is to replace non-terminals with either a valid non-terminal or a valid terminal. The replacement process repeats until only terminals remain. One valid sentence is: sentence noun-phrase verb-phrase article adjective noun verb-phrase the adjective noun verb-phrase the big noun verb-phrase the big cat verb-phrase the big cat verb noun-phrase the big cat eats noun-phrase the big cat eats article adjective noun the big cat eats the adjective noun the big cat eats the small noun the big cat eats the small mouse However, another combination results in an invalid sentence: the small mouse eats the big cat Therefore, a semantic is necessary to correctly describe the meaning of an eat activity. One production rule listing method is called the Backus–Naur form (BNF). BNF describes the syntax of a language and itself has a syntax. This recursive definition is an example of a meta-language. The syntax of BNF includes: ::= which translates to is made up of a[n] when a non-terminal is to its right. It translates to is when a terminal is to its right. | which translates to or. < and > which surround non-terminals. Using BNF, a subset of the English language can have this production rule listing: <sentence> ::= <noun-phrase><verb-phrase> <noun-phrase> ::= <article><adjective><noun> <verb-phrase> ::= <verb><noun-phrase> <article> ::= the <adjective> ::= big | small <noun> ::= cat | mouse <verb> ::= eats Using BNF, a signed-integer has the production rule listing: <signed-integer> ::= <sign><integer> <sign> ::= + | - <integer> ::= <digit> | <digit><integer> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 Notice the recursive production rule: <integer> ::= <digit> | <digit><integer> This allows for an infinite number of possibilities. Therefore, a semantic is necessary to describe a limitation of the number of digits. Notice the leading zero possibility in the production rules: <integer> ::= <digit> | <digit><integer> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 Therefore, a semantic is necessary to describe that leading zeros need to be ignored. Two formal methods are available to describe semantics. 
They are denotational semantics and axiomatic semantics. Software engineering and computer programming Software engineering is a variety of techniques to produce quality software. Computer programming is the process of writing or editing source code. In a formal environment, a systems analyst will gather information from managers about all the organization's processes to automate. This professional then prepares a detailed plan for the new or modified system. The plan is analogous to an architect's blueprint. Performance objectives The systems analyst has the objective to deliver the right information to the right person at the right time. The critical factors to achieve this objective are: The quality of the output. Is the output useful for decision-making? The accuracy of the output. Does it reflect the true situation? The format of the output. Is the output easily understood? The speed of the output. Time-sensitive information is important when communicating with the customer in real time. Cost objectives Achieving performance objectives should be balanced with all of the costs, including: Development costs. Uniqueness costs. A reusable system may be expensive. However, it might be preferred over a limited-use system. Hardware costs. Operating costs. Applying a systems development process will mitigate the axiom: the later in the process an error is detected, the more expensive it is to correct. Waterfall model The waterfall model is an implementation of a systems development process. As the waterfall label implies, the basic phases overlap each other: The investigation phase is to understand the underlying problem. The analysis phase is to understand the possible solutions. The design phase is to plan the best solution. The implementation phase is to program the best solution. The maintenance phase lasts throughout the life of the system. Changes to the system after it's deployed may be necessary. Faults may exist, including specification faults, design faults, or coding faults. Improvements may be necessary. Adaption may be necessary to react to a changing environment. Computer programmer A computer programmer is a specialist responsible for writing or modifying the source code to implement the detailed plan. A programming team is likely to be needed because most systems are too large to be completed by a single programmer. However, adding programmers to a project may not shorten the completion time. Instead, it may lower the quality of the system. To be effective, program modules need to be defined and distributed to team members. Also, team members must interact with one another in a meaningful and effective way. Computer programmers may be programming in the small: programming within a single module. Chances are a module will execute modules located in other source code files. Therefore, computer programmers may be programming in the large: programming modules so they will effectively couple with each other. Programming-in-the-large includes contributing to the application programming interface (API). Program modules Modular programming is a technique to refine imperative language programs. Refined programs may reduce the software size, separate responsibilities, and thereby mitigate software aging. A program module is a sequence of statements that are bounded within a block and together identified by a name. Modules have a function, context, and logic: The function of a module is what it does. The context of a module are the elements being performed upon. 
The logic of a module is how it performs the function. The module's name should be derived first by its function, then by its context. Its logic should not be part of the name. For example, function compute_square_root( x ) or function compute_square_root_integer( i : integer ) are appropriate module names. However, function compute_square_root_by_division( x ) is not. The degree of interaction within a module is its level of cohesion. Cohesion is a judgment of the relationship between a module's name and its function. The degree of interaction between modules is the level of coupling. Coupling is a judgement of the relationship between a module's context and the elements being performed upon. Cohesion The levels of cohesion from worst to best are: Coincidental Cohesion: A module has coincidental cohesion if it performs multiple functions, and the functions are completely unrelated. For example, function read_sales_record_print_next_line_convert_to_float(). Coincidental cohesion occurs in practice if management enforces silly rules. For example, "Every module will have between 35 and 50 executable statements." Logical Cohesion: A module has logical cohesion if it has available a series of functions, but only one of them is executed. For example, function perform_arithmetic( perform_addition, a, b ). Temporal Cohesion: A module has temporal cohesion if it performs functions related to time. One example, function initialize_variables_and_open_files(). Another example, stage_one(), stage_two(), ... Procedural Cohesion: A module has procedural cohesion if it performs multiple loosely related functions. For example, function read_part_number_update_employee_record(). Communicational Cohesion: A module has communicational cohesion if it performs multiple closely related functions. For example, function read_part_number_update_sales_record(). Informational Cohesion: A module has informational cohesion if it performs multiple functions, but each function has its own entry and exit points. Moreover, the functions share the same data structure. Object-oriented classes work at this level. Functional Cohesion: a module has functional cohesion if it achieves a single goal working only on local variables. Moreover, it may be reusable in other contexts. Coupling The levels of coupling from worst to best are: Content Coupling: A module has content coupling if it modifies a local variable of another function. COBOL used to do this with the alter verb. Common Coupling: A module has common coupling if it modifies a global variable. Control Coupling: A module has control coupling if another module can modify its control flow. For example, perform_arithmetic( perform_addition, a, b ). Instead, control should be on the makeup of the returned object. Stamp Coupling: A module has stamp coupling if an element of a data structure passed as a parameter is modified. Object-oriented classes work at this level. Data Coupling: A module has data coupling if all of its input parameters are needed and none of them are modified. Moreover, the result of the function is returned as a single object. Data flow analysis Data flow analysis is a design method used to achieve modules of functional cohesion and data coupling. The input to the method is a data-flow diagram. A data-flow diagram is a set of ovals representing modules. Each module's name is displayed inside its oval. Modules may be at the executable level or the function level. The diagram also has arrows connecting modules to each other. 
Arrows pointing into modules represent a set of inputs. Each module should have only one arrow pointing out from it to represent its single output object. (Optionally, an additional exception arrow points out.) A daisy chain of ovals will convey an entire algorithm. The input modules should start the diagram. The input modules should connect to the transform modules. The transform modules should connect to the output modules. Functional categories Computer programs may be categorized along functional lines. The main functional categories are application software and system software. System software includes the operating system, which couples computer hardware with application software. The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner. Both application software and system software execute utility programs. At the hardware level, a microcode program controls the circuits throughout the central processing unit. Application software Application software is the key to unlocking the potential of the computer system. Enterprise application software bundles accounting, personnel, customer, and vendor applications. Examples include enterprise resource planning, customer relationship management, and supply chain management software. Enterprise applications may be developed in-house as a one-of-a-kind proprietary software. Alternatively, they may be purchased as off-the-shelf software. Purchased software may be modified to provide custom software. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer. The potential advantages of in-house software are features and reports may be developed exactly to specification. Management may also be involved in the development process and offer a level of control. Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement. A merger or acquisition may necessitate enterprise software changes. The potential disadvantages of in-house software are time and resource costs may be extensive. Furthermore, risks concerning features and performance may be looming. The potential advantages of off-the-shelf software are upfront costs are identifiable, the basic needs should be fulfilled, and its performance and reliability have a track record. The potential disadvantages of off-the-shelf software are it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes. One approach to economically obtaining a customized enterprise application is through an application service provider. Specialty companies provide hardware, custom software, and end-user support. They may speed the development of new applications because they possess skilled information system staff. The biggest advantage is it frees in-house resources from staffing and managing complex computer projects. Many application service providers target small, fast-growing companies with limited information system resources. On the other hand, larger companies with major systems will likely have their technical infrastructure in place. One risk is having to trust an external organization with sensitive information. Another risk is having to trust the provider's infrastructure reliability. 
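As a concrete illustration of the data flow analysis method and the cohesion and coupling guidelines described above, the following minimal sketch arranges three small C modules into the input, transform, and output chain of a data-flow diagram. The type and function names (SALES_RECORD, read_sales_record(), compute_commission(), print_commission()) are invented for this example and are not part of any standard library; each module aims for functional cohesion (one goal, local variables only) and data coupling (every parameter is used, nothing is modified, and the result is returned as a single object).

/* dataflow_demo.c */
/* --------------- */
#include <stdio.h>

/* The single object passed along each arrow of the data-flow diagram. */
typedef struct { double amount; double commission; } SALES_RECORD;

/* Input module: reads one sale amount from standard input. */
static SALES_RECORD read_sales_record( void )
{
    SALES_RECORD record = { 0.0, 0.0 };
    printf( "Enter sale amount: " );
    if ( scanf( "%lf", &record.amount ) != 1 )
        record.amount = 0.0; /* exception path: treat unreadable input as zero */
    return record;
}

/* Transform module: computes a 5% commission (functional cohesion). */
static SALES_RECORD compute_commission( SALES_RECORD record )
{
    record.commission = record.amount * 0.05;
    return record;
}

/* Output module: prints the finished record. */
static void print_commission( SALES_RECORD record )
{
    printf( "Sale %.2f earns commission %.2f\n", record.amount, record.commission );
}

/* The daisy chain: input -> transform -> output. */
int main( void )
{
    print_commission( compute_commission( read_sales_record() ) );
    return 0;
}

Because each module receives a SALES_RECORD by value and returns its result as a single object, no module reaches into another module's variables, so the coupling stays at the data level.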
Operating system An operating system is the low-level software that supports a computer's basic functions, such as scheduling processes and controlling peripherals. In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times. The term operating system may refer to two levels of software. The operating system may refer to the kernel program that manages the processes, memory, and devices. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, command-line interpreter, graphical user interface, utility programs, and editor. Kernel Program The kernel's main purpose is to manage the limited resources of a computer: The kernel program should perform process scheduling. The kernel creates a process control block when a program is selected for execution. However, an executing program gets exclusive access to the central processing unit only for a time slice. To provide each user with the appearance of continuous access, the kernel quickly preempts each process control block to execute another one. The goal for system developers is to minimize dispatch latency. The kernel program should perform memory management. When the kernel initially loads an executable into memory, it divides the address space logically into regions. The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running process. These tables constitute the virtual address space. The master-region table is used to determine where its contents are located in physical memory. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion. The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes of the same executable. To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire execution file completely. The kernel is responsible for translating virtual addresses into physical addresses. The kernel may request data from the memory controller and, instead, receive a page fault. If so, the kernel accesses the memory management unit to populate the physical data region and translate the address. The kernel allocates memory from the heap upon request by a process. When the process is finished with the memory, the process may request for it to be freed. If the process exits without requesting all allocated memory to be freed, then the kernel performs garbage collection to free the memory. The kernel also ensures that a process only accesses its own memory, and not that of the kernel or other processes. The kernel program should perform file system management. The kernel has instructions to create, retrieve, update, and delete files. The kernel program should perform device management. 
The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time. The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system. The kernel program should provide system level functions for programmers to use. Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing. Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface. Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface. The kernel program should provide a communication channel between executing processes. For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals. Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift. Utility program A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, a trigger alert is generated. Utility programs include compression programs so data files are stored on less disk space. Compressed programs also save time when data files are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses. Microcode program A microcode program is the bottom-level interpreter that controls the data path of software-driven computers. (Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer science and computer engineering. A logic gate is a tiny transistor that can return one of two signals: on or off. Having one transistor forms the NOT gate. Connecting two transistors in series forms the NAND gate. Connecting two transistors in parallel forms the NOR gate. Connecting a NOT gate to a NAND gate forms the AND gate. Connecting a NOT gate to a NOR gate forms the OR gate. These five gates form the building blocks of binary algebra—the digital logic functions of the computer. Microcode instructions are mnemonics programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a central processing unit's (CPU) control store. These hardware-level instructions move data throughout the data path. The micro-instruction cycle begins when the microsequencer uses its microprogram counter to fetch the next machine instruction from random-access memory. 
The next step is to decode the machine instruction by selecting the proper output line to the hardware module. The final step is to execute the instruction using the hardware module's set of gates. Instructions to perform arithmetic are passed through an arithmetic logic unit (ALU). The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic. Microcode instructions move data between the CPU and the memory controller. Memory controller microcode instructions manipulate two registers. The memory address register is used to access each memory cell's address. The memory data register is used to set and read each cell's contents. Microcode instructions move data between the CPU and the many computer buses. The disk controller bus writes to and reads from hard disk drives. Data is also moved between the CPU and other functional units via the peripheral component interconnect express bus. Notes References Computer programming Software
5785
https://en.wikipedia.org/wiki/Crime
Crime
In ordinary language, a crime is an unlawful act punishable by a state or other authority. The term crime does not, in modern criminal law, have any simple and universally accepted definition, though statutory definitions have been provided for certain purposes. The most popular view is that crime is a category created by law; in other words, something is a crime if declared as such by the relevant and applicable law. One proposed definition is that a crime or offence (or criminal offence) is an act harmful not only to some individual but also to a community, society, or the state ("a public wrong"). Such acts are forbidden and punishable by law. The notion that acts such as murder, rape, and theft are to be prohibited exists worldwide. What precisely is a criminal offence is defined by the criminal law of each relevant jurisdiction. While many have a catalogue of crimes called the criminal code, in some common law nations no such comprehensive statute exists. The state (government) has the power to severely restrict one's liberty for committing a crime. In modern societies, there are procedures to which investigations and trials must adhere. If found guilty, an offender may be sentenced to a form of reparation such as a community sentence, or, depending on the nature of their offence, to undergo imprisonment, life imprisonment or, in some jurisdictions, death. Usually, to be classified as a crime, the "act of doing something criminal" (actus reus) must, with certain exceptions, be accompanied by the "intention to do something criminal" (mens rea). While every crime violates the law, not every violation of the law counts as a crime. Breaches of private law (torts and breaches of contract) are not automatically punished by the state, but can be enforced through civil procedure. Definition The exact definition of crime is a philosophical issue without an agreed-upon answer. Fields such as law, politics, sociology, and psychology define crime in different ways. Crimes may be variously considered as wrongs against individuals, against the community, or against the state. The criminality of an action is dependent on its context; acts of violence will be seen as crimes in many circumstances but as permissible or desirable in others. Crime was historically seen as a manifestation of evil, but this has been superseded by modern criminal theories. Legalism Legal and political definitions of crime consider actions that are banned by authorities or punishable by law. Crime is defined by the criminal law of a given jurisdiction, including all actions that are subject to criminal procedure. There is no limit to what can be considered a crime in a legal system, so there may not be a unifying principle used to determine whether an action should be designated as a crime. From a legal perspective, crimes are generally wrong actions that are severe enough to warrant punishment that infringes on the perpetrator's liberties. English criminal law and the related common law of Commonwealth countries can define offences that the courts alone have developed over the years, without any actual legislation: common law offences. The courts used the concept of malum in se to develop various common law offences. Sociology As a sociological concept, crime is associated with actions that cause harm and violate social norms. Under this definition, crime is a type of social construct, and societal attitudes determine what is considered criminal. 
In legal systems based on legal moralism, the predominant moral beliefs of society determine the legal definition as well as the social definition of crime. This system is less prominent in liberal democratic societies that prioritize individualism and multiculturalism over other moral beliefs. Paternalism defines crime not only as harm to others or to society, but also as harm to the self. Psychology Psychological definitions consider the state of mind of perpetrators and their relationship with their environment. Study The study of crime is called criminology. Criminology is a subfield of sociology that addresses issues of social norms, social order, deviance, and violence. It includes the motivations and consequences of crime and its perpetrators, as well as preventative measures, either studying criminal acts on an individual level or the relationship of crime and the community. Due to the wide range of concepts associated with crime and the disagreement on a precise definition, the focus of criminology can vary considerably. Various theories within criminology provide different descriptions and explanations for crime, including social control theory, subcultural theory, strain theory, differential association, and labeling theory. Subfields of criminology and related fields of study include crime prevention, criminal law, crime statistics, anthropological criminology, criminal psychology, criminal sociology, criminal psychiatry, victimology, penology, and forensic science. Besides sociology, criminology is often associated with law and psychology. Information and statistics about crime in a given jurisdiction are collected as crime estimates, typically produced by national or international agencies. Methods to collect crime statistics may vary, even between jurisdictions within the same nation. Under-reporting of crime is common, particularly in developing nations. Victim studies may be used to determine the frequency of crime in a given population. Foundational systems Natural-law theory Justifying the state's use of force to coerce compliance with its laws has proven a consistent theoretical problem. One of the earliest justifications involved the theory of natural law. This posits that the nature of the world or of human beings underlies the standards of morality or constructs them. Thomas Aquinas wrote in the 13th century: "the rule and measure of human acts is the reason, which is the first principle of human acts". He regarded people as by nature rational beings, concluding that it becomes morally appropriate that they should behave in a way that conforms to their rational nature. Thus, to be valid, any law must conform to natural law and coercing people to conform to that law is morally acceptable. In the 1760s, William Blackstone described the thesis: "This law of nature, being co-eval with mankind and dictated by God himself, is of course superior in obligation to any other. It is binding over all the globe, in all countries, and at all times: no human laws are of any validity, if contrary to this; and such of them as are valid derive all their force, and all their authority, mediately or immediately, from this original." But John Austin (1790–1859), an early positivist, applied utilitarianism in accepting the calculating nature of human beings and the existence of an objective morality. He denied that the legal validity of a norm depends on whether its content conforms to morality. 
Thus in Austinian terms, a moral code can objectively determine what people ought to do, the law can embody whatever norms the legislature decrees to achieve social utility, but every individual remains free to choose what to do. Similarly, H.L.A. Hart saw the law as an aspect of sovereignty, with lawmakers able to adopt any law as a means to a moral end. Thus the necessary and sufficient conditions for the truth of a proposition of law involved internal logic and consistency, and that the state's agents used state power with responsibility. Ronald Dworkin rejects Hart's theory and proposes that all individuals should expect the equal respect and concern of those who govern them as a fundamental political right. He offers a theory of compliance overlaid by a theory of deference (the citizen's duty to obey the law) and a theory of enforcement, which identifies the legitimate goals of enforcement and punishment. Legislation must conform to a theory of legitimacy, which describes the circumstances under which a particular person or group is entitled to make law, and a theory of legislative justice, which describes the law they are entitled or obliged to make. There are natural-law theorists who have accepted the idea of enforcing the prevailing morality as a primary function of the law. This view entails the problem that it makes any moral criticism of the law impossible: if conformity with natural law forms a necessary condition for legal validity, all valid law must, by definition, count as morally just. Thus, on this line of reasoning, the legal validity of a norm necessarily entails its moral justice. History Early history Restrictions on behavior existed in all prehistoric societies. Crime in early human society was seen as a personal transgression and was addressed by the community as a whole rather than through a formal legal system, often through the use of custom, religion, or the rule of a tribal leader. Some of the oldest extant writings are ancient criminal codes. The earliest known criminal code was the Code of Ur-Nammu (), and the known first criminal code that incorporated retaliatory justice was the Code of Hammurabi. The latter influenced the conception of crime across several civilizations over the following millennia. The Romans systematized law and applied their system across the Roman Empire. The initial rules of Roman law regarded assaults as a matter of private compensation. The most significant Roman law concept involved dominion. Most acts recognized as crimes in ancient societies, such as violence and theft, have persisted to the modern era. The criminal justice system of Imperial China existed unbroken for over 2,000 years. Many of the earliest conceptions of crime are associated with sin and corresponded to acts that were believed to invoke the anger of a deity. This idea was further popularized with the development of the Abrahamic religions. The understanding of crime and sin were closely associated with one another for much of history, and conceptions of crime took on many of the ideas associated with sin. Islamic law developed its own system of criminal justice as Islam spread in the seventh and eighth centuries. Post-classical era In post-classical Europe and East Asia, central government was limited and crime was defined locally. Towns established their own criminal justice systems, while crime in the countryside was defined by the social hierarchies of feudalism. 
In some places, such as the Russian Empire and the Kingdom of Italy, feudal justice survived into the 19th century. Common law first developed in England under the rule of Henry II in the 12th century. He established a system of traveling judges that tried accused criminals in each region of England by applying precedent from previous rulings. Legal developments in 12th century England also resulted in the earliest known recording of official crime data. Modern era In the modern era, crime came to be seen as an issue affecting society rather than conflicts between individuals. Writers such as Thomas Hobbes saw crime as a societal issue as early as the 17th century. Imprisonment developed as a long-term penalty for crime in the 18th century. Increasing urbanization and industrialization in the 19th century caused crime to become an immediate issue that affected society, prompting government intervention in crime and the establishment of criminology as its own field. Anthropological criminology was popularized by Cesare Lombroso in the late-19th century. This was a biological determinist school of thought based in social darwinism, arguing that certain people are naturally born as criminals. The eugenics movement of the early-20th century similarly held that crime was caused primarily by genetic factors. The concept of crime underwent a period of change as modernism was widely accepted in the years following World War II. Crime increasingly came to be seen as a societal issue, and criminal law was seen as a means to protect the public from antisocial behavior. This idea was associated with a larger trend in the western world toward social democracy and centre-left politics. Through most of history, reporting of crime was generally local. The advent of mass media through radio and television in the mid-20th century allowed for the sensationalism of crime. This created well-known stories of criminals such as Jeffrey Dahmer, and it allowed for dramatization that perpetuates misconceptions about crime. Forensic science was popularized in the 1980s, establishing DNA profiling as a new method to prevent and analyze crime. Types Violent crime Violent crime is crime that involves an act of violent aggression against another person. Common examples of violent crime include homicide, assault, sexual assault, and robbery. Some violent crimes, such as assault, may be committed with the intention of causing harm. Other violent crimes, such as robbery, may use violence to further another goal. Violent crime is distinct from noncriminal types of violence, such as self-defense, use of force, and acts of war. Acts of violence are most often perceived as deviant when they are committed as an overreaction or a disproportionate response to provocation. Property crime Common examples of property crime include burglary, theft, and vandalism. Examples of financial crimes include counterfeiting, smuggling, tax evasion, and bribery. The scope of financial crimes has expanded significantly since the beginning of modern economics in the 17th century. In occupational crime, the complexity and anonymity of computer systems may help criminal employees camouflage their operations. The victims of the most costly scams include banks, brokerage houses, insurance companies, and other large financial institutions. Public order crime Public order crime is crime that violates a society's norms about what constitutes socially acceptable behavior. 
Examples of public order crimes include gambling, drug-related crime, public intoxication, prostitution, loitering, breach of the peace, panhandling, vagrancy, street harassment, excessive noise, and littering. Public order crime is associated with the broken windows theory, which posits that public order crimes increase the likelihood of other types of crime. Some public order crimes are considered victimless crimes in which no specific victim can be identified. Most nations in the Western world have moved toward decriminalization of victimless crimes in the modern era. Adultery, fornication, blasphemy, apostasy, and invoking the name of God are commonly recognized as crimes in theocratic societies or those heavily influenced by religion. Political crime Political crime is crime that directly challenges or threatens the state. Examples of political crimes include subversion, rebellion, treason, mutiny, espionage, sedition, terrorism, riot, and unlawful assembly. Political crimes are associated with the political agenda of a given state, and they are necessarily applied against political dissidents. Due to their unique relation to the state, political crimes are often encouraged by one nation against another, and it is political alignment rather than the act itself that determines criminality. State crime that is carried out by the state to repress law-abiding citizens may also be considered political crime. Inchoate crime Inchoate crime is crime that is carried out in anticipation of other illegal actions but does not cause direct harm. Examples of inchoate crimes include attempt and conspiracy. Inchoate crimes are defined by substantial action to facilitate a crime with the intention of the crime's occurrence. This is distinct from simple preparation for or consideration of criminal activity. They are unique in that renunciation of criminal intention is generally enough to absolve the perpetrator of criminal liability, as their actions are no longer facilitating a potential future crime. Participants Criminal A criminal is an individual who commits a crime. What constitutes a criminal can vary depending on the context and the law, and it often carries a pejorative connotation. Criminals are often seen as embodying certain stereotypes or traits and are seen as a distinct type of person from law-abiding citizens. Despite this, no mental or physical trend is identifiable that differentiates criminals from non-criminals. Public response to criminals may be indignant or sympathetic. Indignant responses involve resentment and a desire for vengeance, wishing to see criminals removed from society or made to suffer for harm that they cause. Sympathetic responses involve compassion and understanding, seeking to rehabilitate or forgive criminals and absolve them of blame. Victim A victim is an individual who has been treated unjustly or made to suffer. In the context of crime, the victim is the individual who is harmed by a violation of criminal law. Victimization is associated with post-traumatic stress and a long-term decrease in quality of life. Victimology is the study of victims, including their role in crime and how they are affected. Several factors affect an individual's likelihood of becoming a victim. Some factors may cause victims of crime to experience short-term or long-term "repeat victimization". Common long-term victims are those who have close relationships with the criminal, manifesting in crimes such as domestic violence, embezzlement, child abuse, and bullying. 
Repeat victimization may also occur when a potential victim appears to be a viable target, such as when indicating wealth in a less affluent region. Many of the traits that indicate criminality also indicate victimality; victims of crime are more likely to engage in unlawful behavior and respond to provocation. Overall demographic trends of victims and criminals are often similar, and victims are more likely to have engaged in criminal activities themselves. The victims may only want compensation for the injuries suffered, while remaining indifferent to a possible desire for deterrence. Victims, on their own, may lack the economies of scale that could allow them to administer a penal system, let alone to collect any fines levied by a court. Historically, from ancient times until the 19th century, many societies believed that non-human animals were capable of committing crimes, and prosecuted and punished them accordingly. Prosecutions of animals gradually dwindled during the 19th century, although a few were recorded as late as the 1910s and 1920s. Criminal law Virtually all countries in the 21st century have criminal law grounded in civil law, common law, Islamic law, or socialist law. Historically, criminal codes have often divided criminals by class or caste, prescribing different penalties depending on status. In some tribal societies, an entire clan is recognized as liable for a crime. In many cases, disputes over a crime in this system lead to a feud that lasts over several generations. Criminalization The state determines what actions are considered criminal in the scope of the law. Criminalization has significant human rights considerations, as it can infringe on rights of autonomy and subject individuals to unjust punishment. Criminal justice Law enforcement The enforcement of criminal law seeks to prevent crime and sanction crimes that do occur. This enforcement is carried out by the state through law enforcement agencies, such as police, which are empowered to arrest suspected perpetrators of crimes. Law enforcement may focus on policing individual crimes, or it may focus on bringing down overall crime rates. One common variant, community policing, seeks to prevent crime by integrating police into the community and public life. Criminal procedure When the perpetrator of a crime is found guilty of the crime, the state delivers a sentence to determine the penalty for the crime. Corrections and punishment Authorities may respond to crime through corrections, carrying out punishment as a means to censure the criminal act. Punishment is generally reserved for serious offenses. Individuals regularly engage in activity that could be scrutinized under criminal law but are deemed inconsequential. Retributive justice seeks to create a system of accountability and punish criminals in a way that knowingly causes suffering. This may arise out of a feeling that criminals deserve to suffer and that punishment should exist for its own sake. The existence of punishment also creates an effect of deterrence that discourages criminal action for fear of punishment. Rehabilitation seeks to understand and mitigate the causes of a criminal's unlawful action to prevent recidivism. Different criminological theories propose different methods of rehabilitation, including strengthening social networks, reducing poverty, influencing values, and providing therapy for physical and mental ailments. Rehabilitative programs may include counseling or vocational education. 
Developed nations are less likely to use physical punishments. Instead, they will impose financial penalties or imprisonment. In places with widespread corruption or limited rule of law, crime may be punished extralegally through mob rule and lynching. Whether a crime can be resolved through financial compensation varies depending on the culture and the specific context of the crime. Historically, many societies have absolved acts of homicide through compensation to the victim's relatives. Liability If a crime is committed, the individual responsible is considered to be liable for the crime. For liability to exist, the individual must be capable of understanding the criminal process and the relevant authority must have legitimate power to establish what constitutes a crime. International criminal law International criminal law typically addresses serious offenses, such as genocide, crimes against humanity, and war crimes. As with all international law, these laws are created through treaties and international custom, and they are defined through the consensus of the involved states. International crimes are not prosecuted through a standard legal system, though international organizations may establish tribunals to investigate and rule on egregious offenses such as genocide. Causes and correlates Basic analysis of criminal behavior is determined by a cost–benefit analysis. A person who commits a criminal act typically believes that its benefits will outweigh the risk of being caught and punished. Negative economic factors (such as unemployment and income inequality) significantly increase the incentive to commit crime, while severe punishments decrease the incentive in some cases. Social factors similarly affect the likelihood of criminal activity. Crime corresponds heavily with social integration; groups that are less integrated with society or that are forcibly integrated with society are more likely to engage in crime. Involvement in the community, such as through a church, decreases the likelihood of crime, while associating with criminals increases the likelihood of becoming a criminal as well. There is no known genetic cause of crime. Some genes have been found to affect traits that may incline individuals toward criminal activity, but no biological or physiological trait has been found to directly cause or compel criminal actions. One biological factor is the disparity between men and women, as men are significantly more likely to commit crimes than women in virtually all cultures. Crimes committed by men also tend to be more severe than those committed by women. Public perception Crime is often a high-priority political issue in developed countries, regardless of the country's crime rates. People who are not regularly exposed to crime most often experience it through media, including news reporting and crime fiction. Exposure of crime through news stories is associated with alarmism and inaccurate perceptions of crime trends. Selection bias in news stories about criminals significantly over-represents the prevalence of violent crime, and news reporting will often overemphasize a specific type of crime for a period of time, creating a "crime wave" effect. As public opinion of morality changes over time, actions that were once condemned as crimes may be considered justifiable. See also Crime displacement Law and order (politics) Rule of law Organized crime Notes References Polinsky, A. Mitchell. (1980). "Private versus Public Enforcement of Fines". The Journal of Legal Studies, Vol. 
IX, No. 1, (January), pp. 105–127. Polinsky, A. Mitchell & Shavell, Steven. (1997). On the Disutility and Discounting of Imprisonment and the Theory of Deterrence, NBER Working Papers 6259, National Bureau of Economic Research, Inc. External links Criminal law Morality
5786
https://en.wikipedia.org/wiki/California%20Institute%20of%20Technology
California Institute of Technology
The California Institute of Technology (branded as Caltech or CIT) is a private research university in Pasadena, California. The university is responsible for many modern scientific advancements and is among a small group of institutes of technology in the United States which are strongly devoted to the instruction of pure and applied sciences. Due to its history of technological innovation, Caltech has been considered to be one of the world's top research universities. The institution was founded as a preparatory and vocational school by Amos G. Throop in 1891 and began attracting influential scientists such as George Ellery Hale, Arthur Amos Noyes, and Robert Andrews Millikan in the early 20th century. The vocational and preparatory schools were disbanded and spun off in 1910 and the college assumed its present name in 1920. In 1934, Caltech was elected to the Association of American Universities, and the antecedents of NASA's Jet Propulsion Laboratory, which Caltech continues to manage and operate, were established between 1936 and 1943 under Theodore von Kármán. Caltech has six academic divisions with strong emphasis on science and engineering, managing $332 million in sponsored research in 2011. Its primary campus is located northeast of downtown Los Angeles. First-year students are required to live on campus, and 95% of undergraduates remain in the on-campus House System at Caltech. Although Caltech has a strong tradition of practical jokes and pranks, student life is governed by an honor code which allows faculty to assign take-home examinations. The Caltech Beavers compete in 13 intercollegiate sports in the NCAA Division III's Southern California Intercollegiate Athletic Conference (SCIAC). Scientists and engineers at or from the university have played an essential role in many modern scientific breakthroughs and innovations, including advances in sustainability science, quantum physics, earthquake monitoring, protein engineering, and soft robotics. There are 79 Nobel laureates who have been affiliated with Caltech, making it the institution with the highest number of Nobelists per capita in America. This includes 46 alumni and faculty members (47 prizes, with chemist Linus Pauling being the only individual in history to win two unshared prizes). In addition, four Fields Medalists and six Turing Award winners have been affiliated with Caltech. History Throop College Caltech started as a vocational school founded in present-day Old Pasadena on Fair Oaks Avenue and Chestnut Street on September 23, 1891, by local businessman and politician Amos G. Throop. The school was known successively as Throop University, Throop Polytechnic Institute (and Manual Training School) and Throop College of Technology before acquiring its current name in 1920. The vocational school was disbanded and the preparatory program was split off to form the independent Polytechnic School in 1907. At a time when scientific research in the United States was still in its infancy, George Ellery Hale, a solar astronomer from the University of Chicago, founded the Mount Wilson Observatory in 1904. He joined Throop's board of trustees in 1907, and soon began developing it and the whole of Pasadena into a major scientific and cultural destination. He engineered the appointment of James A. B. Scherer, a literary scholar untutored in science but a capable administrator and fund-raiser, to Throop's presidency in 1908. Scherer persuaded retired businessman and trustee Charles W. 
Gates to donate $25,000 in seed money to build Gates Laboratory, the first science building on campus. World Wars In 1910, Throop moved to its current site. Arthur Fleming donated the land for the permanent campus site. Theodore Roosevelt delivered an address at Throop Institute on March 21, 1911, and he declared: I want to see institutions like Throop turn out perhaps ninety-nine of every hundred students as men who are to do given pieces of industrial work better than any one else can do them; I want to see those men do the kind of work that is now being done on the Panama Canal and on the great irrigation projects in the interior of this country—and the one-hundredth man I want to see with the kind of cultural scientific training that will make him and his fellows the matrix out of which you can occasionally develop a man like your great astronomer, George Ellery Hale. In the same year, a bill was introduced in the California Legislature calling for the establishment of a publicly funded "California Institute of Technology", with an initial budget of a million dollars, ten times the budget of Throop at the time. The board of trustees offered to turn Throop over to the state, but the presidents of Stanford University and the University of California successfully lobbied to defeat the bill, which allowed Throop to develop as the only scientific research-oriented education institute in southern California, public or private, until the onset of the World War II necessitated the broader development of research-based science education. The promise of Throop attracted physical chemist Arthur Amos Noyes from MIT to develop the institution and assist in establishing it as a center for science and technology. With the onset of World War I, Hale organized the National Research Council to coordinate and support scientific work on military problems. While he supported the idea of federal appropriations for science, he took exception to a federal bill that would have funded engineering research at land-grant colleges, and instead sought to raise a $1 million national research fund entirely from private sources. To that end, as Hale wrote in The New York Times: Throop College of Technology, in Pasadena California has recently afforded a striking illustration of one way in which the Research Council can secure co-operation and advance scientific investigation. This institution, with its able investigators and excellent research laboratories, could be of great service in any broad scheme of cooperation. President Scherer, hearing of the formation of the council, immediately offered to take part in its work, and with this object, he secured within three days an additional research endowment of one hundred thousand dollars. Through the National Research Council, Hale simultaneously lobbied for science to play a larger role in national affairs, and for Throop to play a national role in science. The new funds were designated for physics research, and ultimately led to the establishment of the Norman Bridge Laboratory, which attracted experimental physicist Robert Andrews Millikan from the University of Chicago in 1917. During the course of the war, Hale, Noyes and Millikan worked together in Washington on the NRC. Subsequently, they continued their partnership in developing Caltech. Under the leadership of Hale, Noyes, and Millikan (aided by the booming economy of Southern California), Caltech grew to national prominence in the 1920s and concentrated on the development of Roosevelt's "Hundredth Man". 
On November 29, 1921, the trustees declared it to be the express policy of the institute to pursue scientific research of the greatest importance and at the same time "to continue to conduct thorough courses in engineering and pure science, basing the work of these courses on exceptionally strong instruction in the fundamental sciences of mathematics, physics, and chemistry; broadening and enriching the curriculum by a liberal amount of instruction in such subjects as English, history, and economics; and vitalizing all the work of the Institute by the infusion in generous measure of the spirit of research". In 1923, Millikan was awarded the Nobel Prize in Physics. In 1925, the school established a department of geology and hired William Bennett Munro, then chairman of the division of History, Government, and Economics at Harvard University, to create a division of humanities and social sciences at Caltech. In 1928, a division of biology was established under the leadership of Thomas Hunt Morgan, the most distinguished biologist in the United States at the time, and discoverer of the role of genes and the chromosome in heredity. In 1930, Kerckhoff Marine Laboratory was established in Corona del Mar under the care of Professor George MacGinitie. In 1926, a graduate school of aeronautics was created, which eventually attracted Theodore von Kármán. Kármán later helped create the Jet Propulsion Laboratory, and played an integral part in establishing Caltech as one of the world's centers for rocket science. In 1928, construction of the Palomar Observatory began. Millikan served as "Chairman of the Executive Council" (effectively Caltech's president) from 1921 to 1945, and his influence was such that the institute was occasionally referred to as "Millikan's School". Millikan initiated a visiting-scholars program soon after joining Caltech. Notable scientists who accepted his invitation include Paul Dirac, Erwin Schrödinger, Werner Heisenberg, Hendrik Lorentz and Niels Bohr. Albert Einstein arrived on the Caltech campus for the first time in 1931 to polish up his theory of general relativity, and he returned to Caltech subsequently as a visiting professor in 1932 and 1933. During World War II, Caltech was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program, which offered students a path to a Navy commission. The United States Navy also maintained a naval training school for aeronautical engineering, resident inspectors of ordnance and naval material, and a liaison officer to the National Defense Research Committee on campus. Project Vista From April to December 1951, Caltech was the host of a classified federal study, Project Vista. The selection of Caltech as host for the project was based on the university's expertise in rocketry and nuclear physics. In response to the war in Korea and pressure from the Soviet Union, the project was Caltech's way of assisting the federal government in its effort to increase national security. The project was created to study new ways of improving the relationship between tactical air support and ground troops. The Army, Air Force, and Navy sponsored the project; however, it was under contract with the Army. The study was named after the Vista del Arroyo Hotel, which housed it. The study operated under a committee supervised by President Lee A. DuBridge. William A. Fowler, a professor at Caltech, was selected as research director. 
More than a fourth of Caltech's faculty and a group of outside scientists staffed the project; the number was even larger counting visiting scientists, military liaisons, and secretarial and security staff. In compensation for its participation, the university received about $750,000. Post-war growth From the 1950s to the 1980s, Caltech was the home of Murray Gell-Mann and Richard Feynman, whose work was central to the establishment of the Standard Model of particle physics. Feynman was also widely known outside the physics community as an exceptional teacher and a colorful, unconventional character. During Lee A. DuBridge's tenure as Caltech's president (1946–1969), Caltech's faculty doubled and the campus tripled in size. DuBridge, unlike his predecessors, welcomed federal funding of science. New research fields flourished, including chemical biology, planetary science, nuclear astrophysics, and geochemistry. A 200-inch telescope was dedicated on nearby Palomar Mountain in 1948 and remained the world's most powerful optical telescope for over forty years. Caltech opened its doors to female undergraduates during the presidency of Harold Brown in 1970, and they made up 14% of the entering class. The proportion of female undergraduates has been increasing since then. Protests by Caltech students are rare. The earliest was a 1968 protest outside the NBC Burbank studios, in response to rumors that NBC was to cancel Star Trek. In 1973, students from Dabney House protested a presidential visit with a sign on the library bearing the simple phrase "Impeach Nixon". The following week, Ross McCollum, president of the National Oil Company, wrote an open letter to Dabney House stating that in light of their actions he had decided not to donate one million dollars to Caltech. The Dabney family, being Republicans, disowned Dabney House after hearing of the protest. 21st century Since 2000, the Einstein Papers Project has been located at Caltech. The project was established in 1986 to assemble, preserve, translate, and publish papers selected from the literary estate of Albert Einstein and from other collections. In fall 2008, the freshman class was 42% female, a record for Caltech's undergraduate enrollment. In the same year, the Institute concluded a six-year-long fund-raising campaign. The campaign raised more than $1.4 billion from about 16,000 donors. Nearly half of the funds went into the support of Caltech programs and projects. In 2010, Caltech, in partnership with Lawrence Berkeley National Laboratory and headed by Professor Nathan Lewis, established a DOE Energy Innovation Hub aimed at developing revolutionary methods to generate fuels directly from sunlight. This hub, the Joint Center for Artificial Photosynthesis, was slated to receive up to $122 million in federal funding over five years. Caltech began offering classes through massive open online courses (MOOCs) on Coursera in 2012 and on edX from 2013, as well as through bootcamps. Jean-Lou Chameau, the eighth president, announced on February 19, 2013, that he would be stepping down to accept the presidency at King Abdullah University of Science and Technology. Thomas F. Rosenbaum was announced as the ninth president of Caltech on October 24, 2013, and his term began on July 1, 2014. In 2019, Caltech received a gift of $750 million for sustainability research from the Resnick family of The Wonderful Company. 
The gift is the largest ever for environmental sustainability research and the second-largest private donation to a US academic institution (after Bloomberg's gift of $1.8 billion to Johns Hopkins University in 2018). On account of President Robert A. Millikan's affiliation with the Human Betterment Foundation, in January 2021, the Caltech Board of Trustees authorized the removal of Millikan's name (and the names of five other historical figures affiliated with the Foundation) from campus buildings. Campus Caltech's primary campus is located in Pasadena, California, northeast of downtown Los Angeles. It is within walking distance of Old Town Pasadena and the Pasadena Playhouse District, and the two locations are therefore frequent getaways for Caltech students. In 1917, Hale hired architect Bertram Goodhue to produce a master plan for the campus. Goodhue conceived the overall layout of the campus and designed the physics building, Dabney Hall, and several other structures, in which he sought to be consistent with the local climate, the character of the school, and Hale's educational philosophy. Goodhue's designs for Caltech were also influenced by the traditional Spanish mission architecture of Southern California. During the 1960s, Caltech underwent considerable expansion, in part due to the philanthropy of alumnus Arnold O. Beckman. In 1953, Beckman was asked to join the Caltech Board of Trustees. In 1964, he became its chairman. Over the next few years, as Caltech's president emeritus David Baltimore describes it, Arnold Beckman and his wife Mabel "shaped the destiny of Caltech". In 1971, a magnitude-6.6 earthquake in San Fernando caused some damage to the Caltech campus. Engineers who evaluated the damage found that two historic buildings dating from the early days of the Institute—Throop Hall and the Goodhue-designed Culbertson Auditorium—had cracked. New additions to the campus include the Cahill Center for Astronomy and Astrophysics and the Walter and Leonore Annenberg Center for Information Science and Technology, which opened in 2009; the Warren and Katherine Schlinger Laboratory for Chemistry and Chemical Engineering followed in March 2010. The institute also completed an upgrade of the South Houses in 2006. In late 2010, Caltech completed a 1.3 MW solar array projected to produce approximately 1.6 GWh in 2011. Organization and administration Caltech is incorporated as a non-profit corporation and is governed by a privately appointed 46-member board of trustees who serve five-year terms of office and retire at the age of 72. The trustees elect a president to serve as the chief executive officer of the institute and administer the affairs of the institute on behalf of the board, a provost who serves as the chief academic officer of the institute below the president, and ten other vice presidential and senior officers. Thomas F. Rosenbaum became the ninth president of Caltech in 2014. Caltech's endowment is governed by a permanent trustee committee and administered by an investment office. The institute is organized into six primary academic divisions: Biology and Biological Engineering; Chemistry and Chemical Engineering; Engineering and Applied Science; Geological and Planetary Sciences; Humanities and Social Sciences; and Physics, Mathematics and Astronomy. The voting faculty of Caltech include all professors, instructors, research associates and fellows, and the University Librarian. 
Faculty are responsible for establishing admission requirements, academic standards, and curricula. The Faculty Board is the faculty's representative body and consists of 18 elected faculty representatives as well as other senior administration officials. Full-time professors are expected to teach classes, conduct research, advise students, and perform administrative work such as serving on committees. Founded in the 1930s, the Jet Propulsion Laboratory (JPL) is a federally funded research and development center (FFRDC) owned by NASA and operated as a division of Caltech through a contract between NASA and Caltech. In 2008, JPL spent over $1.6 billion on research and development and employed over 5,000 project-related and support employees. The JPL Director also serves as a Caltech Vice President and is responsible to the President of the Institute for the management of the laboratory. Academics Caltech is a small four-year, highly residential research university with slightly more students in graduate programs than in undergraduate programs. The institute has been accredited by the Western Association of Schools and Colleges since 1949. Caltech is on the quarter system: the fall term starts in late September and ends before Christmas, the second term starts after New Year's Day and ends in mid-March, and the third term starts in late March or early April and ends in early June. Rankings Caltech is consistently ranked within the top ten universities in the world, and within the top four in the United States, by major global ranking systems. In 2021, Caltech ranked 6th globally based on aggregate world university rankings of THE, QS, and ARWU. For 2022, U.S. News & World Report ranked Caltech as tied for 9th in the United States among national universities overall, 11th for most innovative, and 15th for best value. U.S. News & World Report also ranked the graduate programs in chemistry and earth sciences first among national universities. Caltech was ranked 1st internationally between 2011 and 2016 by the Times Higher Education World University Rankings. Caltech was ranked as the best university in the world in two categories: Engineering & Technology and Physical Sciences. It was also found to have the highest faculty citation rate in the world. Admissions Admission to Caltech is extremely selective. Prior to going test-blind, Caltech students had some of the highest test scores in the nation. In 2022, Caltech was ranked by CBS News as the 3rd hardest college in America to gain acceptance to. For the freshmen who enrolled in 2019 (Class of 2023), the middle 50% SAT ranges were 740–780 for evidence-based reading and writing, 790–800 for math, and 1530–1570 overall. The middle 50% ACT Composite range was 35–36. The SAT Math Level 2 middle 50% range was 800–800. The middle 50% ranges for the SAT Physics, Chemistry, and Biology Subject Tests were each 760–800. In June 2020, Caltech announced a test-blind policy under which it would neither require nor consider test scores for the next two years; in July 2021, the moratorium was extended by another year and has since been extended further. The institute is need-blind for domestic applicants. For the Class of 2026 (enrolled Fall 2022), Caltech received 16,662 applications and accepted 448 applicants for a 2.7% admit rate; 224 enrolled. The class included 48% women and 52% men. 
For the Class of 2025, 32% were of underrepresented ancestry (which includes students who self-identify as American Indian/Alaska Native, Hispanic/Latino, Black/African American, and/or Native Hawaiian/Pacific Islander), and 6% were foreign students. For the Class of 2027 (enrolled Fall 2023), Caltech had over 270 commits of 412 admits, at a yield rate of 66–67%. Tuition and financial aid Undergraduate tuition for the 2021–2022 school year was $56,394 and total annual costs were estimated to be $79,947 excluding the Caltech Student Health Insurance Plan. In 2012–2013, Caltech awarded $17.1 million in need-based aid, $438k in non-need-based aid, and $2.51 million in self-help support to enrolled undergraduate students. The average financial aid package of all students eligible for aid was $38,756 and students graduated with an average debt of $15,090. Undergraduate program The full-time, four-year undergraduate program emphasizes instruction in the arts and sciences and has high graduate coexistence. Caltech offers 28 majors (called "options") and 12 minors across all six academic divisions. Caltech also offers interdisciplinary programs in Applied Physics, Biochemistry, Bioengineering, Computation and Neural Systems, Control and Dynamical Systems, Environmental Science and Engineering, Geobiology and Astrobiology, Geochemistry, and Planetary Astronomy. The most popular options are Chemical Engineering, Computer Science, Electrical Engineering, Mechanical Engineering and Physics. The most popular majors of the class of 2023 were Computer Science, Mechanical Engineering, Physics, and Electrical Engineering. Prior to the entering class of 2013, Caltech required students to take a core curriculum of five terms of mathematics, five terms of physics, two terms of chemistry, one term of biology, two terms of lab courses, one term of scientific communication, three terms of physical education, and 12 terms of humanities and social science. Since 2013, only three terms each of mathematics and physics have been required by the institute, with the remaining two terms each required by certain options. A typical class is worth 9 academic units and given the extensive core curriculum requirements in addition to individual options' degree requirements, students need to take an average of 40.5 units per term (more than four classes) to graduate in four years. 36 units is the minimum full-time load, 48 units is considered a heavy load, and registrations above 51 units require an overload petition. Approximately 20 percent of students double-major. This is achievable since the humanities and social sciences majors have been designed to be done in conjunction with a science major. Although choosing two options in the same division is discouraged, it is still possible. First-year students are enrolled in first-term classes based upon results of placement exams in math, physics, chemistry, and writing and take all classes in their first two terms on a Pass/Fail basis. There is little competition; collaboration on homework is encouraged and the honor system encourages take-home tests and flexible homework schedules. Caltech offers co-operative programs with other schools, such as the Pasadena Art Center College of Design and Occidental College. According to a PayScale study, Caltech graduates earn a median early career salary of $83,400 and $143,100 mid-career, placing them in the top 5 among graduates of US colleges and universities. 
The average net return on investment over a period of 20 years is $887,000, the tenth-highest among US colleges. Caltech offers Army and Air Force ROTC in cooperation with the University of Southern California. Graduate program The graduate instructional programs emphasize doctoral studies and are dominated by science, technology, engineering, and mathematics fields. The institute offers graduate degree programs for the Master of Science, Engineer's Degree, Doctor of Philosophy, BS/MS and MD/PhD, with the majority of students in the PhD program. The most popular options are Chemistry, Physics, Biology, Electrical Engineering and Chemical Engineering. Applicants for graduate studies are required to take the GRE. GRE Subject scores are either required or strongly recommended by several options. A joint program among Caltech, the Keck School of Medicine of the University of Southern California, and the UCLA David Geffen School of Medicine grants MD/PhD degrees. Students in this program do their preclinical and clinical work at USC or UCLA, and their PhD work with any member of the Caltech faculty, including those in the Biology, Chemistry, and Engineering and Applied Science divisions. The MD degree is awarded by USC or UCLA and the PhD by Caltech. The research facilities at Caltech are available to graduate students, but students also have opportunities to work in the facilities of other universities and research centers, as well as in private industry. The graduate student-to-faculty ratio is 4:1. Approximately 99 percent of doctoral students have full financial support. Financial support for graduate students comes in the form of fellowships, research assistantships, teaching assistantships, or a combination of fellowship and assistantship support. Graduate students are bound by the honor code, as are the undergraduates, and the Graduate Honor Council oversees any violations of the code. Research Caltech is classified among "R1: Doctoral Universities – Very High Research Activity". Caltech was elected to the Association of American Universities in 1934 and remains a research university with "very high" research activity, primarily in STEM fields. Caltech manages research expenditures of $270 million annually, 66th among all universities in the U.S. and 17th among private institutions without medical schools for 2008. The largest federal agencies contributing to research are NASA, the National Science Foundation, the Department of Health and Human Services, the Department of Defense, and the Department of Energy. Caltech received $144 million in federal funding for the physical sciences, $40.8 million for the life sciences, $33.5 million for engineering, $14.4 million for environmental sciences, $7.16 million for computer sciences, and $1.97 million for mathematical sciences in 2008. The institute was awarded an all-time high of $357 million in funding in 2009. Active funding from the National Science Foundation Directorate of Mathematical and Physical Science (MPS) for Caltech stands at $343 million, the highest for any educational institution in the nation, and higher than the total funds allocated to any state except California and New York. In 2005, Caltech had dedicated research space for the physical sciences, engineering, and the biological sciences. In addition to managing JPL, Caltech also operates the Palomar Observatory in San Diego County, the Owens Valley Radio Observatory in Bishop, California, the Caltech Submillimeter Observatory and W. M. 
Keck Observatory at the Mauna Kea Observatories, the Laser Interferometer Gravitational-Wave Observatory at Livingston, Louisiana, and Richland, Washington, and the Kerckhoff Marine Laboratory in Corona del Mar, California. The Institute launched the Kavli Nanoscience Institute at Caltech in 2006 and the Keck Institute for Space Studies in 2008, and it is also the current home of the Einstein Papers Project. The Spitzer Science Center (SSC), part of the Infrared Processing and Analysis Center located on the Caltech campus, is the data analysis and community support center for NASA's Spitzer Space Telescope. Caltech partnered with UCLA to establish a Joint Center for Translational Medicine (UCLA-Caltech JCTM), which conducts experimental research into clinical applications, including the diagnosis and treatment of diseases such as cancer. Caltech operates several TCCON stations as part of an international collaborative effort to measure greenhouse gases globally. One station is on campus. Undergraduates at Caltech are also encouraged to participate in research. About 80% of the class of 2010 did research through the annual Summer Undergraduate Research Fellowships (SURF) program at least once during their undergraduate years, and many continued during the school year. Students write and submit SURF proposals for research projects in collaboration with professors, and about 70 percent of applicants are awarded SURFs. The program is open to both Caltech and non-Caltech undergraduate students. It serves as preparation for graduate school and helps to explain why Caltech has the highest percentage of alumni of any major university who go on to receive a PhD. The licensing and transfer of technology to the commercial sector is managed by the Office of Technology Transfer (OTT). OTT protects and manages the intellectual property developed by faculty members, students, other researchers, and JPL technologists. Caltech receives more invention disclosures per faculty member than any other university in the nation. Since 1969, 1,891 patents have been granted to Caltech researchers. Student life House system During the early 20th century, a Caltech committee visited several universities and decided to transform the undergraduate housing system from fraternities to a house system. Four South Houses (or Hovses, as styled in the stone engravings) were built: Blacker House, Dabney House, Fleming House and Ricketts House. In the 1960s, three North Houses were built: Lloyd House, Page House, and Ruddock House, and during the 1990s, Avery House. The four South Houses closed for renovation in 2005 and reopened in 2006. The latest addition to residential life at Caltech is Bechtel Residence, which opened in 2018. It is not affiliated with the house system. All first- and second-year students live on campus in the house system or in the Bechtel Residence. On account of Albert B. Ruddock's affiliation with the Human Betterment Foundation, in January 2021, the Caltech Board of Trustees authorized the removal of Ruddock's name from campus buildings. Ruddock House was renamed the Grant D. Venerable House. Athletics Caltech has athletic teams in baseball, men's and women's basketball, cross country, men's and women's soccer, swimming and diving, men's and women's tennis, track and field, women's volleyball, and men's and women's water polo. Caltech's mascot is the Beaver, a homage to nature's engineer. 
Its teams compete in NCAA Division III and in the Southern California Intercollegiate Athletic Conference (SCIAC), which Caltech co-founded in 1915. On January 6, 2007, the Beavers' men's basketball team snapped a 207-game losing streak to Division III schools, beating Bard College 81–52. It was their first Division III victory since 1996. Until their win over Occidental College on February 22, 2011, the team had not won a game in SCIAC play since 1985. Ryan Elmquist's free throw with 3.3 seconds left in regulation gave the Beavers the victory. The documentary film Quantum Hoops concerns the events of the Beavers' 2005–06 season. On January 13, 2007, the Caltech women's basketball team snapped a 50-game losing streak, defeating the Pomona-Pitzer Sagehens 55–53. The women's program, which entered the SCIAC in 2002, garnered its first conference win. On the bench as honorary coach for the evening was Robert Grubbs, 2005 Nobel laureate in Chemistry. The team went on to beat Whittier College on February 10 for its second SCIAC win and placed its first member on the All-Conference team. In 2007, 2008, and 2009, the women's table tennis team (a club team) competed in nationals. The women's Ultimate club team, known as "Snatch", has also been very successful in recent years, ranking 44th out of more than 200 college teams in the Ultimate Players Association. On February 2, 2013, the Caltech baseball team ended a 228-game losing streak with the team's first win in nearly 10 years. The track and field team's home venue is at the South Athletic Field in Tournament Park, the site of the first Rose Bowl Game. The school also sponsored an intercollegiate football team from 1973 through 1977, and played part of its home schedule at the Rose Bowl. Performing and visual arts The Caltech/Occidental College Orchestra is a full seventy-piece orchestra composed of students, faculty, and staff at Caltech and nearby Occidental College. The orchestra gives three pairs of concerts annually, at both Caltech and Occidental College. There are also two Caltech Jazz Bands and a Concert Band, as well as an active chamber music program. For vocal music, Caltech has a mixed-voice Glee Club and the smaller Chamber Singers. The theater program at Caltech is known as TACIT, or Theater Arts at the California Institute of Technology. TACIT organizes two to three plays per year and was involved in the production of the PHD Movie, released in 2011. Student life traditions Annual events Every Halloween, Dabney House conducts the infamous "Millikan pumpkin-drop experiment" from the top of Millikan Library, the highest point on campus. According to tradition, a claim was once made that the shattering of a pumpkin frozen in liquid nitrogen and dropped from a sufficient height would produce a triboluminescent spark. This yearly event involves a crowd of observers, who try to spot the elusive spark. The title of the event is an oblique reference to the famous Millikan oil-drop experiment, which measured e, the elementary unit of electric charge. On Ditch Day, the seniors ditch school, leaving behind elaborately designed tasks and traps at the doors of their rooms to prevent underclassmen from entering. Over the years, this has evolved to the point where many seniors spend months designing mechanical, electrical, and software obstacles to confound the underclassmen. Each group of seniors designs a "stack" to be solved by a handful of underclassmen. 
The faculty have been drawn into the event as well, and cancel all classes on Ditch Day so the underclassmen can participate in what has become a highlight of the academic year. Another long-standing tradition is the playing of Wagner's "Ride of the Valkyries" at 7:00 each morning during finals week with the largest, loudest speakers available. The playing of that piece is not allowed at any other time (except if one happens to be listening to the entire 14 hours and 5 minutes of The Ring Cycle), and any offender is dragged into the showers to be drenched in cold water fully dressed. Pranks Caltech students have been known for their many pranks (also known as "RFs"). The two most famous in recent history are the changing of the Hollywood Sign to read "Caltech", by judiciously covering up certain parts of the letters, and the changing of the scoreboard to read Caltech 38, MIT 9 during the 1984 Rose Bowl Game. But the most famous of all occurred during the 1961 Rose Bowl Game, where Caltech students altered the flip-cards that were raised by the stadium attendees to display "Caltech", and several other "unintended" messages. This event is now referred to as the Great Rose Bowl Hoax. In recent years, pranking has been officially encouraged by Tom Mannion, Caltech's Assistant VP for Student Affairs and Campus Life. "The grand old days of pranking have gone away at Caltech, and that's what we are trying to bring back," the Boston Globe reported him as saying. In December 2011, Caltech students went to New York and pulled a prank in Manhattan's Greenwich Village. The prank involved making The Cube sculpture look like the Aperture Science Weighted Companion Cube from the video game Portal. Caltech pranks have been documented in three Legends of Caltech books, the most recent of which was edited by alumni Autumn Looijen '99 and Mason Porter '98 and published in May 2007. Rivalry with MIT In 2005, a group of Caltech students pulled a string of pranks during MIT's Campus Preview Weekend for admitted students. These included covering up the word Massachusetts in the "Massachusetts Institute of Technology" engraving on the main building façade with a banner so that it read "That Other Institute of Technology". A group of MIT hackers responded by altering the banner so that the inscription read "The Only Institute of Technology." Caltech students also passed out T-shirts to MIT's incoming freshman class that had MIT written on the front and "...because not everyone can go to Caltech" along with an image of a palm tree on the back. MIT retaliated in April 2006, when students posing as the Howe & Ser (Howitzer) Moving Company stole the 130-year-old, 1.7-ton Fleming House cannon and moved it over 3,000 miles to their campus in Cambridge, Massachusetts, for their 2006 Campus Preview Weekend, repeating a similar prank performed by nearby Harvey Mudd College in 1986. Thirty members of Fleming House traveled to MIT and reclaimed their cannon on April 10, 2006. On April 13, 2007 (Friday the 13th), a group of students from The California Tech, Caltech's campus newspaper, arrived and distributed fake copies of The Tech, MIT's campus newspaper, while prospective students were visiting for their Campus Preview Weekend. Articles included "MIT Invents the Interweb", "Architects Deem Campus 'Unfortunate'", and "Infinite Corridor Not Actually Infinite". In December 2009, some Caltech students declared that MIT had been sold and had become the Caltech East campus. 
A "sold" banner was hung on the front of the MIT dome building, and a "Welcome to Caltech East: School of the Humanities" banner was hung over the Massachusetts Avenue entrance. Newspapers and T-shirts were distributed, and door labels and fliers in the Infinite Corridor were put up in accordance with the "curriculum change." In September 2010, MIT students attempted to put a TARDIS, the time machine from the BBC's Doctor Who, onto a roof. The students were caught mid-act, and the prank was aborted. In January 2011, Caltech students, in conjunction with MIT students, helped put the TARDIS on top of Baxter. Caltech students then moved the TARDIS to UC Berkeley and Stanford. In April 2014, during MIT's Campus Preview Weekend, a group of Caltech students handed out mugs emblazoned with the MIT logo on the front and the words "The Institute of Technology" on the back. When heated, the mugs turn orange, display a palm tree, and read "Caltech The Hotter Institute of Technology." Identical mugs continue to be sold at the Caltech campus store. Honor code Life in the Caltech community is governed by the honor code, which simply states: "No member of the Caltech community shall take unfair advantage of any other member of the Caltech community." This is enforced by a Board of Control, which consists of undergraduate students, and by a similar body at the graduate level, called the Graduate Honor Council. The honor code aims to promote an atmosphere of respect and trust that allows Caltech students to enjoy privileges that make for a more relaxed campus life. For example, the honor code allows professors to make the majority of exams take-home, allowing students to take them on their own schedule and in their preferred environment. Through the late 1990s, the only exception to the honor code, implemented earlier in the decade in response to changes in federal regulations, concerned the sexual harassment policy. Today, there are myriad exceptions to the honor code in the form of new Institute policies such as the fire policy and alcohol policy. Although both policies are presented in the Honor System Handbook given to new members of the Caltech community, some undergraduates regard them as a slight against the honor code and the implicit trust and respect it represents within the community. In recent years, the Student Affairs Office has also begun pursuing investigations independently of the Board of Control and Conduct Review Committee, an implicit violation of both the honor code and written disciplinary policy that has contributed to further erosion of trust between some parts of the undergraduate community and the administration. Notable people As of October 2022, Caltech counts 46 Nobel laureates among its alumni and faculty: 30 alumni (26 graduates and 4 postdocs), including 5 Caltech professors who are also alumni (Carl D. Anderson, Linus Pauling, William A. Fowler, Edward B. Lewis, and Kip Thorne), and 16 non-alumni professors (14 at the time of the award, not including David Baltimore and Renato Dulbecco). The total number of Nobel Prizes is 47 because Pauling received prizes in both Chemistry and Peace. Eight faculty and alumni have received a Crafoord Prize from the Royal Swedish Academy of Sciences, while 58 have been awarded the U.S. National Medal of Science, and 11 have received the National Medal of Technology. One alumnus, Stanislav Smirnov, won the Fields Medal in 2010. Other distinguished researchers have been affiliated with Caltech as postdoctoral scholars (for example, Barbara McClintock, James D. 
Watson, Sheldon Glashow and John Gurdon) or visiting professors (for example, Albert Einstein, Stephen Hawking and Edward Witten). Students Caltech enrolled 987 undergraduate students and 1,410 graduate students for the 2021–2022 school year. Women made up 45% of the undergraduate and 33% of the graduate student body. The racial demographics of the school substantially differ from those of the nation as a whole. The four-year graduation rate is 79% and the six-year rate is 92%, which is low compared to most leading U.S. universities, but substantially higher than it was in the 1960s and 1970s. Students majoring in STEM fields traditionally have graduation rates below 70%. Alumni There are 22,930 total living alumni in the U.S. and around the world. As of October 2022, 30 alumni and 16 non-alumni faculty have won the Nobel Prize. The Turing Award, the "Nobel Prize of Computer Science", has been awarded to six alumni, and one has won the Fields Medal. Many alumni have participated in scientific research. Some have concentrated their studies on the very small universe of atoms and molecules. Nobel laureate Carl D. Anderson (BS 1927, PhD 1930) proved the existence of positrons and muons, Nobel laureate Edwin McMillan (BS 1928, MS 1929) synthesized the first transuranium element, Nobel laureate Leo James Rainwater (BS 1939) investigated the non-spherical shapes of atomic nuclei, and Nobel laureate Douglas D. Osheroff (BS 1967) studied the superfluid nature of helium-3. Donald Knuth (PhD 1963), the "father" of the analysis of algorithms, wrote The Art of Computer Programming and created the TeX computer typesetting system, which is commonly used in the scientific community. Bruce Reznick (BS 1973) is a mathematician noted for his contributions to number theory and the combinatorial-algebraic-analytic investigations of polynomials. Narendra Karmarkar (MS 1979) is known for the interior point method, a polynomial-time algorithm for linear programming known as Karmarkar's algorithm. Other alumni have turned their gaze to the universe. C. Gordon Fullerton (BS 1957, MS 1958) piloted the third Space Shuttle mission. Astronaut (and later, United States Senator) Harrison Schmitt (BS 1957) was the only geologist to have walked on the surface of the Moon. Astronomer Eugene Merle Shoemaker (BS 1947, MS 1948) co-discovered Comet Shoemaker-Levy 9 (a comet that crashed into Jupiter) and was the first person to be buried on the Moon, his ashes having been carried there aboard a spacecraft deliberately crashed into the lunar surface. Astronomer George O. Abell (BS 1951, MS 1952, PhD 1957), while a graduate student at Caltech, participated in the National Geographic Society-Palomar Sky Survey. This ultimately resulted in the publication of the Abell Catalogue of Clusters of Galaxies, the definitive work in the field. Undergraduate alumni founded, or co-founded, companies such as LCD manufacturer Varitronix, Hotmail, Compaq, MathWorks (which created MATLAB), and database provider Imply, while graduate students founded, or co-founded, companies such as Intel, TRW, and the non-profit educational organization the Exploratorium. Arnold Beckman (PhD 1928) invented the pH meter and commercialized it with the founding of Beckman Instruments. His success with that company enabled him to provide seed funding for William Shockley (BS 1932), who had co-invented the transistor and wanted to commercialize it. Shockley became the founding Director of the Shockley Semiconductor Laboratory division of Beckman Instruments. 
Shockley had previously worked at Bell Labs, whose first president was another alumnus, Frank Jewett (BS 1898). Because his aging mother lived in Palo Alto, California, Shockley established his laboratory near her in Mountain View, California. Shockley was a co-recipient of the Nobel Prize in Physics in 1956, but his aggressive management style and odd personality made working at the Shockley Lab unbearable. In late 1957, eight of his researchers resigned and, with support from Sherman Fairchild, formed Fairchild Semiconductor. Among the "traitorous eight" was Gordon E. Moore (PhD 1954), who later left Fairchild to co-found Intel. Other offspring companies of Fairchild Semiconductor include National Semiconductor and Advanced Micro Devices, which in turn spawned more technology companies in the area. Shockley's decision to use silicon instead of germanium as the semiconductor material, coupled with the abundance of silicon semiconductor companies in the area, gave rise to the term "Silicon Valley" to describe the region surrounding Palo Alto. Caltech alumni have also held public office: Mustafa A.G. Abushagur (PhD 1984) served as Deputy Prime Minister and Prime Minister-elect of Libya, James Fletcher (PhD 1948) as the 4th and 7th Administrator of NASA, Steven Koonin (PhD 1972) as Under Secretary of Energy for Science, and Regina Dugan (PhD 1993) as the 19th director of DARPA. The 20th director of DARPA, Arati Prabhakar, is also a Caltech alumna (PhD 1984), as is Charles Elachi (PhD 1971), former director of the Jet Propulsion Laboratory. Arvind Virmani is a former Chief Economic Adviser to the Government of India. In 2013, President Obama announced the nomination of France Cordova (PhD 1979) as the director of the National Science Foundation and Ellen Williams (PhD 1982) as the director of ARPA-E. Faculty and staff Richard Feynman was among the best-known physicists associated with Caltech; he published The Feynman Lectures on Physics, an undergraduate physics text, as well as popular science books such as Six Easy Pieces for a general audience. His popularization of physics made him a public figure of science, although his Nobel Prize-winning work in quantum electrodynamics was already well established in the scientific community. Murray Gell-Mann, a Nobel-winning physicist, introduced a classification of hadrons and went on to postulate the existence of quarks, which are now accepted as part of the Standard Model. Long-time Caltech President Robert Andrews Millikan was the first to measure the charge of the electron, with his well-known oil-drop experiment, while Richard Chace Tolman is remembered for his contributions to cosmology and statistical mechanics. 2004 Nobel Prize in Physics winner H. David Politzer is a current professor at Caltech, as are astrophysicist and author Kip Thorne and eminent mathematician Barry Simon. Linus Pauling pioneered quantum chemistry and molecular biology and published his landmark work on the nature of the chemical bond in 1939. Seismologist Charles Richter, also an alumnus, developed the magnitude scale that bears his name, the Richter scale, for measuring the strength of earthquakes. One of the founders of the geochemistry department, Clair Patterson, was the first to accurately determine the age of the Earth, via uranium-lead isotope ratios in meteorites. In engineering, Theodore von Kármán made many key advances in aerodynamics, notably his work on supersonic and hypersonic airflow characterization. 
A repeating pattern of swirling vortices, the von Kármán vortex street, is named after him. Participants in von Kármán's GALCIT project included Frank Malina, who helped develop the WAC Corporal, the first U.S. rocket to reach the edge of space; Jack Parsons, a pioneer in the development of liquid and solid rocket fuels who designed the first castable composite-based rocket motor; and Qian Xuesen, who was dubbed the "Father of Chinese Rocketry". More recently, Michael Brown, a professor of planetary astronomy, discovered many trans-Neptunian objects, most notably the dwarf planet Eris, which prompted the International Astronomical Union to redefine the term "planet". David Baltimore, the Robert A. Millikan Professor of Biology, and Alice Huang, Senior Faculty Associate in Biology, served as the presidents of AAAS from 2007 to 2008 and 2010 to 2011, respectively. Thirty-three percent of the faculty are members of the National Academy of Sciences or Engineering and/or fellows of the American Academy of Arts and Sciences. This is the highest percentage of any faculty in the country with the exception of the graduate institution Rockefeller University. The average salary at Caltech is $111,300 for assistant professors, $121,300 for associate professors, and $172,800 for full professors. Caltech faculty are active in applied physics, astronomy and astrophysics, biology, biochemistry, biological engineering, chemical engineering, computer science, geology, mechanical engineering, and physics. Presidents
James Augustin Brown Scherer (1908–1920), president of Throop College of Technology before the name change
Robert A. Millikan (1921–1945), experimental physicist, Nobel laureate in Physics for 1923 (his official title was "Chairman of the Executive Council")
Lee A. DuBridge (1946–1969), experimental physicist (first to officially hold the title of President)
Harold Brown (1969–1977), physicist and public servant (left Caltech to serve as United States Secretary of Defense in the administration of Jimmy Carter)
Robert F. Christy (1977–1978), astrophysicist (acting president)
Marvin L. Goldberger (1978–1987), theoretical physicist (left to serve as Director of the Institute for Advanced Study)
Thomas E. Everhart (1987–1997), experimental physicist
David Baltimore (1997–2006), molecular biologist, Nobel laureate in Physiology or Medicine for 1975
Jean-Lou Chameau (2006–2013), civil engineer and educational administrator (left to serve as president of King Abdullah University of Science and Technology)
Thomas F. Rosenbaum (2014–), condensed matter physicist and administrator
Caltech startups Over the years, Caltech has actively promoted the commercialization of technologies developed within its walls. Through its Office of Technology Transfer & Corporate Partnerships, the institute has transferred numerous technologies in a wide variety of fields, such as photovoltaics, radio-frequency identification (RFID), semiconductors, hyperspectral imaging, electronic devices, protein design, solid-state amplifiers, and many more. Companies such as Quora, Contour Energy Systems, Impinj, Fulcrum Microsystems, Nanosys, Inc., Photon etc., Xencor, and Wavestream Wireless have emerged from Caltech. In media and popular culture Caltech has appeared in many works of popular culture, both as itself and in disguised form. On television, it played a prominent role and was the workplace of all four male lead characters and one female lead character in the sitcom The Big Bang Theory. 
Caltech is also the inspiration, and frequent film location, for the California Institute of Science in Numb3rs. On film, the Pacific Tech of The War of the Worlds and Real Genius is based on Caltech. In nonfiction, two 2007 documentaries examine aspects of Caltech: Curious, its researchers, and Quantum Hoops, its men's basketball team. Caltech is also prominently featured in many comics and television series by Marvel Entertainment. In Marvel Comics, the university serves as the alma mater of Hulk, Mister Fantastic, Bill Foster (Black Goliath), and Madman. In the Marvel Cinematic Universe, Bruno Carrelli (Kamala Khan's best friend and love interest) attends Caltech in the miniseries Ms. Marvel. Given its Los Angeles-area location, the grounds of the Institute are often host to short scenes in movies and television. The Athenaeum dining club appears in the Beverly Hills Cop series, The X-Files, True Romance, and The West Wing. See also Engineering education US-China University Presidents Roundtable Notes References External links Official athletics website 1891 establishments in California Buildings and structures in Pasadena, California Education in Pasadena, California Educational institutions established in 1891 Engineering universities and colleges in California Private universities and colleges in California San Gabriel Valley Schools accredited by the Western Association of Schools and Colleges Science and technology in Greater Los Angeles Technological universities in the United States Universities and colleges in Los Angeles County, California Need-blind educational institutions
5790
https://en.wikipedia.org/wiki/Carlo%20Goldoni
Carlo Goldoni
Carlo Osvaldo Goldoni (, also , ; 25 February 1707 – 6 February 1793) was an Italian playwright and librettist from the Republic of Venice. His works include some of Italy's most famous and best-loved plays. Audiences have admired the plays of Goldoni for their ingenious mix of wit and honesty. His plays offered his contemporaries images of themselves, often dramatizing the lives, values, and conflicts of the emerging middle classes. Though he wrote in French and Italian, his plays make rich use of the Venetian language, regional vernacular, and colloquialisms. Goldoni also wrote under the pen name and title Polisseno Fegeio, Pastor Arcade, which he claimed in his memoirs the "Arcadians of Rome" bestowed on him. Biography Memoirs There is an abundance of autobiographical information on Goldoni, most of which comes from the introductions to his plays and from his Memoirs. However, these memoirs are known to contain many errors of fact, especially about his earlier years. In these memoirs, he paints himself as a born comedian, careless, light-hearted and with a happy temperament, proof against all strokes of fate, yet thoroughly respectable and honorable. Early life and studies Goldoni was born in Venice in 1707, the son of Margherita Salvioni (or Saioni) and Giulio Goldoni. In his memoirs, Goldoni describes his father as a physician, and claims that he was introduced to theatre by his grandfather Carlo Alessandro Goldoni. In reality, it seems that Giulio was an apothecary; as for the grandfather, he had died four years before Carlo's birth. In any case, Goldoni was deeply interested in theatre from his earliest years, and all attempts to direct his activity into other channels were of no avail; his toys were puppets, and his books, plays. His father placed him under the care of the philosopher Caldini at Rimini but the youth soon ran away with a company of strolling players and returned to Venice. In 1723 his father matriculated him into the stern Collegio Ghislieri in Pavia, which imposed the tonsure and monastic habits on its students. However, he relates in his Memoirs that a considerable part of his time was spent in reading Greek and Latin comedies. He had already begun writing at this time and, in his third year, he composed a libellous poem (Il colosso) in which he ridiculed the daughters of certain Pavian families. As a result of that incident (and/or of a visit paid with some schoolmates to a local brothel) he was expelled from the school and had to leave the city (1725). He studied law at Udine, and eventually took his degree at University of Modena. He was employed as a law clerk at Chioggia and Feltre, after which he returned to his native city and began practicing. Educated as a lawyer, and holding lucrative positions as secretary and counsellor, he seemed, indeed, at one time to have settled down to the practice of law, but following an unexpected summons to Venice, after an absence of several years, he changed his career, and thenceforth he devoted himself to writing plays and managing theatres. His father died in 1731. In 1732, to avoid an unwanted marriage, he left the town for Milan and then for Verona where the theatre manager Giuseppe Imer helped him on his way to becoming a comical poet as well as introducing him to his future wife, Nicoletta Conio. Goldoni returned with her to Venice, where he stayed until 1743. Theatrical career Goldoni entered the Italian theatre scene with a tragedy, Amalasunta, produced in Milan. The play was a critical and financial failure. 
Submitting it to Count Prata, director of the opera, he was told that his piece "was composed with due regard for the rules of Aristotle and Horace, but not according to those laid down for the Italian drama." "In France", continued the count, "you can try to please the public, but here in Italy it is the actors and actresses whom you must consult, as well as the composer of the music and the stage decorators. Everything must be done according to a certain form which I will explain to you." Goldoni thanked his critic, went back to his inn and ordered a fire, into which he threw the manuscript of his Amalasunta. His next play, Belisario, written in 1734, was more successful, though of its success he afterward professed himself ashamed. During this period he also wrote librettos for opera seria and served for a time as literary director of the San Giovanni Grisostomo, Venice's most distinguished opera house. He wrote other tragedies for a time, but he was not long in discovering that his bent was for comedy. He had come to realize that the Italian stage needed reforming; adopting Molière as his model, he went to work in earnest and in 1738 produced his first real comedy, L'uomo di mondo ("The Man of the World"). During his many wanderings and adventures in Italy, he was constantly at work, and when, at Livorno, he became acquainted with the manager Medebac, he determined to pursue the profession of playwriting in order to make a living. He was employed by Medebac to write plays for his theatre in Venice. He worked for other managers and produced during his stay in that city some of his most characteristic works. He also wrote Momolo Cortesan in 1738. By 1743, he had perfected his hybrid style of playwriting (combining the model of Molière with the strengths of Commedia dell'arte and his own wit and sincerity). This style was typified in La Donna di garbo, the first Italian comedy of its kind. After 1748, Goldoni collaborated with the composer Baldassare Galuppi, making significant contributions to the new form of 'opera buffa'. Galuppi composed the score for more than twenty of Goldoni's librettos. As with his comedies, Goldoni's opera buffa works integrate elements of the Commedia dell'arte with recognisable local and middle-class realities. His operatic works include two of the most successful musical comedies of the eighteenth century, Il filosofo di campagna (The Country Philosopher), set by Galuppi (1752), and La buona figliuola (The Good Girl), set by Niccolò Piccinni (1760). In 1753, following his return from Bologna, he defected to the Teatro San Luca of the Vendramin family, where he presented most of his plays until 1762. Move to France and death In 1757, he engaged in a bitter dispute with playwright Carlo Gozzi, which left him utterly disgusted with the tastes of his countrymen; so much so that in 1761 he moved to Paris, where he received a position at court and was put in charge of the Théâtre-Italien. He spent the rest of his life in France, composing most of his plays in French and writing his memoirs in that language. Among the plays which he wrote in French, the most successful was Le bourru bienfaisant, dedicated to Marie Adélaïde, a daughter of Louis XV and aunt to the dauphin, the future Louis XVI of France. It premiered on 4 February 1771, almost nine months after the dauphin's marriage to Marie Antoinette. Goldoni enjoyed considerable popularity in France; in 1769, when he retired to Versailles, the King gave him a pension. He lost this pension after the French Revolution. 
The Convention eventually voted to restore his pension the day after his death. It was restored to his widow, at the pleading of the poet André Chénier; "She is old", he urged, "she is seventy-six, and her husband has left her no heritage save his illustrious name, his virtues and his poverty." Goldoni's impact on Italian theatre In his Memoirs Goldoni amply discusses the state of Italian comedy when he began writing. At that time, Italian comedy revolved around the conventionality of the Commedia dell'arte, or improvised comedy. Goldoni took to himself the task of superseding the comedy of masks and the comedy of intrigue by representations of actual life and manners through the characters and their behaviors. He maintained that Italian life and manners were susceptible of artistic treatment such as had not been given them before. His works are a lasting monument to the changes that he initiated: a dramatic revolution that had been attempted but not achieved before. Goldoni's importance lay in providing good examples rather than precepts. Goldoni says that he took for his models the plays of Molière and that whenever a piece of his own succeeded he whispered to himself: "Good, but not yet Molière." Goldoni's plays are gentler and more optimistic in tone than Molière's. It was this very success that was the object of harsh critiques by Carlo Gozzi, who accused Goldoni of having deprived the Italian theatre of the charms of poetry and imagination. The great success of Gozzi's fairy dramas so irritated Goldoni that it led to his self-exile to France. Goldoni gave to his country a classical form, which, though it has since been cultivated, has yet to be cultivated by a master. Themes Goldoni's plays that were written while he was still in Italy ignore religious and ecclesiastical subjects. This may be surprising, considering his staunch Catholic upbringing. No thoughts are expressed about death or repentance in his memoirs or in his comedies. After his move to France, his position became clearer, as his plays took on a clear anti-clerical tone and often satirized the hypocrisy of monks and of the Church. Goldoni was inspired by his love of humanity and the admiration he had for his fellow men. He wrote, and was obsessed with, the relationships that humans establish with one another, their cities and homes, the Humanist movement, and the study of philosophy. The moral and civil values that Goldoni promotes in his plays are those of rationality, civility, humanism, the importance of the rising middle-class, a progressive stance to state affairs, honor and honesty. Goldoni had a dislike for arrogance, intolerance and the abuse of power. Goldoni's main characters are no abstract examples of human virtue, nor monstrous examples of human vice. They occupy the middle ground of human temperament. Goldoni maintains an acute sensibility for the differences in social classes between his characters as well as environmental and generational changes. Goldoni pokes fun at the arrogant nobility and the pauper who lacks dignity. Venetian and Tuscan As in other theatrical works of the time and place, the characters in Goldoni's Italian comedies spoke originally either the literary Tuscan variety (which became modern Italian) or the Venetian dialect, depending on their station in life. However, in some printed editions of his plays he often turned the Venetian texts into Tuscan, too. 
Goldoni in popular culture One of his best known works is the comic play Servant of Two Masters, which has been translated and adapted internationally numerous times. In 1966 it was adapted into an opera buffa by the American composer Vittorio Giannini. In 2011, Richard Bean adapted the play for the National Theatre of Great Britain as One Man, Two Guvnors. Its popularity led to a transfer to the West End and in 2012 to Broadway. The film Carlo Goldoni – Venice, Grand Theatre of the World, directed by Alessandro Bettero, was released in 2007 and is available in English, Italian, French, and Japanese. Selected works The following is a small sampling of Goldoni's enormous output. Tragedies Rosmonda (1734) Griselda (1734) Tragicomedies Belisario (1734) Don Giovanni Tenorio o sia Il dissoluto, "The Dissolute" (1735) Rinaldo di Montalbano (1736) Comedies Il servitore di due padroni, "The Servant of Two Masters" (1745; now often retitled Arlecchino servitore di due padroni, "Harlequin Servant of Two Masters") I due gemelli veneziani, "The Two Venetian Twins" (1747) La vedova scaltra, "The Shrewd Widow" (1748) La putta onorata, "The Honorable Maid" (1749) Il cavaliere e la dama, "The Gentleman and the Lady" (1749) La famiglia dell'antiquario, "The Antiquarian's Family" (1750) Il teatro comico, "The Comical Theatre" (1750–1751) Il bugiardo, "The Liar" (1750–1751) Il vero amico, "The True Friend" (1750) translated by Anna Cuffaro I pettegolezzi delle donne, "Women's Gossip" (1750–1751) La locandiera, "The Mistress of the Inn" (1751) Il feudatario, "The Feudal Lord" (1752) Gl'innamorati, "The Lovers" (1759) I rusteghi, "The Boors" (1760) Le baruffe chiozzotte, "The Chioggia Scuffles" (1762) Gli amori di Zelinda e Lindoro, "The Love of Zelinda and Lindoro" (1764) Opera seria libretti Amalasunta (1732) Gustavo primo, re di Svezia (c. 1738) Oronte, re de' Sciti (1740) Statira (c. 1740) Opera buffa libretti La contessina (The Young Countess) by Maccari (1743) L'Arcadia in Brenta (The Arcadia in Brenta) by Galuppi (1749) Il mondo della luna (The World on the Moon), set to music by Galuppi (1750), Haydn (1777), Paisiello (1782) and other composers. Il filosofo di campagna (The Country Philosopher) by Galuppi (1754) Il mercato di Malmantile (The Malmantile Market) by Fischietti (1757) Buovo d'Antona, set to music by Tommaso Traetta (1758, incorrectly recorded as 1750 in Zatta's edition) La buona figliuola (The Good Girl) by Niccolò Piccinni (1760) Lo speziale (The Apothecary) by Joseph Haydn (1768) La finta semplice (The Fake Innocent) by Wolfgang Amadeus Mozart (1769) Le pescatrici (The Fisherwomen) by Haydn (1770), Florian Leopold Gassmann (1771) Intermezzo libretti Le donne vendicate, "The Revenge of the Women" (1751) Cantatas and serenades La ninfa saggia, "The Wise Nymph" (17??) Gli amanti felici, "The Happy Lovers" (17??) Poetry Il colosso, a satire against the girls of Pavia, which led to Goldoni being expelled from Collegio Ghislieri (1725) Il quaresimale in epilogo (1725–1726) Books Nuovo teatro comico, "New Comic Theater", plays. Pitteri, Venice (1757) Mémoires, "Memoirs". Paris (1787) Goldoni's collected works. Zatta, Venice (1788–1795) Selected translations of Goldoni's works Il vero amico, "The True Friend" translated by Anna Cuffaro. Publisher: Sparkling Books. Archifanfaro translated by W. H. Auden with an introduction by Michael Andre in Unmuzzled OX. Notes References Sources Bates, Alfred, editor (1903). "Goldoni", vol. 5, in The Drama: Its History, Literature and Influence on Civilization. 
London/New York: Smart and Stanley. Richards, Kenneth (1995). "Goldoni, Carlo", pp. 432–434, in The Cambridge Guide to Theatre, second edition, edited by Martin Banham. Cambridge University Press. External links www.carlogoldoni.net – the English website dedicated to Goldoni www.sparklingbooks.com for bilingual edition English/Italian of The True Friend/Il vero amico Webpage devoted to Carlo Goldoni (lletrA (UOC), Catalan Literature Online) Detailed biography, prepared for the 200th anniversary of his death (1993) Gli Innamorati La Locandiera La Avventura Della Villeggiatura Works by Goldoni at Progetto Manuzio Works by Goldoni (text, concordances and frequency list) Venice Carnival 2007, Tricentenary of Carlo Goldoni A riotous delight of commedia dell'arte Carlo Goldoni – biography in the Catholic Encyclopedia
5793
https://en.wikipedia.org/wiki/Cumulative%20distribution%20function
Cumulative distribution function
In probability theory and statistics, the cumulative distribution function (CDF) of a real-valued random variable X, or just distribution function of X, evaluated at x, is the probability that X will take a value less than or equal to x. Every probability distribution supported on the real numbers, discrete or "mixed" as well as continuous, is uniquely identified by a right-continuous monotone increasing function (a càdlàg function) F : ℝ → [0, 1] satisfying F(x) → 0 as x → −∞ and F(x) → 1 as x → +∞. In the case of a scalar continuous distribution, it gives the area under the probability density function from minus infinity to x. Cumulative distribution functions are also used to specify the distribution of multivariate random variables. Definition The cumulative distribution function of a real-valued random variable X is the function given by F_X(x) = P(X ≤ x), where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x. The probability that X lies in the semi-closed interval (a, b], where a < b, is therefore P(a < X ≤ b) = F_X(b) − F_X(a). In the definition above, the "less than or equal to" sign, "≤", is a convention, not a universally used one (e.g. Hungarian literature uses "<"), but the distinction is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Paul Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation. When treating several random variables X, Y, ..., the corresponding letters are used as subscripts, while, if treating only one, the subscript is usually omitted. It is conventional to use a capital F for a cumulative distribution function, in contrast to the lower-case f used for probability density functions and probability mass functions. This applies when discussing general distributions: some specific distributions have their own conventional notation, for example the normal distribution uses Φ and φ instead of F and f, respectively. The probability density function of a continuous random variable can be determined from the cumulative distribution function by differentiating, using the Fundamental Theorem of Calculus; i.e. given F(x), f(x) = dF(x)/dx, as long as the derivative exists. The CDF of a continuous random variable X can be expressed as the integral of its probability density function f_X as follows: F_X(x) = ∫_{−∞}^{x} f_X(t) dt. In the case of a random variable X whose distribution has a discrete component at a value b, P(X = b) = F_X(b) − F_X(b⁻), where F_X(b⁻) denotes the limit of F_X(x) as x approaches b from below. If F_X is continuous at b, this equals zero and there is no discrete component at b. Properties Every cumulative distribution function F_X is non-decreasing and right-continuous, which makes it a càdlàg function. Furthermore, F_X(x) → 0 as x → −∞ and F_X(x) → 1 as x → +∞. Every function with these four properties is a CDF, i.e., for every such function, a random variable can be defined such that the function is the cumulative distribution function of that random variable. If X is a purely discrete random variable, then it attains values x_1, x_2, ... with probability p_i = P(X = x_i), and the CDF of X will be discontinuous at the points x_i: F_X(x) = P(X ≤ x) = Σ_{x_i ≤ x} P(X = x_i) = Σ_{x_i ≤ x} p_i. If the CDF F_X of a real-valued random variable X is continuous, then X is a continuous random variable; if furthermore F_X is absolutely continuous, then there exists a Lebesgue-integrable function f_X such that F_X(b) − F_X(a) = P(a < X ≤ b) = ∫_a^b f_X(x) dx for all real numbers a and b. The function f_X is equal to the derivative of F_X almost everywhere, and it is called the probability density function of the distribution of X. If X has finite L1-norm, that is, the expectation of |X| is finite, then the expectation is given by the Riemann–Stieltjes integral E[X] = ∫_{−∞}^{∞} t dF_X(t), and for any x ≥ 0, x(1 − F_X(x)) ≤ ∫_x^{∞} t dF_X(t) as well as x F_X(−x) ≤ ∫_{−∞}^{−x} (−t) dF_X(t). 
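To make these definitions concrete, here is a minimal Python sketch, an illustration only: it assumes NumPy and SciPy are installed, and the standard normal distribution and the interval (−1, 2] are arbitrary illustrative choices. It checks numerically that P(a < X ≤ b) = F_X(b) − F_X(a) and that the density is recovered as the derivative of the CDF.

import numpy as np
from scipy import stats

dist = stats.norm(loc=0.0, scale=1.0)   # illustrative choice: the standard normal

# P(a < X <= b) = F(b) - F(a)
a, b = -1.0, 2.0
prob_interval = dist.cdf(b) - dist.cdf(a)

# Monte Carlo check of the same probability from a large sample of N(0, 1) draws.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200_000)
mc_estimate = np.mean((sample > a) & (sample <= b))

# The density is recovered (approximately) as a numerical derivative of the CDF.
x, h = 0.5, 1e-5
pdf_from_cdf = (dist.cdf(x + h) - dist.cdf(x - h)) / (2 * h)

print(prob_interval, mc_estimate)    # both close to 0.8186
print(pdf_from_cdf, dist.pdf(x))     # both close to 0.3521

The same pattern works for any distribution object that exposes a cdf method.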
In particular, we have x F_X(x) → 0 as x → −∞ and x (1 − F_X(x)) → 0 as x → +∞. Examples As an example, suppose X is uniformly distributed on the unit interval [0, 1]. Then the CDF of X is given by F_X(x) = 0 for x < 0, F_X(x) = x for 0 ≤ x ≤ 1, and F_X(x) = 1 for x > 1. Suppose instead that X takes only the discrete values 0 and 1, with equal probability. Then the CDF of X is given by F_X(x) = 0 for x < 0, F_X(x) = 1/2 for 0 ≤ x < 1, and F_X(x) = 1 for x ≥ 1. Suppose X is exponentially distributed. Then the CDF of X is given by F_X(x; λ) = 1 − e^(−λx) for x ≥ 0, and 0 for x < 0. Here λ > 0 is the parameter of the distribution, often called the rate parameter. Suppose X is normally distributed. Then the CDF of X is given by F(x; μ, σ) = (1/(σ√(2π))) ∫_{−∞}^{x} exp(−(t − μ)²/(2σ²)) dt. Here the parameter μ is the mean or expectation of the distribution, and σ is its standard deviation. A table of the CDF of the standard normal distribution is often used in statistical applications, where it is named the standard normal table, the unit normal table, or the Z table. Suppose X is binomially distributed. Then the CDF of X is given by F(k; n, p) = P(X ≤ k) = Σ_{i=0}^{⌊k⌋} (n choose i) p^i (1 − p)^(n−i). Here p is the probability of success, the distribution is that of the number of successes in a sequence of n independent experiments, and ⌊k⌋ is the "floor" under k, i.e. the greatest integer less than or equal to k. Derived functions Complementary cumulative distribution function (tail distribution) Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf) or simply the tail distribution or exceedance, and is defined as F̄_X(x) = P(X > x) = 1 − F_X(x). This has applications in statistical hypothesis testing, for example, because the one-sided p-value is the probability of observing a test statistic at least as extreme as the one observed. Thus, provided that the test statistic, T, has a continuous distribution, the one-sided p-value for an observed value t of the test statistic is simply given by the ccdf: p = P(T ≥ t) = 1 − F_T(t). In survival analysis, F̄_X(x) is called the survival function and denoted S(x), while the term reliability function is common in engineering. Properties For a non-negative continuous random variable X having an expectation, Markov's inequality states that F̄_X(x) ≤ E[X]/x. As x → ∞, F̄_X(x) → 0, and in fact F̄_X(x) = o(1/x) provided that E[X] is finite. Proof: Assuming X has a density function f_X, for any c > 0, E[X] = ∫_0^∞ x f_X(x) dx ≥ ∫_c^∞ x f_X(x) dx ≥ c ∫_c^∞ f_X(x) dx = c F̄_X(c). Then, on recognizing that c F̄_X(c) ≤ ∫_c^∞ x f_X(x) dx, and that this right-hand side tends to 0 as c → ∞ because it is the tail of the convergent integral E[X], it follows after rearranging terms that F̄_X(c) = o(1/c), as claimed. For a random variable X having an expectation, E[X] = ∫_0^∞ F̄_X(x) dx − ∫_{−∞}^0 F_X(x) dx, and for a non-negative random variable the second term is 0. If the random variable can only take non-negative integer values, this is equivalent to E[X] = Σ_{n=0}^∞ P(X > n). Folded cumulative distribution While the plot of a cumulative distribution often has an S-like shape, an alternative illustration is the folded cumulative distribution or mountain plot, which folds the top half of the graph over, that is F_folded(x) = F(x)·1{F(x) ≤ 0.5} + (1 − F(x))·1{F(x) > 0.5}, where 1{·} denotes the indicator function and the second summand is the survivor function, thus using two scales, one for the upslope and another for the downslope. This form of illustration emphasises the median, dispersion (specifically, the mean absolute deviation from the median) and skewness of the distribution or of the empirical results. Inverse distribution function (quantile function) If the CDF F is strictly increasing and continuous then F⁻¹(p), for p in [0, 1], is the unique real number x such that F(x) = p. This defines the inverse distribution function or quantile function. Some distributions do not have a unique inverse (for example if f_X(x) = 0 for all a < x < b, causing F_X to be constant on that interval). In this case, one may use the generalized inverse distribution function, which is defined as F⁻¹(p) = inf { x ∈ ℝ : F(x) ≥ p } for p in (0, 1]. Example 1: The median is F⁻¹(0.5). Example 2: Put τ = F⁻¹(0.95). Then we call τ the 95th percentile. Some useful properties of the inverse cdf (which are also preserved in the definition of the generalized inverse distribution function) are: F⁻¹ is nondecreasing; F⁻¹(p) ≤ x if and only if p ≤ F(x); and if Y has a uniform distribution on [0, 1] then F⁻¹(Y) is distributed as F. 
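The closed-form CDFs quoted above, the complementary CDF (survival function) and the quantile function can all be checked against a statistical library. The sketch below is illustrative only; it assumes SciPy is available, and the parameter values λ = 2, n = 10, p = 0.3 are arbitrary.

import math
from scipy import stats

# Exponential CDF: F(x; lambda) = 1 - exp(-lambda * x) for x >= 0.
lam, x = 2.0, 0.7
exp_cdf_closed_form = 1 - math.exp(-lam * x)
exp_dist = stats.expon(scale=1 / lam)        # SciPy parametrizes by scale = 1/lambda
exp_cdf_scipy = exp_dist.cdf(x)
exp_ccdf = exp_dist.sf(x)                    # survival function, equals 1 - CDF

# Binomial CDF: F(k; n, p) = sum over i <= floor(k) of C(n, i) p^i (1 - p)^(n - i).
n, p, k = 10, 0.3, 4
binom_cdf_closed_form = sum(
    math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(math.floor(k) + 1)
)
binom_cdf_scipy = stats.binom(n, p).cdf(k)

# Quantile function: F^(-1)(0.5) is the median, F^(-1)(0.95) the 95th percentile.
normal = stats.norm(loc=0.0, scale=1.0)
median, pctl95 = normal.ppf(0.5), normal.ppf(0.95)

print(exp_cdf_closed_form, exp_cdf_scipy, exp_ccdf)   # ~0.7534, ~0.7534, ~0.2466
print(binom_cdf_closed_form, binom_cdf_scipy)         # both ~0.8497
print(median, pctl95)                                 # 0.0 and ~1.6449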
This is used in random number generation using the inverse transform sampling method. If (X_α) is a collection of independent F-distributed random variables defined on the same sample space, then there exist random variables (Y_α) such that Y_α is uniformly distributed on [0, 1] and F⁻¹(Y_α) = X_α with probability 1 for all α. The inverse of the cdf can be used to translate results obtained for the uniform distribution to other distributions. Empirical distribution function The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function. Multivariate case Definition for two random variables When dealing simultaneously with more than one random variable the joint cumulative distribution function can also be defined. For example, for a pair of random variables X, Y, the joint CDF is given by F_{X,Y}(x, y) = P(X ≤ x, Y ≤ y), where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x and that Y takes on a value less than or equal to y. Example of joint cumulative distribution function: for two continuous variables X and Y with joint density f, P(a < X < b and c < Y < d) = ∫_a^b ∫_c^d f(x, y) dy dx. For two discrete random variables, it is beneficial to generate a table of probabilities and address the cumulative probability for each potential range of X and Y: given the joint probability mass function in tabular form, the joint cumulative distribution function may be constructed, also in tabular form, by summing the probabilities over all pairs (x_i, y_j) with x_i ≤ x and y_j ≤ y. Definition for more than two random variables For N random variables X_1, ..., X_N, the joint CDF is given by F_{X_1,...,X_N}(x_1, ..., x_N) = P(X_1 ≤ x_1, ..., X_N ≤ x_N). Interpreting the N random variables as a random vector X = (X_1, ..., X_N)^T yields the shorter notation F_X(x) = P(X_1 ≤ x_1, ..., X_N ≤ x_N). Properties Every multivariate CDF is: monotonically non-decreasing for each of its variables; right-continuous in each of its variables; bounded, with 0 ≤ F(x_1, ..., x_n) ≤ 1; and such that F(x_1, ..., x_n) → 1 when all arguments tend to +∞, while F(x_1, ..., x_n) → 0 when any single argument tends to −∞. Not every function satisfying the above four properties is a multivariate CDF, unlike in the single-dimension case. For example, let F(x, y) = 0 for x < 0 or x + y < 1 or y < 0, and let F(x, y) = 1 otherwise. It is easy to see that the above conditions are met, and yet F is not a CDF since if it were, then P(1/3 < X ≤ 1, 1/3 < Y ≤ 1) = −1, as explained below. The probability that a point belongs to a hyperrectangle is analogous to the 1-dimensional case: for a < b and c < d, P(a < X_1 ≤ b, c < X_2 ≤ d) = F_{X_1,X_2}(b, d) − F_{X_1,X_2}(a, d) − F_{X_1,X_2}(b, c) + F_{X_1,X_2}(a, c) ≥ 0. Complex case Complex random variable The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form P(Z ≤ 1 + 2i) make no sense. However, expressions of the form P(Re(Z) ≤ 1, Im(Z) ≤ 3) make sense. Therefore, we define the cumulative distribution of a complex random variable via the joint distribution of its real and imaginary parts: F_Z(z) = F_{Re(Z),Im(Z)}(Re(z), Im(z)) = P(Re(Z) ≤ Re(z), Im(Z) ≤ Im(z)). Complex random vector Generalization of this definition yields, as definition of the CDF of a complex random vector Z = (Z_1, ..., Z_N)^T, F_Z(z) = P(Re(Z_1) ≤ Re(z_1), Im(Z_1) ≤ Im(z_1), ..., Re(Z_N) ≤ Re(z_N), Im(Z_N) ≤ Im(z_N)). Use in statistical analysis The concept of the cumulative distribution function makes an explicit appearance in statistical analysis in two (similar) ways. Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The empirical distribution function is a formal direct estimate of the cumulative distribution function for which simple statistical properties can be derived and which can form the basis of various statistical hypothesis tests. 
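Inverse transform sampling and the empirical distribution function can both be demonstrated in a few lines. The sketch below is only illustrative; it assumes NumPy and SciPy, and the exponential target, sample size and seed are arbitrary choices. Uniform variates are mapped through the quantile function, and the resulting empirical CDF is compared with the true CDF on a grid.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
lam = 1.5

# Inverse transform sampling: if U ~ Uniform(0, 1), then F^(-1)(U) has CDF F.
# For the exponential distribution, F^(-1)(u) = -ln(1 - u) / lambda.
u = rng.uniform(size=100_000)
samples = -np.log1p(-u) / lam

# Empirical CDF evaluated on a grid, compared with the true exponential CDF.
grid = np.linspace(0.0, 4.0, 9)
ecdf = np.array([np.mean(samples <= t) for t in grid])
true_cdf = stats.expon(scale=1 / lam).cdf(grid)

print(np.max(np.abs(ecdf - true_cdf)))   # small, and shrinks as the sample size grows

Only the quantile function of the target distribution is needed here, which is why inverse transform sampling is a common choice for distributions whose CDF is easy to invert.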
Such tests can assess whether there is evidence against a sample of data having arisen from a given distribution, or evidence against two samples of data having arisen from the same (unknown) population distribution. Kolmogorov–Smirnov and Kuiper's tests The Kolmogorov–Smirnov test is based on cumulative distribution functions and can be used to test whether two empirical distributions are different or whether an empirical distribution differs from an ideal distribution. The closely related Kuiper's test is useful if the domain of the distribution is cyclic, as in day of the week. For instance, Kuiper's test might be used to see whether the number of tornadoes varies during the year or whether sales of a product vary by day of the week or day of the month. See also Descriptive statistics Distribution fitting Ogive (statistics) Modified half-normal distribution, a distribution supported on the positive half-line whose probability density function is expressed in terms of the Fox–Wright Psi function
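As a closing illustration of such CDF-based testing, the following minimal sketch (assuming NumPy and SciPy; the sample sizes and seed are arbitrary) runs a one-sample Kolmogorov–Smirnov test against a reference CDF and a two-sample test comparing two datasets.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(loc=0.0, scale=1.0, size=500)       # data that really are N(0, 1)
y = rng.exponential(scale=1.0, size=500)           # data from a different distribution

# One-sample test: compare the empirical CDF of x with the standard normal CDF.
res_x = stats.kstest(x, stats.norm(loc=0.0, scale=1.0).cdf)

# Two-sample test: did x and y arise from the same (unknown) distribution?
res_xy = stats.ks_2samp(x, y)

print(res_x.statistic, res_x.pvalue)    # expect a non-small p-value: x was drawn from N(0, 1)
print(res_xy.statistic, res_xy.pvalue)  # expect a tiny p-value: the two samples clearly differ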