5225
https://en.wikipedia.org/wiki/Code
Code
In communications and information processing, a code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is the invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time. The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or Spanish. One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. An example is semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent. Theory In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings. Before giving a mathematically precise definition, here is a brief example. The mapping {a ↦ 0, b ↦ 01, c ↦ 011} is a code whose source alphabet is the set {a, b, c} and whose target alphabet is the set {0, 1}. Using the extension of the code, the encoded string 0011001 can be grouped into codewords as 0 011 0 01, and these in turn can be decoded to the sequence of source symbols acab. Using terms from formal language theory, the precise mathematical definition of this concept is as follows: let S and T be two finite sets, called the source and target alphabets, respectively. A code C : S → T* is a total function mapping each symbol from S to a sequence of symbols over T. The extension of C is a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols. Variable-length codes In this section, we consider codes that encode each source (clear text) character by a code word from some dictionary, and concatenation of such code words gives us an encoded string. Variable-length codes are especially useful when clear text characters have different probabilities; see also entropy encoding. A prefix code is a code with the "prefix property": there is no valid code word in the system that is a prefix (start) of any other valid code word in the set. Huffman coding is the best-known algorithm for deriving prefix codes. Prefix codes are widely referred to as "Huffman codes" even when the code was not produced by a Huffman algorithm. Other examples of prefix codes are country calling codes, the country and publisher parts of ISBNs, and the Secondary Synchronization Codes used in the UMTS WCDMA 3G Wireless Standard. Kraft's inequality characterizes the sets of codeword lengths that are possible in a prefix code.
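The worked example above can be made concrete in a few lines of code. The following is a minimal Python sketch, not part of the original article; it assumes the mapping a ↦ 0, b ↦ 01, c ↦ 011 described in the Theory section, and the function names are illustrative only. It encodes by concatenation (the extension of the code), decodes the string 0011001 back to acab, checks the prefix property, and evaluates the Kraft sum, which a uniquely decodable code must keep at or below 1.

```python
# Illustrative sketch only: the example code {a -> 0, b -> 01, c -> 011}
# from the Theory section, over the target alphabet {0, 1}.
code = {"a": "0", "b": "01", "c": "011"}
inverse = {codeword: symbol for symbol, codeword in code.items()}

def encode(message):
    # The extension of the code: concatenate the codeword of each source symbol.
    return "".join(code[symbol] for symbol in message)

def decode(bits):
    # In this particular code every codeword is a single '0' followed by a run
    # of '1's, so each '0' marks the start of a new codeword.
    symbols, word = [], ""
    for bit in bits:
        if bit == "0" and word:
            symbols.append(inverse[word])
            word = ""
        word += bit
    symbols.append(inverse[word])
    return "".join(symbols)

def is_prefix_code(codewords):
    # Prefix property: no codeword is a proper prefix of another codeword.
    return not any(u != v and v.startswith(u) for u in codewords for v in codewords)

kraft_sum = sum(2 ** -len(w) for w in code.values())

assert encode("acab") == "0011001"
assert decode("0011001") == "acab"
print(is_prefix_code(code.values()))  # False: "0" is a prefix of "01" and "011"
print(kraft_sum)                      # 0.875, consistent with Kraft's inequality
```

Note that this example code is uniquely decodable and satisfies Kraft's inequality even though it is not a prefix code; a prefix code over the same alphabets, such as {a ↦ 0, b ↦ 10, c ↦ 11}, would additionally allow decoding symbol by symbol as the bits arrive.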
Virtually any uniquely decodable one-to-many code, not necessarily a prefix one, must satisfy Kraft's inequality. Error-correcting codes Codes may also be used to represent data in a way more resistant to errors in transmission or storage. This so-called error-correcting code works by including carefully crafted redundancy with the stored (or transmitted) data. Examples include Hamming codes, Reed–Solomon, Reed–Muller, Walsh–Hadamard, Bose–Chaudhuri–Hocquenghem, Turbo, Golay, algebraic geometry codes, low-density parity-check codes, and space–time codes. Error-detecting codes can be optimised to detect burst errors or random errors. Examples Codes in communication used for brevity A cable code replaces words (e.g. ship or invoice) with shorter words, allowing the same information to be sent with fewer characters, more quickly, and less expensively. Codes can be used for brevity. When telegraph messages were the state of the art in rapid long-distance communication, elaborate systems of commercial codes that encoded complete phrases into single words (commonly five-letter groups) were developed, so that telegraphers became conversant with such "words" as BYOXO ("Are you trying to weasel out of our deal?"), LIOUY ("Why do you not answer my question?"), BMULD ("You're a skunk!"), or AYYLU ("Not clearly coded, repeat more clearly."). Code words were chosen for various reasons: length, pronounceability, etc. Meanings were chosen to fit perceived needs: commercial negotiations, military terms for military codes, diplomatic terms for diplomatic codes, any and all of the preceding for espionage codes. Codebooks and codebook publishers proliferated, including one run as a front for the American Black Chamber run by Herbert Yardley between the First and Second World Wars. The purpose of most of these codes was to save on cable costs. The use of data coding for data compression predates the computer era; an early example is the telegraph Morse code where more-frequently used characters have shorter representations. Techniques such as Huffman coding are now used by computer-based algorithms to compress large data files into a more compact form for storage or transmission. Character encodings Character encodings are representations of textual data. A given character encoding may be associated with a specific character set (the collection of characters which it can represent), though some character sets have multiple character encodings and vice versa. Character encodings may be broadly grouped according to the number of bytes required to represent a single character: there are single-byte encodings, multibyte (also called wide) encodings, and variable-width (also called variable-length) encodings. The earliest character encodings were single-byte, the best-known example of which is ASCII. ASCII remains in use today, for example in HTTP headers. However, single-byte encodings cannot model character sets with more than 256 characters. Scripts that require large character sets such as Chinese, Japanese and Korean must be represented with multibyte encodings. Early multibyte encodings were fixed-length, meaning that although each character was represented by more than one byte, all characters used the same number of bytes ("word length"), making them suitable for decoding with a lookup table. The final group, variable-width encodings, is a subset of multibyte encodings.
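To make the difference in byte widths concrete, here is a small Python sketch (an illustration added here, not part of the original article) that encodes a few characters in single-byte ASCII and in the variable-width UTF-8 encoding discussed below; the sample characters are arbitrary choices.

```python
# Illustrative sketch: byte widths under a single-byte and a variable-width encoding.
samples = ["A", "é", "€", "漢"]

for ch in samples:
    utf8_bytes = ch.encode("utf-8")        # variable width: 1 to 4 bytes per character
    try:
        ascii_bytes = ch.encode("ascii")   # single byte, but only 128 code points
        ascii_note = f"{len(ascii_bytes)} byte"
    except UnicodeEncodeError:
        ascii_note = "not representable"
    print(f"{ch!r}: UTF-8 uses {len(utf8_bytes)} byte(s) ({utf8_bytes.hex()}), ASCII: {ascii_note}")
```

Characters in the ASCII range keep their one-byte representations under UTF-8, which is the backward-compatibility property mentioned below; characters outside that range take two or more bytes.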
Variable-width encodings use more complex encoding and decoding logic to efficiently represent large character sets while keeping the representations of more commonly used characters shorter or maintaining backward compatibility properties. This group includes UTF-8, an encoding of the Unicode character set; UTF-8 is the most common encoding of text media on the Internet. Genetic code Biological organisms contain genetic material that is used to control their function and development. This is DNA, which contains units named genes from which messenger RNA is derived. This in turn produces proteins through a genetic code in which a series of triplets (codons) of four possible nucleotides can be translated into one of twenty possible amino acids. A sequence of codons results in a corresponding sequence of amino acids that form a protein molecule; a type of codon called a stop codon signals the end of the sequence. Gödel code In mathematics, a Gödel code was the basis for the proof of Gödel's incompleteness theorem. Here, the idea was to map mathematical notation to a natural number (using a Gödel numbering). Other There are codes using colors, like traffic lights, the color code employed to mark the nominal value of electrical resistors or that of the trashcans devoted to specific types of garbage (paper, glass, organic, etc.). In marketing, coupon codes can be used for a financial discount or rebate when purchasing a product from a (usually online) retailer. In military environments, specific sounds of the cornet are used for different purposes: to mark some moments of the day, to command the infantry on the battlefield, etc. Communication systems for sensory impairments, such as sign language for deaf people and braille for blind people, are based on movement or tactile codes. Musical scores are the most common way to encode music. Specific games have their own code systems to record the matches, e.g. chess notation. Cryptography In the history of cryptography, codes were once common for ensuring the confidentiality of communications, although ciphers are now used instead. Secret codes intended to obscure the real messages, ranging from serious (mainly espionage in military, diplomacy, business, etc.) to trivial (romance, games), can be any kind of imaginative encoding: flowers, game cards, clothes, fans, hats, melodies, birds, etc., in which the sole requirement is the pre-agreement on the meaning by both the sender and the receiver. Other examples Other examples of encoding include: Encoding (in cognition) - a basic perceptual process of interpreting incoming stimuli; technically speaking, it is a complex, multi-stage process of converting relatively objective sensory input (e.g., light, sound) into a subjectively meaningful experience. A content format - a specific encoding format for converting a specific type of data to information. Text encoding uses a markup language to tag the structure and other features of a text to facilitate processing by computers. (See also Text Encoding Initiative.) Semantics encoding of formal language A in formal language B is a method of representing all terms (e.g. programs or descriptions) of language A using language B. Data compression transforms a signal into a code optimized for transmission or storage, generally done with a codec. Neural encoding - the way in which information is represented in neurons. Memory encoding - the process of converting sensations into memories.
Television encoding: NTSC, PAL and SECAM Other examples of decoding include: Decoding (computer science) Decoding methods, methods in communication theory for decoding codewords sent over a noisy channel Digital signal processing, the study of signals in a digital representation and the processing methods of these signals Digital-to-analog converter, the use of an analog circuit for decoding operations Word decoding, the use of phonics to decipher print patterns and translate them into the sounds of language Codes and acronyms Acronyms and abbreviations can be considered codes, and in a sense, all languages and writing systems are codes for human thought. International Air Transport Association airport codes are three-letter codes used to designate airports and are used for bag tags. Station codes are similarly used on railways but are usually national, so the same code can be used for different stations if they are in different countries. Occasionally, a code word achieves an independent existence (and meaning) while the original equivalent phrase is forgotten or at least no longer has the precise meaning attributed to the code word. For example, '30' was widely used in journalism to mean "end of story", and has been used in other contexts to signify "the end". See also Asemic writing Cipher Code (semiotics) Equipment codes Quantum error correction Semiotics Universal language References Further reading Signal processing
5228
https://en.wikipedia.org/wiki/Cheirogaleidae
Cheirogaleidae
The Cheirogaleidae are the family of strepsirrhine primates containing the various dwarf and mouse lemurs. Like all other lemurs, cheirogaleids live exclusively on the island of Madagascar. Characteristics Cheirogaleids are smaller than the other lemurs and, in fact, they are the smallest primates. They have soft, long fur, colored grey-brown to reddish on top, with a generally brighter underbelly. Typically, they have small ears, large, close-set eyes, and long hind legs. Like all strepsirrhines, they have fine claws at the second toe of the hind legs. They grow to a size of only 13 to 28 cm, with a tail that is very long, sometimes up to one and a half times as long as the body. They weigh no more than 500 grams, with some species weighing as little as 60 grams. Dwarf and mouse lemurs are nocturnal and arboreal. They are excellent climbers and can also jump far, using their long tails for balance. When on the ground (a rare occurrence), they move by hopping on their hind legs. They spend the day in tree hollows or leaf nests. Cheirogaleids are typically solitary, but sometimes live together in pairs. Their eyes possess a tapetum lucidum, a light-reflecting layer that improves their night vision. Some species, such as the lesser dwarf lemur, store fat at the hind legs and the base of the tail, and hibernate. Unlike lemurids, they have long upper incisors, although they do have the comb-like teeth typical of all strepsirrhines. They have the dental formula: Cheirogaleids are omnivores, eating fruits, flowers and leaves (and sometimes nectar), as well as insects, spiders, and small vertebrates. The females usually have three pairs of nipples. After a meager 60-day gestation, they will bear two to four (usually two or three) young. After five to six weeks, the young are weaned and become fully mature near the end of their first year or sometime in their second year, depending on the species. In human care, they can live for up to 15 years, although their life expectancy in the wild is probably significantly shorter. Classification The five genera of cheirogaleids contain 42 species. Infraorder Lemuriformes Family Cheirogaleidae Genus Cheirogaleus: dwarf lemurs Montagne d'Ambre dwarf lemur, Cheirogaleus andysabini Furry-eared dwarf lemur, Cheirogaleus crossleyi Groves' dwarf lemur, Cheirogaleus grovesi Lavasoa dwarf lemur, Cheirogaleus lavasoensis Greater dwarf lemur, Cheirogaleus major Fat-tailed dwarf lemur, Cheirogaleus medius Lesser iron-gray dwarf lemur, Cheirogaleus minusculus Ankarana dwarf lemur, Cheirogaleus shethi Sibree's dwarf lemur, Cheirogaleus sibreei Thomas' dwarf lemur, Cheirogaleus thomasi Genus Microcebus: mouse lemurs Arnhold's mouse lemur, Microcebus arnholdi Madame Berthe's mouse lemur, Microcebus berthae Bongolava mouse lemur, Microcebus bongolavensis Boraha mouse lemur, Microcebus boraha Danfoss' mouse lemur, Microcebus danfossi Ganzhorn's mouse lemur, Microcebus ganzhorni Gerp's mouse lemur, Microcebus gerpi Reddish-gray mouse lemur, Microcebus griseorufus Jolly's mouse lemur, Microcebus jollyae Jonah's mouse lemur, Microcebus jonahi Goodman's mouse lemur, Microcebus lehilahytsara MacArthur's mouse lemur, Microcebus macarthurii Claire's mouse lemur, Microcebus mamiratra, synonymous with M.
lokobensis Bemanasy mouse lemur, Microcebus manitatra Margot Marsh's mouse lemur, Microcebus margotmarshae Marohita mouse lemur, Microcebus marohita Mittermeier's mouse lemur, Microcebus mittermeieri Gray mouse lemur, Microcebus murinus Pygmy mouse lemur, Microcebus myoxinus Golden-brown mouse lemur, Microcebus ravelobensis Brown mouse lemur, Microcebus rufus Sambirano mouse lemur, Microcebus sambiranensis Simmons' mouse lemur, Microcebus simmonsi Anosy mouse lemur, Microcebus tanosi Northern rufous mouse lemur, Microcebus tavaratra Genus Mirza: giant mouse lemurs Coquerel's giant mouse lemur or Coquerel's dwarf lemur, Mirza coquereli Northern giant mouse lemur, Mirza zaza Genus Allocebus Hairy-eared dwarf lemur, Allocebus trichotis Genus Phaner: fork-marked lemurs Masoala fork-marked lemur, Phaner furcifer Pale fork-marked lemur, Phaner pallescens Pariente's fork-marked lemur, Phaner parienti Amber Mountain fork-marked lemur, Phaner electromontis Footnotes According to the letter of the International Code of Zoological Nomenclature, the correct name for this family should be Microcebidae, but the name Cheirogaleidae has been retained for stability. In 2008, 7 new species of Microcebus were formally recognized, but Microcebus lokobensis (Lokobe mouse lemur) was not among the additions, even though it was described in 2006. Therefore, its status as a species is still questionable. References Lemurs Primate families Taxa named by John Edward Gray Taxa described in 1873
5229
https://en.wikipedia.org/wiki/Callitrichidae
Callitrichidae
The Callitrichidae (also called Arctopitheci or Hapalidae) are a family of New World monkeys, including marmosets, tamarins, and lion tamarins. At times, this group of animals has been regarded as a subfamily, called the Callitrichinae, of the family Cebidae. This taxon was traditionally thought to be a primitive lineage, from which all the larger-bodied platyrrhines evolved. However, some works argue that callitrichids are actually a dwarfed lineage. Ancestral stem-callitrichids likely were "normal-sized" ceboids that were dwarfed through evolutionary time. This may exemplify a rare example of insular dwarfing in a mainland context, with the "islands" being formed by biogeographic barriers during arid climatic periods when forest distribution became patchy, and/or by the extensive river networks in the Amazon Basin. All callitrichids are arboreal. They are the smallest of the simian primates. They eat insects, fruit, and the sap or gum from trees; occasionally, they take small vertebrates. The marmosets rely quite heavily on tree exudates, with some species (e.g. Callithrix jacchus and Cebuella pygmaea) considered obligate exudativores. Callitrichids typically live in small, territorial groups of about five or six animals. Their social organization is unique among primates, and is called a "cooperative polyandrous group". This communal breeding system involves groups of multiple males and females, but only one female is reproductively active. Females mate with more than one male and each shares the responsibility of carrying the offspring. They are the only primate group that regularly produces twins, which constitute over 80% of births in species that have been studied. Unlike other male primates, male callitrichids generally provide as much parental care as females. Parental duties may include carrying, protecting, feeding, comforting, and even engaging in play behavior with offspring. In some cases, such as in the cotton-top tamarin (Saguinus oedipus), males, particularly those that are paternal, even show a greater involvement in caregiving than females. The typical social structure seems to constitute a breeding group, with several of their previous offspring living in the group and providing significant help in rearing the young. 
Species and subspecies list Taxa included in the Callitrichidae are: Family Callitrichidae Genus Saguinus Subgenus Saguinus Red-handed tamarin, Saguinus midas Western black tamarin, Saguinus niger Eastern black-handed tamarin, Saguinus ursulus Pied tamarin, Saguinus bicolor Martins's tamarin, Saguinus martinsi Saguinus martinsi martinsi Saguinus martinsi ochraceus White-footed tamarin, Saguinus leucopus Cottontop tamarin, Saguinus oedipus Geoffroy's tamarin, Saguinus geoffroyi Subgenus Tamarinus Moustached tamarin, Saguinus mystax Spix's moustached tamarin, Saguinus mystax mystax Red-capped moustached tamarin, Saguinus mystax pileatus White-rump moustached tamarin, Saguinus mystax pluto White-lipped tamarin, Saguinus labiatus Geoffroy's red-bellied tamarin, Saguinus labiatus labiatus Gray's red-bellied tamarin, Saguinus labiatus rufiventer Thomas's red-bellied tamarin, Saguinus labiatus thomasi Emperor tamarin, Saguinus imperator Black-chinned emperor tamarin, Saguinus imperator imperator Bearded emperor tamarin, Saguinus imperator subgrisescens Mottle-faced tamarin, Saguinus inustus Genus Leontocebus Black-mantled tamarin, Leontocebus nigricollis Spix's black-mantle tamarin, Leontocebus nigricollis nigricollis Graells's tamarin, Leontocebus nigricollis graellsi Hernández-Camacho's black-mantle tamarin, Leontocebus nigricollis hernandezi Brown-mantled tamarin, Leontocebus fuscicollis Avila Pires' saddle-back tamarin, Leontocebus fuscicollis avilapiresi Spix's saddle-back tamarin, Leontocebus fuscicollis fuscicollis Mura's saddleback tamarin, Leontocebus fuscicollis mura Lako's saddleback tamarin, Leontocebus fuscicollis primitivus Andean saddle-back tamarin, Leontocebus leucogenys Lesson's saddle-back tamarin, Leontocebus fuscus Cruz Lima's saddle-back tamarin, Leontocebus cruzlimai Weddell's saddle-back tamarin, Leontocebus weddelli Weddell's tamarin, Leontocebus weddelli weddelli Crandall's saddle-back tamarin, Leontocebus weddelli crandalli White-mantled tamarin, Leontocebus weddelli melanoleucus Golden-mantled tamarin, Leontocebus tripartitus Illiger's saddle-back tamarin, Leontocebus illigeri Red-mantled saddle-back tamarin, Leontocebus lagonotus Geoffroy's saddle-back tamarin, Leontocebus nigrifrons Genus Leontopithecus Golden lion tamarin, Leontopithecus rosalia Golden-headed lion tamarin, Leontopithecus chrysomelas Black lion tamarin, Leontopithecus chrysopygus Superagui lion tamarin, Leontopithecus caissara Genus Patasola Patasola magdalenae Genus Micodon Micodon kiotensis Genus Callimico Goeldi's marmoset, Callimico goeldii Genus Mico Silvery marmoset, Mico argentatus Roosmalens' dwarf marmoset, Mico humilis White marmoset, Mico leucippe Black-tailed marmoset, Mico melanurus Schneider's marmoset, Mico schneideri Hershkovitz's marmoset, Mico intermedia Emilia's marmoset, Mico emiliae Black-headed marmoset, Mico nigriceps Marca's marmoset, Mico marcai Santarem marmoset, Mico humeralifer Gold-and-white marmoset, Mico chrysoleucos Maués marmoset, Mico mauesi Sateré marmoset, Mico saterei Rio Acarí marmoset, Mico acariensis Rondon's marmoset, Mico rondoni Munduruku marmoset, Mico munduruku Genus Cebuella Western pygmy marmoset, Cebuella pygmaea Eastern pygmy marmoset, Cebuella niveiventris Genus Callithrix Common marmoset, Callithrix jacchus Black-tufted marmoset, Callithrix penicillata Wied's marmoset, Callithrix kuhlii White-headed marmoset, Callithrix geoffroyi Buffy-tufted marmoset, Callithrix aurita Buffy-headed marmoset, Callithrix flaviceps References External links Primate 
families Taxa named by Oldfield Thomas Taxa described in 1903
5230
https://en.wikipedia.org/wiki/Cebidae
Cebidae
The Cebidae are one of the five families of New World monkeys now recognised. Extant members are the capuchin and squirrel monkeys. These species are found throughout tropical and subtropical South and Central America. Characteristics Cebid monkeys are arboreal animals that only rarely travel on the ground. They are generally small monkeys, ranging in size up to that of the brown capuchin, with a body length of 33 to 56 cm, and a weight of 2.5 to 3.9 kilograms. They are somewhat variable in form and coloration, but all have the wide, flat, noses typical of New World monkeys. They are omnivorous, mostly eating fruit and insects, although the proportions of these foods vary greatly between species. They have the dental formula: Females give birth to one or two young after a gestation period of between 130 and 170 days, depending on species. They are social animals, living in groups of between five and forty individuals, with the smaller species typically forming larger groups. They are generally diurnal in habit. Classification Previously, New World monkeys were divided between Callitrichidae and this family. For a few recent years, marmosets, tamarins, and lion tamarins were placed as a subfamily (Callitrichinae) in Cebidae, while moving other genera from Cebidae into the families Aotidae, Pitheciidae and Atelidae. The most recent classification of New World monkeys again splits the callitrichids off, leaving only the capuchins and squirrel monkeys in this family. Subfamily Cebinae (capuchin monkeys) Genus Cebus (gracile capuchin monkeys) Colombian white-faced capuchin or Colombian white-headed capuchin, Cebus capucinus Panamanian white-faced capuchin or Panamanian white-headed capuchin, Cebus imitator Marañón white-fronted capuchin, Cebus yuracus Shock-headed capuchin, Cebus cuscinus Spix's white-fronted capuchin, Cebus unicolor Humboldt's white-fronted capuchin, Cebus albifrons Guianan weeper capuchin, Cebus olivaceus Chestnut capuchin, Cebus castaneus Ka'apor capuchin, Cebus kaapori Venezuelan brown capuchin, Cebus brunneus Sierra de Perijá white-fronted capuchin, Cebus leucocephalus Río Cesar white-fronted capuchin, Cebus cesare Varied white-fronted capuchin, Cebus versicolor Santa Marta white-fronted capuchin, Cebus malitiosus Ecuadorian white-fronted capuchin, Cebus aequatorialis Genus Sapajus (robust capuchin monkeys) Tufted capuchin, Sapajus apella Blond capuchin, Sapajus flavius Black-striped capuchin, Sapajus libidinosus Azaras's capuchin, Sapajus cay Black capuchin, Sapajus nigritus Crested capuchin, Sapajus robustus Golden-bellied capuchin, Sapajus xanthosternos Subfamily Saimiriinae (squirrel monkeys) Genus Saimiri Bare-eared squirrel monkey, Saimiri ustus Black squirrel monkey, Saimiri vanzolinii Black-capped squirrel monkey, Saimiri boliviensis Central American squirrel monkey, Saimiri oerstedi Guianan squirrel monkey, Saimiri sciureus Humboldt's squirrel monkey, Saimiri cassiquiarensis Collins' squirrel monkey, Saimiri collinsi Extinct taxa Genus Panamacebus Panamacebus transitus Subfamily Cebinae Genus Acrecebus Acrecebus fraileyi Genus Killikaike Killikaike blakei Genus Dolichocebus Dolichocebus gaimanensis Subfamily Saimiriinae Genus Saimiri Saimiri fieldsi Saimiri annectens Genus Patasola Patasola magdalenae References New World monkeys Primate families Taxa named by Charles Lucien Bonaparte Taxa described in 1831
5232
https://en.wikipedia.org/wiki/Chondrichthyes
Chondrichthyes
Chondrichthyes is a class of jawed fish that contains the cartilaginous fish or chondrichthyans, which all have skeletons primarily composed of cartilage. They can be contrasted with the Osteichthyes or bony fish, which have skeletons primarily composed of bone tissue. Chondrichthyes are aquatic vertebrates with paired fins, paired nares, placoid scales, a conus arteriosus in the heart, and a lack of opercula and swim bladders. Within the infraphylum Gnathostomata, cartilaginous fishes are distinct from all other jawed vertebrates. The class is divided into two subclasses: Elasmobranchii (sharks, rays, skates and sawfish) and Holocephali (chimaeras, sometimes called ghost sharks, which are sometimes separated into their own class). Extant Chondrichthyes range in size from the finless sleeper ray to the whale shark. Anatomy Skeleton The skeleton is cartilaginous. The notochord is gradually replaced by a vertebral column during development, except in Holocephali, where the notochord stays intact. In some deepwater sharks, the column is reduced. As they do not have bone marrow, red blood cells are produced in the spleen and the epigonal organ (special tissue around the gonads, which is also thought to play a role in the immune system). They are also produced in the Leydig's organ, which is only found in certain cartilaginous fishes. The subclass Holocephali, which is a very specialized group, lacks both the Leydig's and epigonal organs. Appendages Apart from electric rays, which have a thick and flabby body, with soft, loose skin, chondrichthyans have tough skin covered with dermal teeth (again, Holocephali is an exception, as the teeth are lost in adults, only kept on the clasping organ seen on the caudal ventral surface of the male), also called placoid scales (or dermal denticles), making it feel like sandpaper. In most species, all dermal denticles are oriented in one direction, making the skin feel very smooth if rubbed in one direction and very rough if rubbed in the other. Originally, the pectoral and pelvic girdles, which do not contain any dermal elements, did not connect. In later forms, each pair of fins became ventrally connected in the middle when scapulocoracoid and puboischiadic bars evolved. In rays, the pectoral fins are connected to the head and are very flexible. One of the primary characteristics present in most sharks is the heterocercal tail, which aids in locomotion. Body covering Chondrichthyans have tooth-like scales called dermal denticles or placoid scales. Denticles usually provide protection, and in most cases, streamlining. Mucous glands exist in some species, as well. It is assumed that their oral teeth evolved from dermal denticles that migrated into the mouth, but it could be the other way around, as the teleost bony fish Denticeps clupeoides has most of its head covered by dermal teeth (as does, probably, Atherion elymus, another bony fish). This is most likely a secondary evolved characteristic, which means there is not necessarily a connection between the teeth and the original dermal scales. The old placoderms did not have teeth at all, but had sharp bony plates in their mouth. Thus, it is unknown whether the dermal or oral teeth evolved first. It has even been suggested that the original bony plates of all vertebrates are now gone and that the present scales are just modified teeth, even if both the teeth and body armor had a common origin a long time ago. However, there is currently no evidence of this.
Respiratory system All chondrichthyans breathe through five to seven pairs of gills, depending on the species. In general, pelagic species must keep swimming to keep oxygenated water moving through their gills, whilst demersal species can actively pump water in through their spiracles and out through their gills. However, this is only a general rule and many species differ. A spiracle is a small hole found behind each eye. These range from tiny and circular, such as those found on the nurse shark (Ginglymostoma cirratum), to extended and slit-like, such as those found on the wobbegongs (Orectolobidae). Many larger, pelagic species, such as the mackerel sharks (Lamnidae) and the thresher sharks (Alopiidae), no longer possess them. Nervous system In chondrichthyans, the nervous system is composed of a small brain, 8–10 pairs of cranial nerves, and a spinal cord with spinal nerves. They have several sensory organs which provide information to be processed. Ampullae of Lorenzini are a network of small jelly-filled pores called electroreceptors which help the fish sense electric fields in water. This aids in finding prey, navigation, and sensing temperature. The lateral line system has modified epithelial cells located externally which sense motion, vibration, and pressure in the water around them. Most species have large well-developed eyes. Also, they have very powerful nostrils and olfactory organs. Their inner ears consist of 3 large semicircular canals which aid in balance and orientation. Their sound-detecting apparatus has limited range and is typically more powerful at lower frequencies. Some species have electric organs which can be used for defense and predation. They have relatively simple brains with the forebrain not greatly enlarged. The structure and formation of myelin in their nervous systems are nearly identical to that of tetrapods, which has led evolutionary biologists to believe that Chondrichthyes were a cornerstone group in the evolutionary timeline of myelin development. Immune system Like all other jawed vertebrates, members of Chondrichthyes have an adaptive immune system. Reproduction Fertilization is internal. Development is usually live birth (ovoviviparous species) but can be through eggs (oviparous). Some rare species are viviparous. There is no parental care after birth; however, some chondrichthyans do guard their eggs. Capture-induced premature birth and abortion (collectively called capture-induced parturition) occurs frequently in sharks and rays when fished. Capture-induced parturition is often mistaken for natural birth by recreational fishers and is rarely considered in commercial fisheries management despite being shown to occur in at least 12% of live-bearing sharks and rays (88 species to date). Classification The class Chondrichthyes has two subclasses: the subclass Elasmobranchii (sharks, rays, skates, and sawfish) and the subclass Holocephali (chimaeras). Evolution Cartilaginous fish are considered to have evolved from acanthodians. The discovery of Entelognathus and several examinations of acanthodian characteristics indicate that bony fish evolved directly from placoderm-like ancestors, while acanthodians represent a paraphyletic assemblage leading to Chondrichthyes. Some characteristics previously thought to be exclusive to acanthodians are also present in basal cartilaginous fish.
In particular, new phylogenetic studies find cartilaginous fish to be well nested among acanthodians, with Doliodus and Tamiobatis being the closest relatives to Chondrichthyes. Recent studies vindicate this, as Doliodus had a mosaic of chondrichthyan and acanthodian traits. Dating back to the Middle and Late Ordovician Period, many isolated scales, made of dentine and bone, have a structure and growth form that is chondrichthyan-like. They may be the remains of stem-chondrichthyans, but their classification remains uncertain. The earliest unequivocal fossils of acanthodian-grade cartilaginous fishes are Qianodus and Fanjingshania from the early Silurian (Aeronian) of Guizhou, China around 439 million years ago, which are also the oldest unambiguous remains of any jawed vertebrates. Shenacanthus vermiformis, which lived 436 million years ago, had thoracic armour plates resembling those of placoderms. By the start of the Early Devonian, 419 million years ago, jawed fishes had divided into three distinct groups: the now extinct placoderms (a paraphyletic assemblage of ancient armoured fishes), the bony fishes, and the clade that includes spiny sharks and early cartilaginous fish. The modern bony fishes, class Osteichthyes, appeared in the late Silurian or early Devonian, about 416 million years ago. The first abundant genus of shark, Cladoselache, appeared in the oceans during the Devonian Period. The first cartilaginous fishes evolved from Doliodus-like spiny shark ancestors. Taxonomy
Subphylum Vertebrata
 └─Infraphylum Gnathostomata
    ├─Placodermi — extinct (armored gnathostomes)
    └─Eugnathostomata (true jawed vertebrates)
       ├─Acanthodii (stem cartilaginous fish)
       └─Chondrichthyes (true cartilaginous fish)
          ├─Holocephali (chimaeras + several extinct clades)
          └─Elasmobranchii (sharks and rays)
             ├─Selachii (true sharks)
             └─Batoidea (rays and relatives)
Note: Lines show evolutionary relationships. See also List of cartilaginous fish Cartilaginous versus bony fishes Largest cartilaginous fishes Threatened rays Threatened sharks Placodermi References Further reading Taxonomy of Chondrichthyes Images of many sharks, skates and rays on Morphbank Fish classes Pridoli first appearances Extant Silurian first appearances
5233
https://en.wikipedia.org/wiki/Carl%20Linnaeus
Carl Linnaeus
Carl Linnaeus (23 May 1707 – 10 January 1778), also known after his ennoblement in 1761 as Carl von Linné, was a Swedish biologist and physician who formalised binomial nomenclature, the modern system of naming organisms. He is known as the "father of modern taxonomy". Many of his writings were in Latin; his name is rendered in Latin as Carolus Linnæus and, after his 1761 ennoblement, as Carolus a Linné. Linnaeus was the son of a curate and was born in Råshult, in the countryside of Småland, in southern Sweden. He received most of his higher education at Uppsala University and began giving lectures in botany there in 1730. He lived abroad between 1735 and 1738, where he studied and also published the first edition of his Systema Naturae in the Netherlands. He then returned to Sweden where he became professor of medicine and botany at Uppsala. In the 1740s, he was sent on several journeys through Sweden to find and classify plants and animals. In the 1750s and 1760s, he continued to collect and classify animals, plants, and minerals, while publishing several volumes. By the time of his death in 1778, he was one of the most acclaimed scientists in Europe. Philosopher Jean-Jacques Rousseau sent him the message: "Tell him I know no greater man on Earth." Johann Wolfgang von Goethe wrote: "With the exception of Shakespeare and Spinoza, I know no one among the no longer living who has influenced me more strongly." Swedish author August Strindberg wrote: "Linnaeus was in reality a poet who happened to become a naturalist." Linnaeus has been called Princeps botanicorum (Prince of Botanists) and "The Pliny of the North". He is also considered one of the founders of modern ecology. In botany and zoology, the abbreviation L. is used to indicate Linnaeus as the authority for a species' name. In older publications, the abbreviation "Linn." is found. Linnaeus's remains constitute the type specimen for the species Homo sapiens following the International Code of Zoological Nomenclature, since the sole specimen that he is known to have examined was himself. Early life Childhood Linnaeus was born in the village of Råshult in Småland, Sweden, on 23 May 1707. He was the first child of Nicolaus (Nils) Ingemarsson (who later adopted the family name Linnaeus) and Christina Brodersonia. His siblings were Anna Maria Linnæa, Sofia Juliana Linnæa, Samuel Linnæus (who would eventually succeed their father as rector of Stenbrohult and write a manual on beekeeping), and Emerentia Linnæa. His father taught him Latin as a small child. One of a long line of peasants and priests, Nils was an amateur botanist, a Lutheran minister, and the curate of the small village of Stenbrohult in Småland. Christina was the daughter of the rector of Stenbrohult, Samuel Brodersonius. A year after Linnaeus's birth, his grandfather Samuel Brodersonius died, and his father Nils became the rector of Stenbrohult. The family moved into the rectory from the curate's house. Even in his early years, Linnaeus seemed to have a liking for plants, flowers in particular. Whenever he was upset, he was given a flower, which immediately calmed him. Nils spent much time in his garden and often showed flowers to Linnaeus and told him their names. Soon Linnaeus was given his own patch of earth where he could grow plants. Carl's father was the first in his ancestry to adopt a permanent surname. Before that, ancestors had used the patronymic naming system of Scandinavian countries: his father was named Ingemarsson after his father Ingemar Bengtsson.
When Nils was admitted to the University of Lund, he had to take on a family name. He adopted the Latinate name Linnæus after a giant linden tree (or lime tree), in Swedish, that grew on the family homestead. This name was spelled with the æ ligature. When Carl was born, he was named Carl Linnæus, with his father's family name. The son also always spelled it with the æ ligature, both in handwritten documents and in publications. Carl's patronymic would have been Nilsson, as in Carl Nilsson Linnæus. Early education Linnaeus's father began teaching him basic Latin, religion, and geography at an early age. When Linnaeus was seven, Nils decided to hire a tutor for him. The parents picked Johan Telander, a son of a local yeoman. Linnaeus did not like him, writing in his autobiography that Telander "was better calculated to extinguish a child's talents than develop them". Two years after his tutoring had begun, he was sent to the Lower Grammar School at Växjö in 1717. Linnaeus rarely studied, often going to the countryside to look for plants. At some point, his father went to visit him and, after hearing critical assessments by his preceptors, he decided to put the youth as an apprentice to some honest cobbler. He reached the last year of the Lower School when he was fifteen, which was taught by the headmaster, Daniel Lannerus, who was interested in botany. Lannerus noticed Linnaeus's interest in botany and gave him the run of his garden. He also introduced him to Johan Rothman, the state doctor of Småland and a teacher at Katedralskolan (a gymnasium) in Växjö. Also a botanist, Rothman broadened Linnaeus's interest in botany and helped him develop an interest in medicine. By the age of 17, Linnaeus had become well acquainted with the existing botanical literature. He remarks in his journal that he "read day and night, knowing like the back of my hand, Arvidh Månsson's Rydaholm Book of Herbs, Tillandz's Flora Åboensis, Palmberg's Serta Florea Suecana, Bromelii's Chloros Gothica and Rudbeckii's Hortus Upsaliensis". Linnaeus entered the Växjö Katedralskola in 1724, where he studied mainly Greek, Hebrew, theology and mathematics, a curriculum designed for boys preparing for the priesthood. In the last year at the gymnasium, Linnaeus's father visited to ask the professors how his son's studies were progressing; to his dismay, most said that the boy would never become a scholar. Rothman believed otherwise, suggesting Linnaeus could have a future in medicine. The doctor offered to have Linnaeus live with his family in Växjö and to teach him physiology and botany. Nils accepted this offer. University studies Lund Rothman showed Linnaeus that botany was a serious subject. He taught Linnaeus to classify plants according to Tournefort's system. Linnaeus was also taught about the sexual reproduction of plants, according to Sébastien Vaillant. In 1727, Linnaeus, age 21, enrolled in Lund University in Skåne. He was registered as , the Latin form of his full name, which he also used later for his Latin publications. Professor Kilian Stobæus, natural scientist, physician and historian, offered Linnaeus tutoring and lodging, as well as the use of his library, which included many books about botany. He also gave the student free admission to his lectures. In his spare time, Linnaeus explored the flora of Skåne, together with students sharing the same interests. 
Uppsala In August 1728, Linnaeus decided to attend Uppsala University on the advice of Rothman, who believed it would be a better choice if Linnaeus wanted to study both medicine and botany. Rothman based this recommendation on the two professors who taught at the medical faculty at Uppsala: Olof Rudbeck the Younger and Lars Roberg. Although Rudbeck and Roberg had undoubtedly been good professors, by then they were older and not so interested in teaching. Rudbeck no longer gave public lectures, and had others stand in for him. The botany, zoology, pharmacology and anatomy lectures were not in their best state. In Uppsala, Linnaeus met a new benefactor, Olof Celsius, who was a professor of theology and an amateur botanist. He received Linnaeus into his home and allowed him use of his library, which was one of the richest botanical libraries in Sweden. In 1729, Linnaeus wrote a thesis, on plant sexual reproduction. This attracted the attention of Rudbeck; in May 1730, he selected Linnaeus to give lectures at the University although the young man was only a second-year student. His lectures were popular, and Linnaeus often addressed an audience of 300 people. In June, Linnaeus moved from Celsius's house to Rudbeck's to become the tutor of the three youngest of his 24 children. His friendship with Celsius did not wane and they continued their botanical expeditions. Over that winter, Linnaeus began to doubt Tournefort's system of classification and decided to create one of his own. His plan was to divide the plants by the number of stamens and pistils. He began writing several books, which would later result in, for example, and . He also produced a book on the plants grown in the Uppsala Botanical Garden, . Rudbeck's former assistant, Nils Rosén, returned to the University in March 1731 with a degree in medicine. Rosén started giving anatomy lectures and tried to take over Linnaeus's botany lectures, but Rudbeck prevented that. Until December, Rosén gave Linnaeus private tutoring in medicine. In December, Linnaeus had a "disagreement" with Rudbeck's wife and had to move out of his mentor's house; his relationship with Rudbeck did not appear to suffer. That Christmas, Linnaeus returned home to Stenbrohult to visit his parents for the first time in about three years. His mother had disapproved of his failing to become a priest, but she was pleased to learn he was teaching at the University. Expedition to Lapland During a visit with his parents, Linnaeus told them about his plan to travel to Lapland; Rudbeck had made the journey in 1695, but the detailed results of his exploration were lost in a fire seven years afterwards. Linnaeus's hope was to find new plants, animals and possibly valuable minerals. He was also curious about the customs of the native Sami people, reindeer-herding nomads who wandered Scandinavia's vast tundras. In April 1732, Linnaeus was awarded a grant from the Royal Society of Sciences in Uppsala for his journey. Linnaeus began his expedition from Uppsala on 12 May 1732, just before he turned 25. He travelled on foot and horse, bringing with him his journal, botanical and ornithological manuscripts and sheets of paper for pressing plants. Near Gävle he found great quantities of Campanula serpyllifolia, later known as Linnaea borealis, the twinflower that would become his favourite. 
He sometimes dismounted on the way to examine a flower or rock and was particularly interested in mosses and lichens, the latter a main part of the diet of the reindeer, a common and economically important animal in Lapland. Linnaeus travelled clockwise around the coast of the Gulf of Bothnia, making major inland incursions from Umeå, Luleå and Tornio. He returned from his six-month-long expedition in October, having gathered and observed many plants, birds and rocks. Although Lapland was a region with limited biodiversity, Linnaeus described about 100 previously unidentified plants. These became the basis of his book Flora Lapponica. However, on the expedition to Lapland, Linnaeus used Latin names to describe organisms because he had not yet developed the binomial system. In Flora Lapponica, Linnaeus's ideas about nomenclature and classification were first used in a practical way, making this the first proto-modern Flora. The account covered 534 species, used the Linnaean classification system and included, for the described species, geographical distribution and taxonomic notes. It was Augustin Pyramus de Candolle who credited Linnaeus with Flora Lapponica as the first example in the botanical genre of Flora writing. Botanical historian E. L. Greene described Flora Lapponica as "the most classic and delightful" of Linnaeus's works. It was also during this expedition that Linnaeus had a flash of insight regarding the classification of mammals. Upon observing the lower jawbone of a horse at the side of a road he was travelling, Linnaeus remarked: "If I only knew how many teeth and of what kind every animal had, how many teats and where they were placed, I should perhaps be able to work out a perfectly natural system for the arrangement of all quadrupeds." In 1734, Linnaeus led a small group of students to Dalarna. Funded by the Governor of Dalarna, the expedition was to catalogue known natural resources and discover new ones, but also to gather intelligence on Norwegian mining activities at Røros. Years in the Dutch Republic (1735–38) Doctorate His relations with Nils Rosén having worsened, Linnaeus accepted an invitation from Claes Sohlberg, son of a mining inspector, to spend the Christmas holiday in Falun, where Linnaeus was permitted to visit the mines. In April 1735, at the suggestion of Sohlberg's father, Linnaeus and Sohlberg set out for the Dutch Republic, where Linnaeus intended to study medicine at the University of Harderwijk while tutoring Sohlberg in exchange for an annual salary. At the time, it was common for Swedes to pursue doctoral degrees in the Netherlands, then a highly revered place to study natural history. On the way, the pair stopped in Hamburg, where they met the mayor, who proudly showed them a supposed wonder of nature in his possession: the taxidermied remains of a seven-headed hydra. Linnaeus quickly discovered the specimen was a fake, cobbled together from the jaws and paws of weasels and the skins of snakes. The provenance of the hydra suggested to Linnaeus that it had been manufactured by monks to represent the Beast of Revelation. Even at the risk of incurring the mayor's wrath, Linnaeus made his observations public, dashing the mayor's dreams of selling the hydra for an enormous sum. Linnaeus and Sohlberg were forced to flee from Hamburg. Linnaeus began working towards his degree as soon as he reached Harderwijk, a university known for awarding degrees in as little as a week.
He submitted a dissertation, written back in Sweden, entitled Dissertatio medica inauguralis in qua exhibetur hypothesis nova de febrium intermittentium causa, in which he laid out his hypothesis that malaria arose only in areas with clay-rich soils. Although he failed to identify the true source of disease transmission, (i.e., the Anopheles mosquito), he did correctly predict that Artemisia annua (wormwood) would become a source of antimalarial medications. Within two weeks he had completed his oral and practical examinations and was awarded a doctoral degree. That summer Linnaeus reunited with Peter Artedi, a friend from Uppsala with whom he had once made a pact that should either of the two predecease the other, the survivor would finish the decedent's work. Ten weeks later, Artedi drowned in the canals of Amsterdam, leaving behind an unfinished manuscript on the classification of fish. Publishing of One of the first scientists Linnaeus met in the Netherlands was Johan Frederik Gronovius, to whom Linnaeus showed one of the several manuscripts he had brought with him from Sweden. The manuscript described a new system for classifying plants. When Gronovius saw it, he was very impressed, and offered to help pay for the printing. With an additional monetary contribution by the Scottish doctor Isaac Lawson, the manuscript was published as (1735). Linnaeus became acquainted with one of the most respected physicians and botanists in the Netherlands, Herman Boerhaave, who tried to convince Linnaeus to make a career there. Boerhaave offered him a journey to South Africa and America, but Linnaeus declined, stating he would not stand the heat. Instead, Boerhaave convinced Linnaeus that he should visit the botanist Johannes Burman. After his visit, Burman, impressed with his guest's knowledge, decided Linnaeus should stay with him during the winter. During his stay, Linnaeus helped Burman with his . Burman also helped Linnaeus with the books on which he was working: and . George Clifford, Philip Miller, and Johann Jacob Dillenius In August 1735, during Linnaeus's stay with Burman, he met George Clifford III, a director of the Dutch East India Company and the owner of a rich botanical garden at the estate of Hartekamp in Heemstede. Clifford was very impressed with Linnaeus's ability to classify plants, and invited him to become his physician and superintendent of his garden. Linnaeus had already agreed to stay with Burman over the winter, and could thus not accept immediately. However, Clifford offered to compensate Burman by offering him a copy of Sir Hans Sloane's Natural History of Jamaica, a rare book, if he let Linnaeus stay with him, and Burman accepted. On 24 September 1735, Linnaeus moved to Hartekamp to become personal physician to Clifford, and curator of Clifford's herbarium. He was paid 1,000 florins a year, with free board and lodging. Though the agreement was only for a winter of that year, Linnaeus practically stayed there until 1738. It was here that he wrote a book Hortus Cliffortianus, in the preface of which he described his experience as "the happiest time of my life". (A portion of Hartekamp was declared as public garden in April 1956 by the Heemstede local authority, and was named "Linnaeushof". It eventually became, as it is claimed, the biggest playground in Europe.) In July 1736, Linnaeus travelled to England, at Clifford's expense. 
He went to London to visit Sir Hans Sloane, a collector of natural history, and to see his cabinet, as well as to visit the Chelsea Physic Garden and its keeper, Philip Miller. He taught Miller about his new system of subdividing plants, as described in . Miller was in fact reluctant to use the new binomial nomenclature, preferring the classifications of Joseph Pitton de Tournefort and John Ray at first. Linnaeus, nevertheless, applauded Miller's Gardeners Dictionary, the conservative Scot actually retained in his dictionary a number of pre-Linnaean binomial signifiers discarded by Linnaeus but which have been retained by modern botanists. He only fully changed to the Linnaean system in the edition of The Gardeners Dictionary of 1768. Miller ultimately was impressed, and from then on started to arrange the garden according to Linnaeus's system. Linnaeus also travelled to Oxford University to visit the botanist Johann Jacob Dillenius. He failed to make Dillenius publicly fully accept his new classification system, though the two men remained in correspondence for many years afterwards. Linnaeus dedicated his Critica Botanica to him, as "opus botanicum quo absolutius mundus non-vidit". Linnaeus would later name a genus of tropical tree Dillenia in his honour. He then returned to Hartekamp, bringing with him many specimens of rare plants. The next year, 1737, he published , in which he described 935 genera of plants, and shortly thereafter he supplemented it with , with another sixty (sexaginta) genera. His work at Hartekamp led to another book, , a catalogue of the botanical holdings in the herbarium and botanical garden of Hartekamp. He wrote it in nine months (completed in July 1737), but it was not published until 1738. It contains the first use of the name Nepenthes, which Linnaeus used to describe a genus of pitcher plants. Linnaeus stayed with Clifford at Hartekamp until 18 October 1737 (new style), when he left the house to return to Sweden. Illness and the kindness of Dutch friends obliged him to stay some months longer in Holland. In May 1738, he set out for Sweden again. On the way home, he stayed in Paris for about a month, visiting botanists such as Antoine de Jussieu. After his return, Linnaeus never again left Sweden. Return to Sweden When Linnaeus returned to Sweden on 28 June 1738, he went to Falun, where he entered into an engagement to Sara Elisabeth Moræa. Three months later, he moved to Stockholm to find employment as a physician, and thus to make it possible to support a family. Once again, Linnaeus found a patron; he became acquainted with Count Carl Gustav Tessin, who helped him get work as a physician at the Admiralty. During this time in Stockholm, Linnaeus helped found the Royal Swedish Academy of Science; he became the first Praeses of the academy by drawing of lots. Because his finances had improved and were now sufficient to support a family, he received permission to marry his fiancée, Sara Elisabeth Moræa. Their wedding was held 26 June 1739. Seventeen months later, Sara gave birth to their first son, Carl. Two years later, a daughter, Elisabeth Christina, was born, and the subsequent year Sara gave birth to Sara Magdalena, who died when 15 days old. Sara and Linnaeus would later have four other children: Lovisa, , Johannes and Sophia. In May 1741, Linnaeus was appointed Professor of Medicine at Uppsala University, first with responsibility for medicine-related matters. 
Soon, he changed places with the other Professor of Medicine, Nils Rosén, and thus was responsible for the Botanical Garden (which he would thoroughly reconstruct and expand), botany and natural history, instead. In October that same year, his wife and nine-month-old son followed him to live in Uppsala. Öland and Gotland Ten days after he was appointed Professor, he undertook an expedition to the island provinces of Öland and Gotland with six students from the university to look for plants useful in medicine. First, they travelled to Öland and stayed there until 21 June, when they sailed to Visby in Gotland. Linnaeus and the students stayed on Gotland for about a month, and then returned to Uppsala. During this expedition, they found 100 previously unrecorded plants. The observations from the expedition were later published in Öländska och Gothländska Resa, written in Swedish. Like Flora Lapponica, it contained both zoological and botanical observations, as well as observations concerning the culture in Öland and Gotland. During the summer of 1745, Linnaeus published two more books: Flora Suecica and Fauna Suecica. Flora Suecica was a strictly botanical book, while Fauna Suecica was zoological. Anders Celsius had created the temperature scale named after him in 1742. Celsius's scale was inverted compared to today's, with the boiling point at 0 °C and the freezing point at 100 °C. In 1745, Linnaeus inverted the scale to its present standard. Västergötland In the summer of 1746, Linnaeus was once again commissioned by the Government to carry out an expedition, this time to the Swedish province of Västergötland. He set out from Uppsala on 12 June and returned on 11 August. On the expedition his primary companion was Erik Gustaf Lidbeck, a student who had accompanied him on his previous journey. Linnaeus described his findings from the expedition in the book , published the next year. After he returned from the journey, the Government decided Linnaeus should take on another expedition to the southernmost province Scania. This journey was postponed, as Linnaeus felt too busy. In 1747, Linnaeus was given the title archiater, or chief physician, by the Swedish king Adolf Frederick—a mark of great respect. The same year he was elected member of the Academy of Sciences in Berlin. Scania In the spring of 1749, Linnaeus could finally journey to Scania, again commissioned by the Government. With him he brought his student, Olof Söderberg. On the way to Scania, he made his last visit to his brothers and sisters in Stenbrohult since his father had died the previous year. The expedition was similar to the previous journeys in most aspects, but this time he was also ordered to find the best place to grow walnut and Swedish whitebeam trees; these trees were used by the military to make rifles. While there, they also visited the Ramlösa mineral spa, where he remarked on the quality of its ferruginous water. The journey was successful, and Linnaeus's observations were published the next year in . Rector of Uppsala University In 1750, Linnaeus became rector of Uppsala University, starting a period where natural sciences were esteemed. Perhaps the most important contribution he made during his time at Uppsala was to teach; many of his students travelled to various places in the world to collect botanical samples. Linnaeus called the best of these students his "apostles". His lectures were normally very popular and were often held in the Botanical Garden. He tried to teach the students to think for themselves and not trust anybody, not even him.
Even more popular than the lectures were the botanical excursions made every Saturday during summer, where Linnaeus and his students explored the flora and fauna in the vicinity of Uppsala. Philosophia Botanica Linnaeus published Philosophia Botanica in 1751. The book contained a complete survey of the taxonomy system he had been using in his earlier works. It also contained information on how to keep a journal on travels and how to maintain a botanical garden. Nutrix Noverca During Linnaeus's time it was normal for upper-class women to have wet nurses for their babies. Linnaeus joined an ongoing campaign to end this practice in Sweden and promote breast-feeding by mothers. In 1752 Linnaeus published a thesis along with Frederick Lindberg, a medical student, based on their experiences. In the tradition of the period, this dissertation was essentially an idea of the presiding reviewer (praeses) expounded upon by the student. Linnaeus's dissertation was translated into French by J. E. Gilibert in 1770 as La Nourrice marâtre, ou Dissertation sur les suites funestes du nourrisage mercénaire. Linnaeus suggested that children might absorb the personality of their wet nurse through the milk. He admired the child care practices of the Lapps and pointed out how healthy their babies were compared to those of Europeans who employed wet nurses. He also pointed to the behaviour of wild animals, noting that none of them denied their newborns their breastmilk. It is thought that his activism played a role in his choice of the term Mammalia for the class of organisms. Species Plantarum Linnaeus published Species Plantarum, the work which is now internationally accepted as the starting point of modern botanical nomenclature, in 1753. The first volume was issued on 24 May; the second volume followed on 16 August of the same year. The book contained 1,200 pages and was published in two volumes; it described over 7,300 species. The same year the king dubbed him knight of the Order of the Polar Star, the first civilian in Sweden to become a knight of this order. He was then seldom seen not wearing the order's insignia. Ennoblement Linnaeus felt Uppsala was too noisy and unhealthy, so he bought two farms in 1758: Hammarby and Sävja. The next year, he bought a neighbouring farm, Edeby. He spent the summers with his family at Hammarby; initially it only had a small one-storey house, but in 1762 a new, larger main building was added. In Hammarby, Linnaeus made a garden where he could grow plants that could not be grown in the Botanical Garden in Uppsala. He began constructing a museum on a hill behind Hammarby in 1766, where he moved his library and collection of plants. A fire that destroyed about one third of Uppsala and had threatened his residence there necessitated the move. Since the initial release of Systema Naturae in 1735, the book had been expanded and reprinted several times; the tenth edition was released in 1758. This edition established itself as the starting point for zoological nomenclature, the equivalent of Species Plantarum. The Swedish King Adolf Frederick granted Linnaeus nobility in 1757, but he was not ennobled until 1761. With his ennoblement, he took the name Carl von Linné (Latinised as Carolus a Linné), 'Linné' being a shortened and gallicised version of 'Linnæus', and the German nobiliary particle 'von' signifying his ennoblement. The noble family's coat of arms prominently features a twinflower, one of Linnaeus's favourite plants; it was given the scientific name Linnaea borealis in his honour by Gronovius. 
The shield in the coat of arms is divided into thirds: red, black and green for the three kingdoms of nature (animal, mineral and vegetable) in Linnaean classification; in the centre is an egg "to denote Nature, which is continued and perpetuated in ovo." At the bottom is a phrase in Latin, borrowed from the Aeneid, which reads "Famam extendere factis": we extend our fame by our deeds. Linnaeus inscribed this personal motto in books that were given to him by friends. After his ennoblement, Linnaeus continued teaching and writing. His reputation had spread across the world, and he corresponded with many different people. For example, Catherine II of Russia sent him seeds from her country. He also corresponded with Giovanni Antonio Scopoli, "the Linnaeus of the Austrian Empire", who was a doctor and a botanist in Idrija, Duchy of Carniola (now in Slovenia). Scopoli communicated all of his research, findings, and descriptions (for example of the olm and the dormouse, two little animals hitherto unknown to Linnaeus). Linnaeus greatly respected Scopoli and showed great interest in his work. He named a solanaceous genus, Scopolia, the source of scopolamine, after him, but because of the great distance between them, they never met. Final years Linnaeus was relieved of his duties in the Royal Swedish Academy of Sciences in 1763, but continued his work there as usual for more than ten years afterwards. In 1769 he was elected to the American Philosophical Society for his work. He stepped down as rector at Uppsala University in December 1772, mostly due to his declining health. Linnaeus's last years were troubled by illness. He had contracted a disease called Uppsala fever in 1764, but survived thanks to the care of Rosén. He developed sciatica in 1773, and the next year, he had a stroke which partially paralysed him. A second stroke in 1776 cost him the use of his right side and left him bereft of his memory; while still able to admire his own writings, he could not recognise himself as their author. In December 1777, he had another stroke which greatly weakened him, and eventually led to his death on 10 January 1778 in Hammarby. Despite his desire to be buried in Hammarby, he was buried in Uppsala Cathedral on 22 January. His library and collections were left to his widow Sara and their children. Joseph Banks, an eminent botanist, wished to purchase the collection, but Linnaeus's son Carl refused the offer and instead moved the collection to Uppsala. In 1783 Carl died and Sara inherited the collection, having outlived both her husband and son. She tried to sell it to Banks, but he was no longer interested; instead an acquaintance of his agreed to buy the collection. The acquaintance was a 24-year-old medical student, James Edward Smith, who bought the whole collection: 14,000 plants, 3,198 insects, 1,564 shells, about 3,000 letters and 1,600 books. Smith founded the Linnean Society of London five years later. The von Linné name ended with his son Carl, who never married. His other son, Johannes, had died aged 3. There are over two hundred descendants of Linnaeus through two of his daughters. Apostles During Linnaeus's time as Professor and Rector of Uppsala University, he taught many devoted students, 17 of whom he called "apostles". They were the most promising, most committed students, and all of them made botanical expeditions to various places in the world, often with his help. 
The amount of this help varied; sometimes he used his influence as Rector to grant his apostles a scholarship or a place on an expedition. To most of the apostles he gave instructions on what to look for on their journeys. Abroad, the apostles collected and organised new plants, animals and minerals according to Linnaeus's system. Most of them also gave some of their collection to Linnaeus when their journey was finished. Thanks to these students, the Linnaean system of taxonomy spread through the world without Linnaeus ever having to travel outside Sweden after his return from Holland. The British botanist William T. Stearn notes that, without Linnaeus's new system, it would not have been possible for the apostles to collect and organise so many new specimens. Many of the apostles died during their expeditions. Early expeditions Christopher Tärnström, the first apostle and a 43-year-old pastor with a wife and children, made his journey in 1746. He boarded a Swedish East India Company ship headed for China. Tärnström never reached his destination, dying of a tropical fever on Côn Sơn Island the same year. Tärnström's widow blamed Linnaeus for making her children fatherless, which caused Linnaeus thereafter to prefer sending out younger, unmarried students. Six other apostles later died on their expeditions, including Pehr Forsskål and Pehr Löfling. Two years after Tärnström's expedition, Finnish-born Pehr Kalm set out as the second apostle to North America. There he spent two-and-a-half years studying the flora and fauna of Pennsylvania, New York, New Jersey and Canada. Linnaeus was overjoyed when Kalm returned, bringing back with him many pressed flowers and seeds. At least 90 of the 700 North American species described in Species Plantarum had been brought back by Kalm. Cook expeditions and Japan Daniel Solander was living in Linnaeus's house during his time as a student in Uppsala. Linnaeus was very fond of him, promising Solander his eldest daughter's hand in marriage. On Linnaeus's recommendation, Solander travelled to England in 1760, where he met the English botanist Joseph Banks. With Banks, Solander joined James Cook on his expedition to Oceania on the Endeavour in 1768–71. Solander was not the only apostle to journey with James Cook; Anders Sparrman followed on the Resolution in 1772–75, bound for, among other places, Oceania and South America. Sparrman made many other expeditions, one of them to South Africa. Perhaps the most famous and successful apostle was Carl Peter Thunberg, who embarked on a nine-year expedition in 1770. He stayed in South Africa for three years, then travelled to Japan. All foreigners in Japan were forced to stay on the island of Dejima outside Nagasaki, and it was thus hard for Thunberg to study the flora. He did, however, manage to persuade some of the translators to bring him different plants, and he also found plants in the gardens of Dejima. He returned to Sweden in 1779, one year after Linnaeus's death. Major publications Systema Naturae The first edition of Systema Naturae was printed in the Netherlands in 1735. It was a twelve-page work. By the time it reached its 10th edition in 1758, it classified 4,400 species of animals and 7,700 species of plants. People from all over the world sent their specimens to Linnaeus to be included. By the time he started work on the 12th edition, Linnaeus needed a new invention—the index card—to track classifications. 
In Systema Naturae, the unwieldy descriptive names mostly used at the time were supplemented with concise and now familiar "binomials", composed of the generic name followed by a specific epithet, for example Physalis angulata. These binomials could serve as a label to refer to the species. Higher taxa were constructed and arranged in a simple and orderly manner. Although the system, now known as binomial nomenclature, was partially developed by the Bauhin brothers (see Gaspard Bauhin and Johann Bauhin) almost 200 years earlier, Linnaeus was the first to use it consistently throughout the work, including in monospecific genera, and may be said to have popularised it within the scientific community. After the decline in Linnaeus's health in the early 1770s, publication of editions of Systema Naturae went in two different directions. Another Swedish scientist, Johan Andreas Murray, issued the Regnum Vegetabile section separately in 1774 as the Systema Vegetabilium, rather confusingly labelled the 13th edition. Meanwhile, a 13th edition of the entire Systema appeared in parts between 1788 and 1793 under the editorship of Johann Friedrich Gmelin. It was through the Systema Vegetabilium that Linnaeus's work became widely known in England, following its translation from the Latin by the Lichfield Botanical Society as A System of Vegetables (1783–1785). Orbis eruditi judicium de Caroli Linnaei MD scriptis ('Opinion of the learned world on the writings of Carl Linnaeus, Doctor') Published in 1740, this small octavo-sized pamphlet was presented to the State Library of New South Wales by the Linnean Society of NSW in 2018. This is considered among the rarest of all the writings of Linnaeus, and crucial to his career, securing him his appointment to a professorship of medicine at Uppsala University. From this position he laid the groundwork for his radical new theory of classifying and naming organisms, for which he is considered the founder of modern taxonomy. Species Plantarum was first published in 1753, as a two-volume work. Its prime importance is perhaps that it is the primary starting point of plant nomenclature as it exists today. Genera Plantarum was first published in 1737, delineating plant genera. Around 10 editions were published, not all of them by Linnaeus himself; the most important is the 1754 fifth edition. In it Linnaeus divided the plant kingdom into 24 classes. One, Cryptogamia, included all the plants with concealed reproductive parts (algae, fungi, mosses, liverworts and ferns). Philosophia Botanica (1751) was a summary of Linnaeus's thinking on plant classification and nomenclature, and an elaboration of the work he had previously published in Fundamenta Botanica (1736) and Critica Botanica (1737). Other publications forming part of his plan to reform the foundations of botany include his Classes Plantarum and Bibliotheca Botanica: all were printed in Holland (as were Genera Plantarum (1737) and Systema Naturae (1735)), the Philosophia being simultaneously released in Stockholm. Collections At the end of his lifetime the Linnean collection in Uppsala was considered one of the finest collections of natural history objects in Sweden. In addition to his own collection he had also built up a museum for the University of Uppsala, which was supplied by material donated by Carl Gyllenborg (in 1744–1745), Crown Prince Adolf Fredrik (in 1745), Erik Petreus (in 1746), Claes Grill (in 1746), Magnus Lagerström (in 1748 and 1750) and Jonas Alströmer (in 1749). 
The relation between the museum and the private collection was not formalised, and the steady flow of material from Linnean pupils was incorporated into the private collection rather than into the museum. Linnaeus felt that his work reflected the harmony of nature, and he said in 1754 "the earth is then nothing else but a museum of the all-wise creator's masterpieces, divided into three chambers". He had turned his own estate into a microcosm of that 'world museum'. In April 1766 parts of the town were destroyed by a fire, and the Linnean private collection was subsequently moved to a barn outside the town, and shortly afterwards to a single-room stone building close to his country house at Hammarby near Uppsala. This resulted in a physical separation between the two collections; the museum collection remained in the botanical garden of the university. Some material which needed special care (alcohol specimens) or ample storage space was moved from the private collection to the museum. In Hammarby the Linnean private collections suffered seriously from damp and the depredations of mice and insects. Carl von Linné's son (Carl Linnaeus) inherited the collections in 1778 and retained them until his own death in 1783. Shortly after Carl von Linné's death his son confirmed that mice had caused "horrible damage" to the plants and that moths and mould had also caused considerable damage. He tried to rescue them from the neglect they had suffered during his father's later years, and also added further specimens. This last activity, however, reduced rather than augmented the scientific value of the original material. In 1784 the young medical student James Edward Smith purchased the entire specimen collection, library, manuscripts, and correspondence of Carl Linnaeus from his widow and daughter and transferred the collections to London. Not all material in Linné's private collection was transported to England. Thirty-three fish specimens preserved in alcohol were not sent and were later lost. In London Smith tended to neglect the zoological parts of the collection; he added some specimens and also gave some specimens away. Over the following centuries the Linnean collection in London suffered enormously at the hands of scientists who studied the collection, and in the process disturbed the original arrangement and labels, added specimens that did not belong to the original series and withdrew precious original type material. Much material which had been intensively studied by Linné in his scientific career belonged to the collection of Queen Lovisa Ulrika (1720–1782) (in the Linnean publications referred to as "Museum Ludovicae Ulricae" or "M. L. U."). This collection was donated by her grandson King Gustav IV Adolf (1778–1837) to the museum in Uppsala in 1804. Another important collection in this respect was that of her husband King Adolf Fredrik (1710–1771) (in the Linnean sources known as "Museum Adolphi Friderici" or "Mus. Ad. Fr."), the wet parts (alcohol collection) of which were later donated to the Royal Swedish Academy of Sciences and are today housed in the Swedish Museum of Natural History in Stockholm. The dry material was transferred to Uppsala. System of taxonomy The establishment of universally accepted conventions for the naming of organisms was Linnaeus's main contribution to taxonomy—his work marks the starting point of consistent use of binomial nomenclature. 
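The two-part structure of such a name can be pictured as a simple record pairing a generic name with a specific epithet. The following minimal Python sketch is only an illustration of that idea, not anything prescribed by Linnaeus or by the modern nomenclature codes; the class and field names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class BinomialName:
    genus: str             # the generic name, capitalised by convention
    specific_epithet: str  # the specific epithet, written in lower case

    def label(self) -> str:
        # A binomial serves as a concise label for the species.
        return f"{self.genus.capitalize()} {self.specific_epithet.lower()}"

# The example binomial mentioned above, and the name later given to humans.
print(BinomialName("Physalis", "angulata").label())  # Physalis angulata
print(BinomialName("Homo", "sapiens").label())       # Homo sapiens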
During the 18th century expansion of natural history knowledge, Linnaeus also developed what became known as the Linnaean taxonomy: the system of scientific classification now widely used in the biological sciences. A previous zoologist, Rumphius (1627–1702), had more or less approximated the Linnaean system, and his material contributed to the later development of the binomial scientific classification by Linnaeus. The Linnaean system classified nature within a nested hierarchy, starting with three kingdoms. Kingdoms were divided into classes and they, in turn, into orders, and thence into genera (singular: genus), which were divided into species (singular: species). Below the rank of species he sometimes recognised taxa of a lower (unnamed) rank; these have since acquired standardised names such as variety in botany and subspecies in zoology. Modern taxonomy includes a rank of family between order and genus and a rank of phylum between kingdom and class that were not present in Linnaeus's original system. Linnaeus's groupings were based upon shared physical characteristics rather than upon differences. Of his higher groupings, only those for animals are still in use, and the groupings themselves have been significantly changed since their conception, as have the principles behind them. Nevertheless, Linnaeus is credited with establishing the idea of a hierarchical structure of classification which is based upon observable characteristics and intended to reflect natural relationships. While the underlying details concerning what are considered to be scientifically valid "observable characteristics" have changed with expanding knowledge (for example, DNA sequencing, unavailable in Linnaeus's time, has proven to be a tool of considerable utility for classifying living organisms and establishing their evolutionary relationships), the fundamental principle remains sound. Human taxonomy Linnaeus's system of taxonomy was especially noted as the first to include humans (Homo) taxonomically grouped with apes (Simia), under the heading of Anthropomorpha. German biologist Ernst Haeckel, speaking in 1907, noted this as the "most important sign of Linnaeus's genius". Linnaeus classified humans among the primates beginning with the first edition of Systema Naturae. During his time at Hartekamp, he had the opportunity to examine several monkeys and noted similarities between them and man. He pointed out that both basically have the same anatomy; except for speech, he found no other differences. Thus he placed man and monkeys under the same category, Anthropomorpha, meaning "manlike." This classification received criticism from other biologists such as Johan Gottschalk Wallerius, Jacob Theodor Klein and Johann Georg Gmelin on the grounds that it is illogical to describe man as human-like. Linnaeus replied to the criticism in a letter to Gmelin in 1747. The theological concerns were twofold: first, putting man at the same level as monkeys or apes would lower the spiritually higher position that man was assumed to have in the great chain of being, and second, because the Bible says man was created in the image of God (theomorphism), if monkeys/apes and humans were not distinctly and separately designed, that would mean monkeys and apes were created in the image of God as well. This was something many could not accept. 
The conflict between world views that was caused by asserting man was a type of animal would simmer for a century until the much greater, and still ongoing, creation–evolution controversy began in earnest with the publication of On the Origin of Species by Charles Darwin in 1859. After such criticism, Linnaeus felt he needed to explain himself more clearly. The 10th edition of Systema Naturae introduced new terms, including Mammalia and Primates (the latter of which replaced Anthropomorpha), and gave humans the full binomial Homo sapiens. The new classification received less criticism, but many natural historians still believed he had demoted humans from their former place above, and apart from, the rest of nature. Linnaeus believed that man biologically belongs to the animal kingdom and had to be included in it. In his book , he said, "One should not vent one's wrath on animals. Theology decrees that man has a soul and that the animals are mere 'automata mechanica,' but I believe they would be better advised that animals have a soul and that the difference is of nobility." Linnaeus added a second species to the genus Homo in Systema Naturae based on a figure and description by Jacobus Bontius from a 1658 publication: Homo troglodytes ("caveman") and published a third in 1771: Homo lar. Swedish historian Gunnar Broberg states that the new human species Linnaeus described were actually simians or native people clad in skins to frighten colonial settlers, whose appearance had been exaggerated in accounts to Linnaeus. For Homo troglodytes, Linnaeus asked the Swedish East India Company to search for one, but it found no signs of its existence. Homo lar has since been reclassified as Hylobates lar, the lar gibbon. In the first edition of Systema Naturae, Linnaeus subdivided the human species into four varieties: "Europæus albesc[ens]" (whitish European), "Americanus rubesc[ens]" (reddish American), "Asiaticus fuscus" (tawny Asian) and "Africanus nigr[iculus]" (blackish African). In the tenth edition of Systema Naturae he further detailed phenotypical characteristics for each variety, based on the concept of the four temperaments from classical antiquity, and changed the description of Asians' skin tone to "luridus" (yellow). Additionally, Linnaeus created a wastebasket taxon "monstrosus" for "wild and monstrous humans, unknown groups, and more or less abnormal people". In 1959, W. T. Stearn designated Linnaeus to be the lectotype of H. sapiens. Influences and economic beliefs Linnaeus's applied science was inspired not only by the instrumental utilitarianism general to the early Enlightenment, but also by his adherence to the older economic doctrine of Cameralism. Additionally, Linnaeus was a state interventionist. He supported tariffs, levies, export bounties, quotas, embargoes, navigation acts, subsidised investment capital, ceilings on wages, cash grants, state-licensed producer monopolies, and cartels. Commemoration Anniversaries of Linnaeus's birth, especially in centennial years, have been marked by major celebrations. Linnaeus has appeared on numerous Swedish postage stamps and banknotes. There are numerous statues of Linnaeus in countries around the world. The Linnean Society of London has awarded the Linnean Medal for excellence in botany or zoology since 1888. Following approval by the Riksdag of Sweden, Växjö University and Kalmar College merged on 1 January 2010 to become Linnaeus University. 
Other things named after Linnaeus include the twinflower genus Linnaea, Linnaeosicyos (a monotypic genus in the family Cucurbitaceae), the crater Linné on the Earth's moon, a street in Cambridge, Massachusetts, and the cobalt sulfide mineral Linnaeite. Commentary Andrew Dickson White discussed Linnaeus in A History of the Warfare of Science with Theology in Christendom (1896). When the mathematical PageRank algorithm was applied to 24 multilingual Wikipedia editions in 2014, in a study published in PLOS ONE in 2015, it ranked Carl Linnaeus as the top historical figure, above Jesus, Aristotle, Napoleon, and Adolf Hitler (in that order). In the 21st century, Linnæus's taxonomy of human "races" has been problematised and discussed. Some critics claim that Linnæus was one of the forebears of the modern pseudoscientific notion of scientific racism, while others hold the view that while his classification was stereotyped, it did not imply that certain human "races" were superior to others. Standard author abbreviation The standard botanical author abbreviation L. is used to indicate Linnaeus as the author when citing a botanical name. Selected publications by Linnaeus Linnaeus, Carl 1746 Fauna svecica. Sistens Animalia Sveciae Regni: Quadrupedia, Aves, Amphibia, Pisces, Insecta, Vermes, distributae per classes & ordines, genera & species. C. Wishoff & G.J. Wishoff, Lugduni Batavorum. see also Species Plantarum See also Linnaeus's flower clock Johann Bartsch, colleague Centuria Insectorum History of botany History of phycology Scientific revolution References Notes Citations Sources Further reading External links Biographies Biography at the Department of Systematic Botany, University of Uppsala Biography at The Linnean Society of London Biography from the University of California Museum of Paleontology A four-minute biographical video from the London Natural History Museum on YouTube Biography from Taxonomic Literature, 2nd Edition. 1976–2009. Resources The Linnean Society of London The Linnaeus Apostles The Linnean Collections The Linnean Correspondence Linnaeus's Disciples and Apostles The Linnaean Dissertations Linnean Herbarium The Linnaeus Tercentenary Works by Carl von Linné at the Biodiversity Heritage Library Digital edition: "Critica Botanica" by the University and State Library Düsseldorf Digital edition: "Classes plantarum seu systemata plantarum" by the University and State Library Düsseldorf Oratio de telluris habitabilis incremento (1744) – full digital facsimile from Linda Hall Library Other Linnaeus was depicted by Jay Hosler in a parody of Peanuts titled "Good ol' Charlie Darwin". The 15 March 2007 issue of Nature featured a picture of Linnaeus on the cover with the heading "Linnaeus's Legacy" and devoted a substantial portion to items related to Linnaeus and Linnaean taxonomy. A tattoo of Linnaeus's definition of the order Primates mentioned by Carl Zimmer Ginkgo biloba tree at the University of Harderwijk, said to have been planted by Linnaeus in 1735 SL Magazine, Spring 2018, features an article by Nicholas Sparks, Librarian, Collection Strategy and Development, titled Origins of Taxonomy, describing a generous donation from the Linnean Society of NSW to supplement the State Library of New South Wales's collections on Carl Linnaeus of documents, photographs, prints and drawings as well as a fine portrait of Linnaeus painted about 1800. 
1707 births 1778 deaths 18th-century writers in Latin 18th-century male writers 18th-century Swedish physicians 18th-century Swedish zoologists 18th-century Swedish writers Age of Liberty people Swedish arachnologists Botanical nomenclature Botanists active in Europe Botanists with author abbreviations Bryologists Burials at Uppsala Cathedral Fellows of the Royal Society Historical definitions of race Knights of the Order of the Polar Star Members of the French Academy of Sciences Members of the Prussian Academy of Sciences Members of the Royal Swedish Academy of Sciences People from Älmhult Municipality Phycologists Pteridologists Swedish autobiographers Swedish biologists Swedish botanists Swedish entomologists Swedish expatriates in the Dutch Republic Swedish Lutherans Swedish mammalogists Swedish mycologists Linne, Carl von Swedish ornithologists Swedish taxonomists Terminologists Taxon authorities of Hypericum species University of Harderwijk alumni Uppsala University alumni Academic staff of Uppsala University Members of the American Philosophical Society 18th-century lexicographers
5236
https://en.wikipedia.org/wiki/Coast
Coast
The coast, also known as the coastline or seashore, is defined as the area where land meets the ocean, or as a line that forms the boundary between the land and the ocean. Shores are influenced by the topography of the surrounding landscape, as well as by water-induced erosion, such as waves. The geological composition of rock and soil dictates the type of shore which is created. The Earth has around of coastline. Coasts are important zones in natural ecosystems, often home to a wide range of biodiversity. On land, they harbor important ecosystems such as freshwater or estuarine wetlands, which are important for bird populations and other terrestrial animals. In wave-protected areas they harbor saltmarshes, mangroves or seagrasses, all of which can provide nursery habitat for finfish, shellfish, and other aquatic species. Rocky shores are usually found along exposed coasts and provide habitat for a wide range of sessile animals (e.g. mussels, starfish, barnacles) and various kinds of seaweeds. In physical oceanography, a shore is the wider fringe that is geologically modified by the action of the body of water past and present, while the beach is at the edge of the shore, representing the intertidal zone where there is one. Along tropical coasts with clear, nutrient-poor water, coral reefs can often be found between depths of . According to an atlas prepared by the United Nations, 44% of all humans live within 150 km (93 mi) of the sea. Because of its importance in society and its high population concentrations, the coast is vital to major parts of the global food and economic system, and coasts provide many ecosystem services to humankind. For example, important human activities happen in port cities. Coastal fisheries (commercial, recreational, and subsistence) and aquaculture are major economic activities and create jobs, livelihoods, and protein for the majority of coastal human populations. Other coastal spaces like beaches and seaside resorts generate large revenues through tourism. Marine coastal ecosystems can also provide protection against sea level rise and tsunamis. In many countries, mangroves are the primary source of wood for fuel (e.g. charcoal) and building material. Coastal ecosystems like mangroves and seagrasses have a much higher capacity for carbon sequestration than many terrestrial ecosystems, and as such can play a critical role in the near future in helping to mitigate climate change effects through the uptake of atmospheric anthropogenic carbon dioxide. However, the economic importance of coasts makes many of these communities vulnerable to climate change, which causes increases in extreme weather and sea level rise, and related issues such as coastal erosion, saltwater intrusion and coastal flooding. Other coastal issues, such as marine pollution, marine debris, coastal development, and marine ecosystem destruction, further complicate the human uses of the coast and threaten coastal ecosystems. The interactive effects of climate change, habitat destruction, overfishing and water pollution (especially eutrophication) have led to the demise of coastal ecosystems around the globe. This has resulted in population collapse of fisheries stocks, loss of biodiversity, increased invasion of alien species, and loss of healthy habitats. 
International attention to these issues has been captured in Sustainable Development Goal 14 "Life Below Water", which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention. Because coasts are constantly changing, a coastline's exact perimeter cannot be determined; this measurement challenge is called the coastline paradox. The term coastal zone is used to refer to a region where interactions of sea and land processes occur. Both the terms coast and coastal are often used to describe a geographic location or region located on a coastline (e.g., New Zealand's West Coast, or the East, West, and Gulf Coast of the United States). Coasts with a narrow continental shelf that are close to the open ocean are called pelagic coasts, while other coasts are more sheltered coasts in a gulf or bay. A shore, on the other hand, may refer to parts of land adjoining any large body of water, including oceans (sea shore) and lakes (lake shore). Size The Earth has approximately of coastline. Coastal habitats, which extend to the margins of the continental shelves, make up about 7 percent of the Earth's oceans, but at least 85% of commercially harvested fish depend on coastal environments during at least part of their life cycle. About 2.86% of exclusive economic zones were part of marine protected areas. The definition of coasts varies. Marine scientists think of the "wet" (aquatic or intertidal) vegetated habitats as being coastal ecosystems (including seagrass, salt marsh etc.) whilst some terrestrial scientists might only think of coastal ecosystems as purely terrestrial plants that live close to the seashore (see also estuaries and coastal ecosystems). While there is general agreement in the scientific community regarding the definition of coast, in the political sphere the delineation of the extents of a coast differs according to jurisdiction. Government authorities in various countries may define coast differently for economic and social policy reasons. Exact length of coastline Formation Tides often determine the range over which sediment is deposited or eroded. Areas with high tidal ranges allow waves to reach farther up the shore, and areas with lower tidal ranges produce deposition at a smaller elevation interval. The tidal range is influenced by the size and shape of the coastline. Tides do not typically cause erosion by themselves; however, tidal bores can erode as the waves surge up the river estuaries from the ocean. Geologists classify coasts on the basis of tidal range into macrotidal coasts with a tidal range greater than 4 m; mesotidal coasts with a tidal range of 2 to 4 m; and microtidal coasts with a tidal range of less than 2 m. The distinction between macrotidal and mesotidal coasts is the more important one. Macrotidal coasts lack barrier islands and lagoons, and are characterized by funnel-shaped estuaries containing sand ridges aligned with tidal currents. Wave action is much more important for determining bedforms of sediments deposited along mesotidal and microtidal coasts than along macrotidal coasts. Waves erode the coastline as they break on shore, releasing their energy; the larger the wave, the more energy it releases and the more sediment it moves. 
Coastlines with longer shore faces give waves more room to disperse their energy, while coasts with cliffs and short shore faces give little room for the wave energy to be dispersed. In these areas, the wave energy breaking against the cliffs is higher, and air and water are compressed into cracks in the rock, forcing the rock apart, breaking it down. Sediment deposited by waves comes from eroded cliff faces and is moved along the coastline by the waves. This forms an abrasion or cliffed coast. Sediment deposited by rivers is the dominant influence on the amount of sediment on coastlines that have estuaries. Today, riverine deposition at the coast is often blocked by dams and other human regulatory devices, which remove the sediment from the stream by causing it to be deposited inland. Coral reefs are a provider of sediment for coastlines of tropical islands. Like the ocean which shapes them, coasts are a dynamic environment with constant change. The Earth's natural processes, particularly sea level rises, waves and various weather phenomena, have resulted in the erosion, accretion and reshaping of coasts as well as flooding and creation of continental shelves and drowned river valleys (rias). Importance for humans and ecosystems Human settlements More and more of the world's people live in coastal regions. According to a United Nations atlas, 44% of all people live within 150 km (93 mi) of the sea. Many major cities are on or near good harbors and have port facilities. Some landlocked places have achieved port status by building canals. Nations defend their coasts against military invaders, smugglers and illegal migrants. Fixed coastal defenses have long been erected in many nations, and coastal countries typically have a navy and some form of coast guard. Tourism Coasts, especially those with beaches and warm water, attract tourists, often leading to the development of seaside resort communities. In many island nations such as those of the Mediterranean, South Pacific Ocean and Caribbean, tourism is central to the economy. Coasts offer recreational activities such as swimming, fishing, surfing, boating, and sunbathing. Growth management and coastal management can be a challenge for coastal local authorities who often struggle to provide the infrastructure required by new residents, and poor construction management practices often leave these communities and infrastructure vulnerable to processes like coastal erosion and sea level rise. In many of these communities, management practices include beach nourishment or, when the coastal infrastructure is no longer financially sustainable, managed retreat to move communities away from the coast. Ecosystem services Types Emergent coastline According to one principle of classification, an emergent coastline is a coastline that has experienced a fall in sea level, because of either a global sea-level change or local uplift. Emergent coastlines are identifiable by the coastal landforms, which are above the high tide mark, such as raised beaches. In contrast, a submergent coastline is one where the sea level has risen, due to a global sea-level change, local subsidence, or isostatic rebound. Submergent coastlines are identifiable by their submerged, or "drowned", landforms, such as rias (drowned valleys) and fjords. Concordant coastline According to the second principle of classification, a concordant coastline is a coastline where bands of different rock types run parallel to the shore. 
These rock types are usually of varying resistance, so the coastline forms distinctive landforms, such as coves. Discordant coastlines, where bands of different rock types run perpendicular to the shore, also feature distinctive landforms because the rocks are eroded unevenly by the ocean waves. The less resistant rocks erode faster, creating inlets or bays; the more resistant rocks erode more slowly, remaining as headlands or outcroppings. Rivieras Riviera is an Italian word for "shoreline", ultimately derived from Latin ripa ("riverbank"). It came to be applied as a proper name to the coast of the Ligurian Sea, in the form riviera ligure, then shortened to riviera. Historically, the Ligurian Riviera extended from Capo Corvo (Punta Bianca) south of Genoa, north and west into what is now French territory past Monaco and sometimes as far as Marseilles. Today, this coast is divided into the Italian Riviera and the French Riviera, although the French use the term "Riviera" to refer to the Italian Riviera and call the French portion the "Côte d'Azur". As a result of the fame of the Ligurian rivieras, the term came into English to refer to any shoreline, especially one that is sunny, topographically diverse and popular with tourists. Such places using the term include the Australian Riviera in Queensland and the Turkish Riviera along the Aegean Sea. Other coastal categories A cliffed coast or abrasion coast is one where marine action has produced steep declivities known as cliffs. A flat coast is one where the land gradually descends into the sea. A graded shoreline is one where wind and water action has produced a flat and straight coastline. Landforms The following articles describe some coastal landforms: Barrier island Bay Headland Cove Peninsula Cliff erosion Much of the sediment deposited along a coast is the result of erosion of a surrounding cliff, or bluff. Sea cliffs retreat landward because of the constant undercutting of slopes by waves. If the slope/cliff being undercut is made of unconsolidated sediment, it will erode at a much faster rate than a cliff made of bedrock. A natural arch is formed when a headland is eroded through by waves. Sea caves are made when certain rock beds are more susceptible to erosion than the surrounding rock beds because of different areas of weakness. These areas are eroded at a faster pace, creating a hole or crevice that, through time, by means of wave action and erosion, becomes a cave. A stack is formed when a headland is eroded away by wave and wind action. A stump is a shortened sea stack that has been eroded away or fallen because of instability. Wave-cut notches are caused by the undercutting of overhanging slopes, which leads to increased stress on cliff material and a greater probability that the slope material will fall. The fallen debris accumulates at the bottom of the cliff and is eventually removed by waves. A wave-cut platform forms after erosion and retreat of a sea cliff has been occurring for a long time. Gently sloping wave-cut platforms develop early on in the first stages of cliff retreat. Later, the length of the platform decreases because the waves lose their energy as they break further offshore. 
Coastal features formed by sediment Beach Beach cusps Cuspate foreland Dune system Mudflat Raised beach Ria Shoal Spit Strand plain Surge channel Tombolo Coastal features formed by another feature Estuary Lagoon Salt marsh Mangrove forests Kelp forests Coral reefs Oyster reefs Other features on the coast Concordant coastline Discordant coastline Fjord Island Island arc Machair Coastal waters "Coastal waters" (or "coastal seas") is a rather general term used differently in different contexts, ranging geographically from the waters within a few kilometers of the coast, through to the entire continental shelf, which may stretch for more than a hundred kilometers from land. Thus the term coastal waters is used in a slightly different way in discussions of legal and economic boundaries (see territorial waters and international waters) or when considering the geography of coastal landforms or the ecological systems operating through the continental shelf (marine coastal ecosystems). The research on coastal waters often divides into these separate areas too. The dynamic fluid nature of the ocean means that all components of the whole ocean system are ultimately connected, although certain regional classifications are useful and relevant. The waters of the continental shelves represent such a region. The term "coastal waters" has been used in a wide variety of different ways in different contexts. In European Union environmental management it extends from the coast to just a few nautical miles, while in the United States the US EPA considers this region to extend much further offshore. "Coastal waters" has specific meanings in the context of commercial coastal shipping, and somewhat different meanings in the context of naval littoral warfare. Oceanographers and marine biologists take yet other views. Coastal waters have a wide range of marine habitats from enclosed estuaries to the open waters of the continental shelf. Similarly, the term littoral zone has no single definition. It is the part of a sea, lake, or river that is close to the shore. In coastal environments, the littoral zone extends from the high water mark, which is rarely inundated, to shoreline areas that are permanently submerged. Coastal waters can be threatened by coastal eutrophication and harmful algal blooms. In geology The identification of bodies of rock formed from sediments deposited in shoreline and nearshore environments (shoreline and nearshore facies) is extremely important to geologists. These provide vital clues for reconstructing the geography of ancient continents (paleogeography). The locations of these beds show the extent of ancient seas at particular points in geological time, and provide clues to the magnitudes of tides in the distant past. Sediments deposited in the shoreface are preserved as lenses of sandstone in which the upper part of the sandstone is coarser than the lower part (a coarsening upwards sequence). Geologists refer to these as parasequences. Each records an episode of retreat of the ocean from the shoreline over a period of 10,000 to 1,000,000 years. These often show laminations reflecting various kinds of tidal cycles. Some of the best-studied shoreline deposits in the world are found along the former western shore of the Western Interior Seaway, a shallow sea that flooded central North America during the late Cretaceous Period (about 100 to 66 million years ago). These are beautifully exposed along the Book Cliffs of Utah and Colorado. 
Geologic processes The following articles describe the various geologic processes that affect a coastal zone: Attrition Currents Denudation Deposition Erosion Flooding Longshore drift Marine sediments Saltation Sea level change eustatic isostatic Sedimentation Coastal sediment supply sediment transport solution subaerial processes suspension Tides Water waves diffraction refraction wave breaking wave shoaling Weathering Wildlife Animals Larger animals that live in coastal areas include puffins, sea turtles and rockhopper penguins, among many others. Sea snails and various kinds of barnacles live on rocky coasts and scavenge on food deposited by the sea. Some coastal animals are accustomed to humans in developed areas, such as dolphins and seagulls that eat food thrown to them by tourists. Since the coastal areas are all part of the littoral zone, there is a profusion of marine life found just off-coast, including sessile animals such as corals, sponges, mussels and sea anemones, as well as starfish, seaweeds and fishes. There are many kinds of seabirds on various coasts. These include pelicans and cormorants, which join up with terns and oystercatchers to forage for fish and shellfish. There are sea lions on the coast of Wales and other countries. Coastal fish Plants Many coastal areas are famous for their kelp beds. Kelp is a fast-growing seaweed that can grow up to half a meter a day in ideal conditions. Mangroves, seagrasses, macroalgal beds, and salt marshes are important coastal vegetation types in tropical and temperate environments. Restinga is another type of coastal vegetation. Threats Coasts also face many human-induced environmental impacts and coastal development hazards. The most important ones are: Pollution, which can be in the form of water pollution, nutrient pollution (leading to coastal eutrophication and harmful algal blooms), oil spills or marine debris that is contaminating coasts with plastic and other trash. Sea level rise, and associated issues like coastal erosion and saltwater intrusion. Pollution The pollution of coastlines is connected to marine pollution, which can occur from a number of sources: Marine debris (garbage and industrial debris); the transportation of petroleum in tankers, increasing the probability of large oil spills; small oil spills created by large and small vessels, which flush bilge water into the ocean. Marine pollution Marine debris Microplastics Sea level rise due to climate change Global goals International attention to address the threats to coasts has been captured in Sustainable Development Goal 14 "Life Below Water", which sets goals for international policy focused on preserving marine coastal ecosystems and supporting more sustainable economic practices for coastal communities. Likewise, the United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention. See also Bank (geography) Beach cleaning Coastal and Estuarine Research Federation European Atlas of the Seas Intertidal zone Land reclamation List of countries by length of coastline List of U.S. states by coastline Offshore or Intertidal zone Ballantine Scale Coastal path Shorezone References External links Woods Hole Oceanographic Institution - organization dedicated to ocean research, exploration, and education Coastal and oceanic landforms Coastal geography Oceanographical terminology Articles containing video clips
5237
https://en.wikipedia.org/wiki/Catatonia
Catatonia
Catatonia is a complex neuropsychiatric behavioral syndrome that is characterized by abnormal movements, immobility, abnormal behaviors, and withdrawal. The onset of catatonia can be acute or subtle and symptoms can wax, wane, or change during episodes. It has historically been related to schizophrenia (catatonic schizophrenia), but catatonia is most often seen in mood disorders. It is now known that catatonic symptoms are nonspecific and may be observed in other mental, neurological, and medical conditions. Catatonia is not a stand-alone diagnosis (although some experts disagree), and the term is used to describe a feature of the underlying disorder. There are several subtypes of catatonia: akinetic catatonia, excited catatonia, malignant catatonia, and delirious mania. Recognizing and treating catatonia is very important, as failure to do so can lead to poor outcomes and can be potentially fatal. Treatment with benzodiazepines or electroconvulsive therapy (ECT) can lead to remission of catatonia. There is growing evidence of the effectiveness of the NMDA receptor antagonists amantadine and memantine for benzodiazepine-resistant catatonia. Antipsychotics are sometimes employed, but they can worsen symptoms and have serious adverse effects. Signs and symptoms The presentation of a patient with catatonia varies greatly depending on the subtype and underlying cause, and can be acute or subtle. Because most patients with catatonia have an underlying psychiatric illness, the majority will present with worsening depression, mania, or psychosis followed by catatonia symptoms. Catatonia presents as a motor disturbance in which patients will display marked reduction in movement, marked agitation, or a mixture of both despite having the physical capacity to move normally. These patients may be unable to start an action or stop one. Movements and mannerisms may be repetitive or purposeless. The most common signs of catatonia are immobility, mutism, withdrawal and refusal to eat, staring, negativism, posturing, rigidity, waxy flexibility/catalepsy, stereotypy (purposeless, repetitive movements), echolalia or echopraxia, and verbigeration (repeating meaningless phrases). It should not be assumed that patients presenting with catatonia are unaware of their surroundings, as some patients can recall in detail their catatonic state and their actions. There are several subtypes of catatonia and they are characterized by the specific movement disturbance and associated features. Although catatonia can be divided into various subtypes, the natural history of catatonia is often fluctuant and different states can exist within the same individual. Subtypes Withdrawn Catatonia: This form of catatonia is characterized by decreased response to external stimuli, immobility or inhibited movement, mutism, staring, posturing, and negativism. Patients may sit or stand in the same position for hours, may hold odd positions, and may resist movement of their extremities. Excited Catatonia: Excited catatonia is characterized by odd mannerisms/gestures, performing purposeless or inappropriate actions, excessive motor activity, restlessness, stereotypy, impulsivity, agitation, and combativeness. Speech and actions may be repetitive or mimic another person's. People in this state are extremely hyperactive and may have delusions and hallucinations. Catatonic excitement is commonly cited as one of the most dangerous mental states in psychiatry. 
Malignant Catatonia: Malignant catatonia is a life-threatening condition that may progress rapidly within a few days. It is characterized by fever, abnormalities in blood pressure, heart rate, and respiratory rate, diaphoresis (sweating), and delirium. Certain lab findings are common with this presentation; however, they are nonspecific, which means that they are also present in other conditions and do not diagnose catatonia. These lab findings include leukocytosis, elevated creatine kinase, and low serum iron. The signs and symptoms of malignant catatonia overlap significantly with those of neuroleptic malignant syndrome (NMS), and so a careful history, review of medications, and physical exam are critical to properly differentiate these conditions. For example, if the patient has waxy flexibility and holds a position against gravity when passively moved into that position, then it is likely catatonia. If the patient has "lead-pipe rigidity", then NMS should be the prime suspect. Other forms: Periodic catatonia is an inconsistently defined entity. In the Wernicke-Kleist-Leonhard school, it is a distinct form of "non-system schizophrenia" characterized by recurrent acute phases with hyperkinetic and akinetic features and often psychotic symptoms, and the build-up of a residual state in between these acute phases, which is characterized by low-level catatonic features and aboulia of varying severity. The condition has a strong hereditary component. According to modern classifications, this may be diagnosed as a form of bipolar disorder, schizoaffective disorder or schizophrenia. Independently, the term periodic catatonia is sometimes used in modern literature to describe a syndrome of recurrent phases of acute catatonia (excited or inhibited type) with full remission between episodes, which resembles the description of "motility psychosis" in the Wernicke-Kleist-Leonhard school. System catatonias or systematic catatonias are only defined in the Wernicke-Kleist-Leonhard school. These are chronic-progressive conditions characterized by specific disturbances of volition and psychomotricity, leading to a dramatic decline of executive and adaptive functioning and ability to communicate. They are considered forms of schizophrenia but distinct from other schizophrenic conditions. Affective flattening and apparent loss of interests are common but may be related to reduced emotional expression rather than lack of emotion. Heredity is low. Of the 21 different forms (6 "simple" and 15 "combined" forms) that have been described, most overlap only partially, if at all, with current definitions of either catatonia or schizophrenia, and thus are difficult to classify according to modern diagnostic manuals. Early childhood catatonias are also a diagnosis exclusive to the Wernicke-Kleist-Leonhard school, and refer to system catatonias that manifest in young children. Clinically, these conditions resemble severe regressive forms of autism. Chronic catatonia-like breakdown or autistic catatonia refers to a functional decline seen in some patients with pre-existing autism spectrum disorder and/or intellectual disability, which usually runs a chronic-progressive course and encompasses attenuated catatonic symptoms as well as mood and anxiety symptoms that increasingly interfere with adaptive functioning. Onset is typically insidious and often mistaken for background autistic symptoms. 
Slowing of voluntary movement, reduced speech, aboulia, increased prompt dependency and obsessive-compulsive symptoms are frequently seen; negativism, (auto-)aggressive behaviors and ill-defined hallucinations have also been reported. Both the causes of this disorder as well as its prognosis appear to be heterogeneous, with most patients showing partial recovery upon treatment. It seems to be related to chronic stress as a result of life transitions, loss of external time structuring, sensory sensitivities and/or traumatic experiences, co-morbid mental disorders, or other unknown causes. Since clinical catatonia cannot always be diagnosed, this condition has also been given the more general name "late regression". Complications Patients may experience several complications from being in a catatonic state. The nature of these complications will depend on the type of catatonia being experienced by the patient. For example, patients presenting with withdrawn catatonia may refuse to eat, which will in turn lead to malnutrition and dehydration. Furthermore, if immobility is a symptom the patient is presenting with, then they may develop pressure ulcers and muscle contractures and are at risk of developing deep vein thrombosis (DVT) and pulmonary embolism (PE). Patients with excited catatonia may be aggressive and violent, and physical trauma may result from this. Catatonia may progress to the malignant type, which will present with autonomic instability and may be life-threatening. Other complications also include the development of pneumonia and neuroleptic malignant syndrome. Causes Catatonia is almost always secondary to another underlying illness, often a psychiatric disorder. Mood disorders such as bipolar disorder and depression are the most common etiologies to progress to catatonia. Other psychiatric associations include schizophrenia and other primary psychotic disorders. It is also related to autism spectrum disorders and ADHD. Psychodynamic theorists have interpreted catatonia as a defense against the potentially destructive consequences of responsibility, and the passivity of the disorder provides relief. Catatonia is also seen in many medical disorders, including infections (such as encephalitis), autoimmune disorders, meningitis, focal neurological lesions (including strokes), alcohol withdrawal, abrupt or overly rapid benzodiazepine withdrawal, cerebrovascular disease, neoplasms, head injury, and some metabolic conditions (homocystinuria, diabetic ketoacidosis, hepatic encephalopathy, and hypercalcaemia). Pathogenesis The pathophysiology that leads to catatonia is still poorly understood and a definite mechanism remains unknown. Neurologic studies have implicated several pathways; however, it remains unclear whether these findings are the cause or the consequence of the disorder. Abnormalities in GABA, glutamate, serotonin, and dopamine transmission are believed to be implicated in catatonia. Furthermore, it has also been hypothesized that pathways connecting the basal ganglia with the cortex and thalamus are involved in the development of catatonia. Diagnosis There is not yet a definitive consensus regarding diagnostic criteria of catatonia. In the fifth edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-5, 2013) and the World Health Organization's eleventh edition of the International Classification of Diseases (ICD-11, 2022), the classification is more homogeneous than in earlier editions. 
Prominent researchers in the field have other suggestions for diagnostic criteria. DSM-5 classification The DSM-5 does not classify catatonia as an independent disorder, but rather classifies it as catatonia associated with another mental disorder, catatonia due to another medical condition, or unspecified catatonia. Catatonia is diagnosed by the presence of three or more of the following 12 psychomotor symptoms in association with a mental disorder, medical condition, or unspecified:
stupor: no psychomotor activity; not actively relating to the environment
catalepsy: passive induction of a posture held against gravity
waxy flexibility: allowing positioning by an examiner and maintaining that position
mutism: no, or very little, verbal response (exclude if known aphasia)
negativism: opposition or no response to instructions or external stimuli
posturing: spontaneous and active maintenance of a posture against gravity
mannerisms: odd, circumstantial caricatures of normal actions
stereotypy: repetitive, abnormally frequent, non-goal-directed movements
agitation: not influenced by external stimuli
grimacing: keeping a fixed facial expression
echolalia: mimicking another's speech
echopraxia: mimicking another's movements.
Other disorders (additional code 293.89 [F06.1] to indicate the presence of the co-morbid catatonia):
Catatonia associated with autism spectrum disorder
Catatonia associated with schizophrenia spectrum and other psychotic disorders
Catatonia associated with brief psychotic disorder
Catatonia associated with schizophreniform disorder
Catatonia associated with schizoaffective disorder
Catatonia associated with a substance-induced psychotic disorder
Catatonia associated with bipolar and related disorders
Catatonia associated with major depressive disorder
Catatonic disorder due to another medical condition
If catatonic symptoms are present but do not form the catatonic syndrome, a medication- or substance-induced aetiology should first be considered. ICD-11 classification In ICD-11 catatonia is defined as a syndrome of primarily psychomotor disturbances that is characterized by the simultaneous occurrence of several symptoms such as stupor; catalepsy; waxy flexibility; mutism; negativism; posturing; mannerisms; stereotypies; psychomotor agitation; grimacing; echolalia and echopraxia. Catatonia may occur in the context of specific mental disorders, including mood disorders, schizophrenia or other primary psychotic disorders, and neurodevelopmental disorders, and may be induced by psychoactive substances, including medications. Catatonia may also be caused by a medical condition not classified under mental, behavioral, or neurodevelopmental disorders. Assessment/Physical Catatonia is often overlooked and under-diagnosed. Patients with catatonia most commonly have an underlying psychiatric disorder; for this reason, physicians may overlook signs of catatonia due to the severity of the psychosis the patient is presenting with. Furthermore, the patient may not be presenting with the common signs of catatonia such as mutism and posturing. Additionally, the motor abnormalities seen in catatonia are also present in other psychiatric disorders. For example, a patient with mania will show increased motor activity that may progress to excited catatonia. One way in which physicians can differentiate between the two is to observe the motor abnormality. Patients with mania present with increased goal-directed activity.
On the other hand, the increased activity in catatonia is not goal-directed and often repetitive. Catatonia is a clinical diagnosis and there is no specific laboratory test to diagnose it. However, certain testing can help determine what is causing the catatonia. An EEG will likely show diffuse slowing. If seizure activity is driving the syndrome, then an EEG would also be helpful in detecting this. CT or MRI will not show catatonia; however, they might reveal abnormalities that might be leading to the syndrome. Metabolic screens, inflammatory markers, or autoantibodies may reveal reversible medical causes of catatonia. Vital signs should be frequently monitored, as catatonia can progress to malignant catatonia, which is life-threatening. Malignant catatonia is characterized by fever, hypertension, tachycardia, and tachypnea. Rating scale Various rating scales for catatonia have been developed; however, their utility for clinical care has not been well established. The most commonly used scale is the Bush-Francis Catatonia Rating Scale (BFCRS) (external link is provided below). The scale is composed of 23 items, with the first 14 items used as a screening tool. If 2 of the 14 are positive, this prompts further evaluation and completion of the remaining 9 items. A diagnosis can be supported by the lorazepam challenge or the zolpidem challenge. While proven useful in the past, barbiturates are no longer commonly used in psychiatry, leaving benzodiazepines or ECT as the usual options. Differential diagnosis The differential diagnosis of catatonia is extensive, as signs and symptoms of catatonia may overlap significantly with those of other conditions. Therefore, a careful and detailed history, medication review, and physical exam are key to diagnosing catatonia and differentiating it from other conditions. Furthermore, some of these conditions can themselves lead to catatonia. The differential diagnosis is as follows: Neuroleptic malignant syndrome (NMS) and catatonia are both life-threatening conditions that share many of the same characteristics, including fever, autonomic instability, rigidity, and delirium. Lab findings of low serum iron, elevated creatine kinase, and elevated white blood cell count are also shared by the two disorders, further complicating the diagnosis. There are features of malignant catatonia (posturing, impulsivity, etc.) that are absent from NMS, and the lab results are not as consistent in malignant catatonia as they are in NMS. Some experts consider NMS to be a drug-induced condition associated with antipsychotics, particularly first-generation antipsychotics, but it has not been established as a subtype of catatonia. Therefore, discontinuing antipsychotics and starting benzodiazepines is a treatment for this condition, and it is similarly helpful in catatonia. Anti-NMDA receptor encephalitis is an autoimmune disorder characterized by neuropsychiatric features and the presence of IgG antibodies directed against the NMDA receptor. The presentation of anti-NMDAR encephalitis has been categorized into 5 phases: prodromal phase, psychotic phase, unresponsive phase, hyperkinetic phase, and recovery phase. The psychotic phase progresses into the unresponsive phase, characterized by mutism, decreased motor activity, and catatonia. Both serotonin syndrome and malignant catatonia may present with signs and symptoms of delirium, autonomic instability, hyperthermia, and rigidity, again similar to the presentation of NMS.
However, patients with serotonin syndrome have a history of ingestion of serotonergic drugs (e.g., SSRIs). These patients will also present with hyperreflexia, myoclonus, nausea, vomiting, and diarrhea. Malignant hyperthermia and malignant catatonia share features of autonomic instability, hyperthermia, and rigidity. However, malignant hyperthermia is a hereditary disorder of skeletal muscle that makes these patients susceptible when exposed to halogenated anesthetics and/or depolarizing muscle relaxants like succinylcholine. Malignant hyperthermia most commonly occurs in the intraoperative or postoperative periods. Other signs and symptoms of malignant hyperthermia include metabolic and respiratory acidosis, hyperkalemia, and cardiac arrhythmias. Akinetic mutism is a neurological disorder characterized by a decrease in goal-directed behavior and motivation; however, the patient has an intact level of consciousness. Patients may present with apathy, and may seem indifferent to pain, hunger, or thirst. Akinetic mutism has been associated with structural damage in a variety of brain areas. Akinetic mutism and catatonia may both manifest with immobility, mutism, and waxy flexibility. Differentiating the two disorders is the fact that akinetic mutism does not present with echolalia, echopraxia, or posturing. Furthermore, akinetic mutism does not respond to benzodiazepines, unlike catatonia. Elective mutism has an anxious etiology but has also been associated with personality disorders. Patients with this disorder fail to speak with some individuals but will speak with others. Likewise, they may refuse to speak in certain situations; for example, a child who refuses to speak at school but is conversational at home. This disorder is distinguished from catatonia by the absence of any other signs/symptoms. Nonconvulsive status epilepticus is seizure activity with no accompanying tonic-clonic movements. It can present with stupor, similar to catatonia, and they both respond to benzodiazepines. Nonconvulsive status epilepticus is diagnosed by the presence of seizure activity seen on electroencephalogram (EEG). Catatonia, on the other hand, is associated with a normal EEG or diffuse slowing. Delirium is characterized by fluctuating disturbed perception and consciousness in the ill individual. It has hypoactive, hyperactive, and mixed forms. People with hyperactive delirium present similarly to those with excited catatonia and have symptoms of restlessness, agitation, and aggression. Those with hypoactive delirium present similarly to those with retarded catatonia, appearing withdrawn and quiet. However, catatonia also includes other distinguishing features, including posturing and rigidity as well as a positive response to benzodiazepines. Patients with locked-in syndrome present with immobility and mutism; however, unlike patients with catatonia, who are unmotivated to communicate, patients with locked-in syndrome try to communicate with eye movements and blinking. Furthermore, locked-in syndrome is caused by damage to the brainstem. Stiff-person syndrome and catatonia are similar in that they may both present with rigidity, autonomic instability, and a positive response to benzodiazepines. However, stiff-person syndrome may be associated with anti-glutamic acid decarboxylase (anti-GAD) antibodies, and other catatonic signs such as mutism and posturing are not part of the syndrome. Untreated late-stage Parkinson's disease may present similarly to retarded catatonia, with symptoms of immobility, rigidity, and difficulty speaking.
Further complicating the diagnosis is the fact that many patients with Parkinson's disease will have major depressive disorder, which may be the underlying cause of catatonia. Parkinson's disease can be distinguished from catatonia by a positive response to levodopa. Catatonia, on the other hand, will show a positive response to benzodiazepines. Extrapyramidal side effects of antipsychotic medication, especially dystonia and akathisia, can be difficult to distinguish from catatonic symptoms, or may confound them in the psychiatric setting. Extrapyramidal motor disorders usually do not involve social symptoms like negativism, while individuals with catatonic excitement typically do not have the physically painful compulsion to move that is seen in akathisia. Certain stimming behaviors and stress responses in individuals with autism spectrum disorders can present similarly to catatonia. In autism spectrum disorders, chronic catatonia is distinguished by a lasting deterioration of adaptive skills from the background of pre-existing autistic symptomatology that cannot be easily explained. Acute catatonia is usually clearly distinguishable from autistic symptoms. The diagnostic entities of obsessional slowness and psychogenic parkinsonism show overlapping features with catatonia, such as motor slowness, gegenhalten (oppositional paratonia), mannerisms, and reduced or absent speech. However, psychogenic parkinsonism involves tremor, which is unusual in catatonia. Obsessional slowness is a controversial diagnosis, with presentations ranging from severe but common manifestations of obsessive-compulsive disorder to catatonia. Down Syndrome Disintegrative Disorder (or Down Syndrome Regression Disorder, DSDD / DSRD) is a chronic condition characterized by loss of previously acquired adaptive, cognitive and social functioning occurring in persons with Down Syndrome, usually during adolescence or early adulthood. The clinical picture is variable, but often includes catatonic signs, which is why it was called "catatonic psychosis" in initial reports in 1946. DSDD seems to phenotypically overlap with obsessional slowness (see above) and catatonia-like regression occurring in ASD. Treatment The initial treatment of catatonia is to stop any medication that could potentially be contributing to the syndrome. These may include steroids, stimulants, anticonvulsants, neuroleptics, dopamine blockers, etc. The next step is to provide a "lorazepam challenge," in which patients are given 2 mg of IV lorazepam (or another benzodiazepine). Most patients with catatonia will respond significantly to this within the first 15–30 minutes. If no change is observed after the first dose, then a second dose is given and the patient is re-examined. If the patient responds to the lorazepam challenge, then lorazepam can be scheduled at interval doses until the catatonia resolves. The lorazepam must be tapered slowly; otherwise, the catatonic symptoms may return. The underlying cause of the catatonia should also be treated during this time. If the catatonia has not resolved within a week, then ECT can be used to reverse the symptoms. ECT in combination with benzodiazepines is used to treat malignant catatonia. In France, zolpidem has also been used in diagnosis, and response may occur within the same time period. Ultimately the underlying cause needs to be treated. Electroconvulsive therapy (ECT) is a well-acknowledged, effective treatment for catatonia. ECT has also shown favorable outcomes in patients with chronic catatonia.
However, it has been pointed out that further high-quality randomized controlled trials are needed to evaluate the efficacy, tolerance, and protocols of ECT in catatonia. Antipsychotics should be used with care, as they can worsen catatonia and are the cause of neuroleptic malignant syndrome, a dangerous condition that can mimic catatonia and requires immediate discontinuation of the antipsychotic. A recent systematic review found evidence that clozapine works better than other antipsychotics for treating catatonia. Excessive glutamate activity is believed to be involved in catatonia; when first-line treatment options fail, NMDA antagonists such as amantadine or memantine may be used. Amantadine may have an increased incidence of tolerance with prolonged use and can cause psychosis, due to its additional effects on the dopamine system. Memantine has a more targeted pharmacological profile for the glutamate system and a reduced incidence of psychosis, and may therefore be preferred for individuals who cannot tolerate amantadine. Topiramate is another treatment option for resistant catatonia; it produces its therapeutic effects through glutamate antagonism via modulation of AMPA receptors. Prognosis Patients who experience an episode of catatonia are more likely to experience further episodes. Treatment response rates for patients with catatonia are 50–70%, and these patients have a good prognosis. However, failure to respond to medication indicates a very poor prognosis. Many of these patients will require long-term and continuous mental health care. For patients with catatonia with underlying schizophrenia, the prognosis is much poorer. Epidemiology Catatonia has been mostly studied in acutely ill psychiatric patients. Catatonia frequently goes unrecognized, leading to the belief that the syndrome is rare; however, this is not true, and prevalence has been reported to be as high as 10% in patients with acute psychiatric illnesses. One large population estimate has suggested that the incidence of catatonia is 10.6 episodes per 100 000 person-years. It occurs in males and females in approximately equal numbers. 21–46% of all catatonia cases can be attributed to a general medical condition. History Reports of stupor-like and catatonia-like states abound in the history of psychiatry. After the middle of the 19th century there was an increase in interest in the motor disorders accompanying madness, culminating in the publication by Karl Ludwig Kahlbaum in 1874 of Die Katatonie oder das Spannungsirresein (Catatonia or Tension Insanity). See also Akinetic mutism Autistic catatonia Awakenings (1990 biopic about catatonic patients, based on Oliver Sacks's book of the same name) Blank expression Botulism Disorganized schizophrenia Homecoming (features catatonia as a main plot point) Karolina Olsson Oneiroid syndrome Paranoid schizophrenia Persistent vegetative state Resignation syndrome Sensory overload Tonic immobility Sleep paralysis References External links Catatonia in DSM-5 Encyclopedia of Mental Disorders – Catatonic Disorders "Schizophrenia: Catatonic Type" video by Heinz Edgar Lehmann, 1952 Bush-Francis Catatonia Rating Scale Mood disorders Schizophrenia Psychopathological syndromes
5244
https://en.wikipedia.org/wiki/Cipher
Cipher
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography. Codes generally substitute different-length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning with another. Words and phrases can be coded as letters or numbers. Codes typically have direct meaning from input to key. Codes primarily function to save time. Ciphers are algorithmic: the given input must be run through the cipher's process to be encrypted or decrypted. Ciphers are commonly used to encrypt written information. Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it. The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext. Most modern ciphers can be categorized in several ways: by whether they work on blocks of symbols, usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers); and by whether the same key is used for both encryption and decryption (symmetric key algorithms), or a different key is used for each (asymmetric key algorithms). If the algorithm is symmetric, the key must be known to the recipient and sender and to no one else. If the algorithm is an asymmetric one, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property and one of the keys may be made public without loss of confidentiality. Etymology Originating from the Arabic word for zero صفر (sifr), the word "cipher" spread to Europe as part of the Arabic numeral system during the Middle Ages. The Roman numeral system lacked the concept of zero, and this limited advances in mathematics. In this transition, the word was adopted into Medieval Latin as cifra, and then into Middle French as cifre. This eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood. The term cipher was later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to "ciphers".
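Before the formal comparison of codes and ciphers in the next section, a minimal sketch may help fix the two ideas just introduced: a code substitutes a codebook entry for a whole meaning, while a cipher applies a keyed algorithm to the individual characters of the plaintext. The sketch below is written in Python purely for illustration; the codebook's "UQJHSE" entry mirrors the example above, while the second codebook entry, the function names, and the key values are hypothetical.

```python
# A code: a codebook maps whole meanings (words/phrases) to code groups.
# The "UQJHSE" entry mirrors the example in the text; the other entry is made up.
CODEBOOK = {
    "Proceed to the following coordinates.": "UQJHSE",
    "Hold your position.": "RRTZLM",
}

def encode(message: str) -> str:
    """Look the whole message up in the codebook (a code, not a cipher)."""
    return CODEBOOK[message]

# A cipher: an algorithm applied character by character, parameterized by a key.
def shift_encrypt(plaintext: str, key: int) -> str:
    """Encrypt letters with a simple shift cipher; the key selects the shift."""
    out = []
    for ch in plaintext.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + key) % 26 + ord("A")))
        else:
            out.append(ch)  # leave spaces and punctuation unchanged
    return "".join(out)

if __name__ == "__main__":
    print(encode("Proceed to the following coordinates."))  # UQJHSE
    # The same plaintext encrypts differently under different keys.
    print(shift_encrypt("PROCEED TO THE FOLLOWING COORDINATES", 3))
    print(shift_encrypt("PROCEED TO THE FOLLOWING COORDINATES", 11))
```

Changing the key changes the ciphertext without changing the algorithm, which is exactly the role of the key described above.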
Versus codes In casual contexts, "code" and "cipher" can typically be used interchangeably; however, the technical usages of the words refer to different concepts. Codes contain meaning; words and phrases are assigned to numbers or symbols, creating a shorter message. An example of this is the commercial telegraph code, which was used to shorten long telegraph messages which resulted from entering into commercial contracts using exchanges of telegrams. Another example is given by whole word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way written Japanese utilizes Kanji (meaning Chinese characters in Japanese) characters to supplement the native Japanese characters representing syllables. An example using the English language with Kanji could be to replace "The quick brown fox jumps over the lazy dog" by "The quick brown 狐 jumps 上 the lazy 犬". Stenographers sometimes use specific symbols to abbreviate whole words. Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. In some cases the terms codes and ciphers are used synonymously with substitution and transposition, respectively. Historically, cryptography was split into a dichotomy of codes and ciphers, while coding had its own terminology analogous to that of ciphers: "encoding, codetext, decoding" and so on. However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique. Types There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys. Historical The Caesar Cipher is one of the earliest known cryptographic systems. Julius Caesar used a cipher that shifts each letter in the alphabet three places, wrapping the letters at the end of the alphabet around to the front, to write to Marcus Tullius Cicero in approximately 50 BC.[11] Historical pen and paper ciphers used in the past are sometimes known as classical ciphers. They include simple substitution ciphers (such as ROT13) and transposition ciphers (such as a Rail Fence Cipher). For example, "GOOD DOG" can be encrypted as "PLLX XLP" where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs. William Shakespeare often used the concept of ciphers in his writing to symbolize nothingness. In Shakespeare's Henry V, he relates one of the accounting methods that brought the Arabic numeral system and zero to Europe to the human imagination. The actors who perform this play were not at the battles of Henry V's reign, so they represent absence. In another sense, ciphers are important to people who work with numbers, but they do not hold value. Shakespeare used this concept to outline how those who counted and identified the dead from the battles used that information as a political weapon, furthering class biases and xenophobia.
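The classical examples above can be reproduced in a few lines. The sketch below (Python, for illustration only) implements the letter substitution behind encrypting "GOOD DOG" as "PLLX XLP", together with a generic rail fence transposition; since the text does not say which transposition produced "DGOGDOO", the rail fence output here is simply one possible rearrangement of the same letters.

```python
# Monoalphabetic substitution matching the article's example:
# "L" substitutes for "O", "P" for "G", and "X" for "D".
SUBSTITUTION = {"G": "P", "O": "L", "D": "X"}

def substitute(plaintext: str) -> str:
    """Replace each letter via the substitution table; keep other characters."""
    return "".join(SUBSTITUTION.get(ch, ch) for ch in plaintext)

def rail_fence(plaintext: str, rails: int) -> str:
    """A simple transposition: write the letters in a zigzag over `rails` rows
    (spaces removed first), then read the rows off in order."""
    letters = plaintext.replace(" ", "")
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for ch in letters:
        rows[row].append(ch)
        if row == 0:
            step = 1
        elif row == rails - 1:
            step = -1
        row += step
    return "".join("".join(r) for r in rows)

if __name__ == "__main__":
    print(substitute("GOOD DOG"))     # PLLX XLP
    print(rail_fence("GOOD DOG", 2))  # one possible transposition of the letters
```

Because such ciphers preserve letter frequencies or merely rearrange the letters, they are easy to break, as noted above.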
In the 1640s, the Parliamentarian commander, Edward Montagu, 2nd Earl of Manchester, developed ciphers to send coded messages to his allies during the English Civil War. Simple ciphers were replaced by polyalphabetic substitution ciphers (such as the Vigenère) which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF" where "L", "S", and "W" substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen and paper encryption are easy to crack. It is possible, though, to create a secure pen and paper cipher based on a one-time pad, but the usual disadvantages of one-time pads apply. During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack these encryption methods. Modern Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data. By type of key used, ciphers are divided into symmetric key algorithms (Private-key cryptography), where the same key is used for encryption and decryption, and asymmetric key algorithms (Public-key cryptography), where two different keys are used for encryption and decryption. In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The design of AES (Advanced Encryption Standard) aimed to overcome the flaws in the design of DES (Data Encryption Standard). AES's designers claim that the common means of modern cipher cryptanalytic attacks are ineffective against AES due to its design structure.[12] Ciphers can be distinguished into two types by the type of input data: block ciphers, which encrypt blocks of data of fixed size, and stream ciphers, which encrypt continuous streams of data. Key size and vulnerability In a pure mathematical attack (i.e., lacking any other information to help break a cipher), two factors above all count: Computational power available, i.e., the computing power which can be brought to bear on the problem. It is important to note that the average performance/capacity of a single computer is not the only factor to consider. An adversary can use multiple computers at once, for instance, to increase the speed of exhaustive search for a key (i.e., a "brute force" attack) substantially. Key size, i.e., the size of the key used to encrypt a message. As the key size increases, so does the complexity of exhaustive search, to the point where it becomes impractical to crack the encryption directly. Since the desired effect is computational difficulty, in theory one would choose an algorithm and desired difficulty level, and thus decide the key length accordingly.
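To give a rough sense of the key-size arithmetic described above, the sketch below (Python, for illustration) estimates how long an exhaustive key search would take for several key lengths. The assumed rate of one trillion guesses per second is a hypothetical figure chosen only to show the exponential growth of the search effort, not a claim about any real adversary's hardware.

```python
# Exhaustive ("brute force") key search: a k-bit key gives 2**k possible keys.
# The guessing rate below is a hypothetical assumption for illustration only.
GUESSES_PER_SECOND = 10**12          # assumed: one trillion keys tried per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def brute_force_years(key_bits: int) -> float:
    """Average years to find a key by exhaustive search (half the key space)."""
    keyspace = 2 ** key_bits
    return (keyspace / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

if __name__ == "__main__":
    for bits in (40, 56, 80, 128, 256):
        print(f"{bits:3d}-bit key: ~{brute_force_years(bits):.3e} years on average")
```

Adding one bit to the key doubles the size of the key space, so each additional bit doubles the average time an exhaustive search would take.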
An example of this process can be found at Key Length, which uses multiple reports to suggest that a symmetric cipher with 128-bit keys, an asymmetric cipher with 3072-bit keys, and an elliptic curve cipher with 256-bit keys all have similar difficulty at present. Claude Shannon proved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext, and used only once: one-time pad. See also Autokey cipher Cover-coding Encryption software List of ciphertexts Steganography Telegraph code Notes References Richard J. Aldrich, GCHQ: The Uncensored Story of Britain's Most Secret Intelligence Agency, HarperCollins, July 2010. Helen Fouché Gaines, "Cryptanalysis", 1939, Dover. Ibrahim A. Al-Kadi, "The origins of cryptology: The Arab contributions", Cryptologia, 16(2) (April 1992) pp. 97–126. David Kahn, The Codebreakers - The Story of Secret Writing (1967) David A. King, The ciphers of the monks - A forgotten number notation of the Middle Ages, Stuttgart: Franz Steiner, 2001 Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, 1966. William Stallings, Cryptography and Network Security: Principles and Practices, 4th Edition. "Ciphers vs. Codes (Article) | Cryptography." Khan Academy, Khan Academy, https://www.khanacademy.org/computing/computer-science/cryptography/ciphers/a/ciphers-vs-codes. Caldwell, William Casey. "Shakespeare's Henry V and the Ciphers of History." SEL Studies in English Literature, 1500-1900, vol. 61, no. 2, 2021, pp. 241–68. EBSCOhost. Luciano, Dennis, and Gordon Prichett. "Cryptology: From Caesar Ciphers to Public-Key Cryptosystems." The College Mathematics Journal, vol. 18, no. 1, 1987, pp. 2–17. JSTOR, https://doi.org/10.2307/2686311. Accessed 19 Feb. 2023. Ho Yean Li, et al. "Heuristic Cryptanalysis of Classical and Modern Ciphers." 2005 13th IEEE International Conference on Networks, jointly held with the 2005 IEEE 7th Malaysia International Conference on Communications, vol. 2, Jan. 2005. EBSCOhost. External links Kish cypher Cryptography
5247
https://en.wikipedia.org/wiki/Country%20music
Country music
Country (also called country and western) is a music genre originating in the Southern and Southwestern United States. First produced in the 1920s, country music primarily focuses on working class Americans and blue-collar American life. Country music is known for its ballads and dance tunes (also known as "honky-tonk music") with simple form, folk lyrics, and harmonies generally accompanied by instruments such as banjos, fiddles, harmonicas, and many types of guitar (including acoustic, electric, steel, and resonator guitars). Though it is primarily rooted in various forms of American folk music, such as old-time music and Appalachian music, many other traditions, including African-American, Mexican, Irish, and Hawaiian music, have also had a formative influence on the genre. Blues modes have been used extensively throughout its history as well. The term country music gained popularity in the 1940s in preference to hillbilly music; it came to encompass western music, which evolved parallel to hillbilly music from similar roots, in the mid-20th century. Contemporary styles of western music include Texas country, red dirt, and Hispano- and Mexican American-led Tejano and New Mexico music, all extant alongside longstanding indigenous traditions. In 2009, in the United States, country music was the most listened to rush hour radio genre during the evening commute, and second most popular in the morning commute. Origins The main components of the modern country music style date back to music traditions throughout the Southern United States and Southwestern United States, while its place in American popular music was established in the 1920s during the early days of music recording. According to country historian Bill C. Malone, country music was "introduced to the world as a Southern phenomenon." Migration into the southern Appalachian Mountains, of the Southeastern United States, brought the folk music and instruments of Europe, Africa, and the Mediterranean Basin along with it for nearly 300 years, which developed into Appalachian music. As the country expanded westward, the Mississippi River and Louisiana became a crossroads for country music, giving rise to Cajun music. In the Southwestern United States, it was the Rocky Mountains, American frontier, and Rio Grande that acted as a similar backdrop for Native American, Mexican, and cowboy ballads, which resulted in New Mexico music and the development of western music, and its directly related Red Dirt, Texas country, and Tejano music styles. In the Asia-Pacific, the steel guitar sound of country music has its provenance in the music of Hawaii. Role of East Tennessee The U.S. Congress has formally recognized Bristol, Tennessee as the "Birthplace of Country Music", based on the historic Bristol recording sessions of 1927. Since 2014, the city has been home to the Birthplace of Country Music Museum. Historians have also noted the influence of the less-known Johnson City sessions of 1928 and 1929, and the Knoxville sessions of 1929 and 1930. In addition, the Mountain City Fiddlers Convention, held in 1925, helped to inspire modern country music. Before these, pioneer settlers, in the Great Smoky Mountains region, had developed a rich musical heritage. Generations The first generation emerged in the 1920s, with Atlanta's music scene playing a major role in launching country's earliest recording artists. 
James Gideon "Gid" Tanner (1885–1960) was an American old-time fiddler and one of the earliest stars of what would come to be known as country music. His band, the Skillet Lickers, was one of the most innovative and influential string bands of the 1920s and 1930s. Its most notable members were Clayton McMichen (fiddle and vocal), Dan Hornsby (vocals), Riley Puckett (guitar and vocal) and Robert Lee Sweat (guitar). New York City record label Okeh Records began issuing hillbilly music records by Fiddlin' John Carson as early as 1923, followed by Columbia Records (series 15000D "Old Familiar Tunes") (Samantha Bumgarner) in 1924, and RCA Victor Records in 1927 with the first famous pioneers of the genre Jimmie Rodgers, who is widely considered the "Father of Country Music", and the first family of country music the Carter Family. Many "hillbilly" musicians recorded blues songs throughout the 1920s. During the second generation (1930s–1940s), radio became a popular source of entertainment, and "barn dance" shows featuring country music were started all over the South, as far north as Chicago, and as far west as California. The most important was the Grand Ole Opry, aired starting in 1925 by WSM in Nashville and continuing to the present day. During the 1930s and 1940s, cowboy songs, or western music, which had been recorded since the 1920s, were popularized by films made in Hollywood, many featuring Gene Autry, who was known as king of the "singing cowboys", and Hank Williams. Bob Wills was another country musician from the Lower Great Plains who had become very popular as the leader of a "hot string band," and who also appeared in Hollywood westerns. His mix of country and jazz, which started out as dance hall music, would become known as western swing. Wills was one of the first country musicians known to have added an electric guitar to his band, in 1938. Country musicians began recording boogie in 1939, shortly after it had been played at Carnegie Hall, when Johnny Barfield recorded "Boogie Woogie". The third generation (1950s–1960s) started at the end of World War II with "mountaineer" string band music known as bluegrass, which emerged when Bill Monroe, along with Lester Flatt and Earl Scruggs were introduced by Roy Acuff at the Grand Ole Opry. Gospel music remained a popular component of country music. The Native American, Hispano, and American frontier music of the Southwestern United States and Northern Mexico, became popular among poor communities in New Mexico, Oklahoma, and Texas; the basic ensemble consisted of classical guitar, bass guitar, dobro or steel guitar, though some larger ensembles featured electric guitars, trumpets, keyboards (especially the honky-tonk piano, a type of tack piano), banjos, and drums. By the early 1950s it blended with rock and roll, becoming the rockabilly sound produced by Sam Phillips, Norman Petty, and Bob Keane. Musicians like Elvis Presley, Bo Diddley, Buddy Holly, Jerry Lee Lewis, Ritchie Valens, Carl Perkins, Roy Orbison, and Johnny Cash emerged as enduring representatives of the style. Beginning in the mid-1950s, and reaching its peak during the early 1960s, the Nashville sound turned country music into a multimillion-dollar industry centered in Nashville, Tennessee; Patsy Cline and Jim Reeves were two of the most broadly popular Nashville sound artists, and their deaths in separate plane crashes in the early 1960s were a factor in the genre's decline. 
Starting in the 1950s to the mid-1960s, western singer-songwriters such as Michael Martin Murphey and Marty Robbins rose in prominence as did others, throughout western music traditions, like New Mexico music's Al Hurricane. The late 1960s in American music produced a unique blend as a result of traditionalist backlash within separate genres. In the aftermath of the British Invasion, many desired a return to the "old values" of rock n' roll. At the same time there was a lack of enthusiasm in the country sector for Nashville-produced music. What resulted was a crossbred genre known as country rock. Fourth generation (1970s–1980s) music included outlaw country with roots in the Bakersfield sound, and country pop with roots in the countrypolitan, folk music and soft rock. Between 1972 and 1975 singer/guitarist John Denver released a series of hugely successful songs blending country and folk-rock musical styles. By the mid-1970s, Texas country and Tejano music gained popularity with performers like Freddie Fender. During the early 1980s country artists continued to see their records perform well on the pop charts. In 1980 a style of "neocountry disco music" was popularized. During the mid-1980s a group of new artists began to emerge who rejected the more polished country-pop sound that had been prominent on radio and the charts in favor of more traditional "back-to-basics" production. During the fifth generation (1990s), neotraditionalists and stadium country acts prospered. The sixth generation (2000s–present) has seen a certain amount of diversification in regard to country music styles. It has also, however, seen a shift into patriotism and conservative politics since 9/11, though such themes are less prevalent in more modern trends. The influence of rock music in country has become more overt during the late 2000s and early 2010s. Most of the best-selling country songs of this era were those by Lady A, Florida Georgia Line, Carrie Underwood, and Taylor Swift. Hip hop also made its mark on country music with the emergence of country rap. History First generation (1920s) The first commercial recordings of what was considered instrumental music in the traditional country style were "Arkansas Traveler" and "Turkey in the Straw" by fiddlers Henry Gilliland & A.C. (Eck) Robertson on June 30, 1922, for Victor Records and released in April 1923. Columbia Records began issuing records with "hillbilly" music (series 15000D "Old Familiar Tunes") as early as 1924. The first commercial recording of what is widely considered to be the first country song featuring vocals and lyrics was Fiddlin' John Carson with "Little Log Cabin in the Lane" for Okeh Records on June 14, 1923. Vernon Dalhart was the first country singer to have a nationwide hit in May 1924 with "Wreck of the Old 97". The flip side of the record was "Lonesome Road Blues", which also became very popular. In April 1924, "Aunt" Samantha Bumgarner and Eva Davis became the first female musicians to record and release country songs. Many of the early country musicians, such as the yodeler Cliff Carlisle, recorded blues songs into the 1930s. Other important early recording artists were Riley Puckett, Don Richardson, Fiddlin' John Carson, Uncle Dave Macon, Al Hopkins, Ernest V. Stoneman, Blind Alfred Reed, Charlie Poole and the North Carolina Ramblers and the Skillet Lickers. The steel guitar entered country music as early as 1922, when Jimmie Tarlton met famed Hawaiian guitarist Frank Ferera on the West Coast. 
Jimmie Rodgers and the Carter Family are widely considered to be important early country musicians. From Scott County, Virginia, the Carters had learned sight reading of hymnals and sheet music using solfege. Their songs were first captured at a historic recording session in Bristol, Tennessee, on August 1, 1927, where Ralph Peer was the talent scout and sound recordist. A scene in the movie O Brother, Where Art Thou? depicts a similar occurrence in the same timeframe. Rodgers fused hillbilly country, gospel, jazz, blues, pop, cowboy, and folk, and many of his best songs were his compositions, including "Blue Yodel", which sold over a million records and established Rodgers as the premier singer of early country music. Beginning in 1927, and for the next 17 years, the Carters recorded some 300 old-time ballads, traditional tunes, country songs and gospel hymns, all representative of America's southeastern folklore and heritage. Maybelle Carter went on to continue the family tradition with her daughters as The Carter Sisters; her daughter June would marry (in succession) Carl Smith, Rip Nix and Johnny Cash, having children with each who would also become country singers. Second generation (1930s–1940s) Record sales declined during the Great Depression, but radio became a popular source of entertainment, and "barn dance" shows featuring country music were started by radio stations all over the South, as far north as Chicago, and as far west as California. The most important was the Grand Ole Opry, aired starting in 1925 by WSM in Nashville and continuing to the present day. Some of the early stars on the Opry were Uncle Dave Macon, Roy Acuff and African American harmonica player DeFord Bailey. WSM's 50,000-watt signal (in 1934) could often be heard across the country. Many musicians performed and recorded songs in any number of styles. Moon Mullican, for example, played western swing but also recorded songs that can be called rockabilly. Between 1947 and 1949, country crooner Eddy Arnold placed eight songs in the top 10. From 1945 to 1955 Jenny Lou Carson was one of the most prolific songwriters in country music. Singing cowboys and western swing In the 1930s and 1940s, cowboy songs, or western music, which had been recorded since the 1920s, were popularized by films made in Hollywood. Some of the popular singing cowboys from the era were Gene Autry, the Sons of the Pioneers, and Roy Rogers. Country music and western music were frequently played together on the same radio stations, hence the term country and western music, despite country and western being two distinct genres. Cowgirls contributed to the sound in various family groups. Patsy Montana opened the door for female artists with her history-making song "I Want To Be a Cowboy's Sweetheart". This would begin a movement toward opportunities for women to have successful solo careers. Bob Wills was another country musician from the Lower Great Plains who had become very popular as the leader of a "hot string band," and who also appeared in Hollywood westerns. His mix of country and jazz, which started out as dance hall music, would become known as western swing. Cliff Bruner, Moon Mullican, Milton Brown and Adolph Hofner were other early western swing pioneers. Spade Cooley and Tex Williams also had very popular bands and appeared in films. At its height, western swing rivaled the popularity of big band swing music. 
Changing instrumentation Drums were scorned by early country musicians as being "too loud" and "not pure", but by 1935 western swing big band leader Bob Wills had added drums to the Texas Playboys. In the mid-1940s, the Grand Ole Opry did not want the Playboys' drummer to appear on stage. Although drums were commonly used by rockabilly groups by 1955, the less-conservative-than-the-Grand-Ole-Opry Louisiana Hayride kept its infrequently used drummer back stage as late as 1956. By the early 1960s, however, it was rare for a country band not to have a drummer. Bob Wills was one of the first country musicians known to have added an electric guitar to his band, in 1938. A decade later (1948) Arthur Smith achieved top 10 US country chart success with his MGM Records recording of "Guitar Boogie", which crossed over to the US pop chart, introducing many people to the potential of the electric guitar. For several decades Nashville session players preferred the warm tones of the Gibson and Gretsch archtop electrics, but a "hot" Fender style, using guitars which became available beginning in the early 1950s, eventually prevailed as the signature guitar sound of country. Hillbilly boogie Country musicians began recording boogie in 1939, shortly after it had been played at Carnegie Hall, when Johnny Barfield recorded "Boogie Woogie". The trickle of what was initially called hillbilly boogie, or okie boogie (later to be renamed country boogie), became a flood beginning in late 1945. One notable release from this period was the Delmore Brothers' "Freight Train Boogie", considered to be part of the combined evolution of country music and blues towards rockabilly. In 1948, Arthur "Guitar Boogie" Smith achieved top ten US country chart success with his MGM Records recordings of "Guitar Boogie" and "Banjo Boogie", with the former crossing over to the US pop charts. Other country boogie artists included Moon Mullican, Merrill Moore and Tennessee Ernie Ford. The hillbilly boogie period lasted into the 1950s and remains one of many subgenres of country into the 21st century. Bluegrass, folk and gospel By the end of World War II, "mountaineer" string band music known as bluegrass had emerged when Bill Monroe joined with Lester Flatt and Earl Scruggs, introduced by Roy Acuff at the Grand Ole Opry. That was the ordination of bluegrass music and how Bill Monroe came to be known as the "Father of Bluegrass." Gospel music, too, remained a popular component of bluegrass and other sorts of country music. Red Foley, the biggest country star following World War II, had one of the first million-selling gospel hits ("Peace in the Valley") and also sang boogie, blues and rockabilly. In the post-war period, country music was called "folk" in the trades, and "hillbilly" within the industry. In 1944, Billboard replaced the term "hillbilly" with "folk songs and blues," and switched to "country and western" in 1949. Honky tonk Another type of stripped down and raw music with a variety of moods and a basic ensemble of guitar, bass, dobro or steel guitar (and later) drums became popular, especially among rural residents in the three states of Texhomex, those being Texas, Oklahoma, and New Mexico. It became known as honky tonk and had its roots in western swing and the ranchera music of Mexico and the border states, particularly New Mexico and Texas, together with the blues of the American South. 
Bob Wills and His Texas Playboys personified this music, which has been described as "a little bit of this, and a little bit of that, a little bit of black and a little bit of white ... just loud enough to keep you from thinking too much and to go right on ordering the whiskey." East Texan Al Dexter had a hit with "Honky Tonk Blues", and seven years later "Pistol Packin' Mama". These "honky tonk" songs were associated with barrooms, and were performed by the likes of Ernest Tubb, Kitty Wells (the first major female country solo singer), Ted Daffan, Floyd Tillman, the Maddox Brothers and Rose, Lefty Frizzell and Hank Williams; the music of these artists would later be called "traditional" country. Williams' influence in particular would prove to be enormous, inspiring many of the pioneers of rock and roll, such as Elvis Presley, Jerry Lee Lewis, Chuck Berry and Ike Turner, while providing a framework for emerging honky tonk talents like George Jones. Webb Pierce was the top-charting country artist of the 1950s, with 13 of his singles spending 113 weeks at number one. He charted 48 singles during the decade; 31 reached the top ten and 26 reached the top four. Third generation (1950s–1960s) By the early 1950s, a blend of western swing, country boogie, and honky tonk was played by most country bands, a mixture which followed in the footsteps of Gene Autry, Lydia Mendoza, Roy Rogers, and Patsy Montana. Western music, influenced by the cowboy ballads, New Mexico, Texas country and Tejano music rhythms of the Southwestern United States and Northern Mexico, reached its peak in popularity in the late 1950s, most notably with the song "El Paso", first recorded by Marty Robbins in September 1959. Western music's influence would continue to grow within the country music sphere; western musicians like Michael Martin Murphey, New Mexico music artists Al Hurricane and Antonia Apodaca, Tejano music performer Little Joe, and even folk revivalist John Denver all first rose to prominence during this time. This western music influence largely kept the music of the folk revival and folk rock from influencing the country music genre much, despite the similarity in instrumentation and origins (see, for instance, the Byrds' negative reception during their appearance on the Grand Ole Opry). The main concern was largely political: most of the folk revival was driven by progressive activists, a stark contrast to the culturally conservative audiences of country music. John Denver was perhaps the only musician to have major success in both the country and folk revival genres throughout his career; later, only a handful of artists like Burl Ives and Canadian musician Gordon Lightfoot successfully made the crossover to country after the folk revival fell out of fashion. During the mid-1950s a new style of country music became popular, eventually to be referred to as rockabilly. In 1953, the first all-country radio station was established in Lubbock, Texas. The music of the 1960s and 1970s targeted the American working class, and truckers in particular. As country radio became more popular, trucking songs like the 1963 hit song Six Days on the Road by Dave Dudley began to make up their own subgenre of country. These revamped songs sought to portray American truckers as a "new folk hero", marking a significant shift in sound from earlier country music.
The song was written by actual truckers and contained numerous references to the trucker culture of the time like "ICC" for Interstate Commerce Commission and "little white pills" as a reference to amphetamines. Starday Records in Nashville followed up on Dudley's initial success with the release of Give Me 40 Acres by the Willis Brothers. Rockabilly Rockabilly was most popular with country fans in the 1950s; one of the first rock and roll superstars was former western yodeler Bill Haley, who repurposed his Four Aces of Western Swing into a rockabilly band in the early 1950s and renamed it the Comets. Bill Haley & His Comets are credited with two of the first successful rock and roll records, "Crazy Man, Crazy" of 1953 and "Rock Around the Clock" in 1954. 1956 could be called the year of rockabilly in country music. Rockabilly was an early form of rock and roll, an upbeat combination of blues and country music. The number two, three and four songs on Billboard's charts for that year were Elvis Presley, "Heartbreak Hotel"; Johnny Cash, "I Walk the Line"; and Carl Perkins, "Blue Suede Shoes". Reflecting this success, George Jones released a rockabilly record that year under the pseudonym "Thumper Jones", wanting to capitalize on the popularity of rockabilly without alienating his traditional country base. Cash and Presley placed songs in the top 5 in 1958 with No. 3 "Guess Things Happen That Way/Come In, Stranger" by Cash, and No. 5 by Presley "Don't/I Beg of You." Presley acknowledged the influence of rhythm and blues artists and his style, saying "The colored folk been singin' and playin' it just the way I'm doin' it now, man for more years than I know." Within a few years, many rockabilly musicians returned to a more mainstream style or had defined their own unique style. Country music gained national television exposure through Ozark Jubilee on ABC-TV and radio from 1955 to 1960 from Springfield, Missouri. The program showcased top stars including several rockabilly artists, some from the Ozarks. As Webb Pierce put it in 1956, "Once upon a time, it was almost impossible to sell country music in a place like New York City. Nowadays, television takes us everywhere, and country music records and sheet music sell as well in large cities as anywhere else." The Country Music Association was founded in 1958, in part because numerous country musicians were appalled by the increased influence of rock and roll on country music. The Nashville and countrypolitan sounds Beginning in the mid-1950s, and reaching its peak during the early 1960s, the Nashville sound turned country music into a multimillion-dollar industry centered in Nashville, Tennessee. Under the direction of producers such as Chet Atkins, Bill Porter, Paul Cohen, Owen Bradley, Bob Ferguson, and later Billy Sherrill, the sound brought country music to a diverse audience and helped revive country as it emerged from a commercially fallow period. This subgenre was notable for borrowing from 1950s pop stylings: a prominent and smooth vocal, backed by a string section (violins and other orchestral strings) and vocal chorus. Instrumental soloing was de-emphasized in favor of trademark "licks". Leading artists in this genre included Jim Reeves, Skeeter Davis, Connie Smith, the Browns, Patsy Cline, and Eddy Arnold. The "slip note" piano style of session musician Floyd Cramer was an important component of this style. 
The Nashville Sound collapsed in mainstream popularity in 1964, a victim of both the British Invasion and the deaths of Reeves and Cline in separate airplane crashes. By the mid-1960s, the genre had developed into countrypolitan. Countrypolitan was aimed straight at mainstream markets, and it sold well throughout the later 1960s into the early 1970s. Top artists included Tammy Wynette, Lynn Anderson and Charlie Rich, as well as such former "hard country" artists as Ray Price and Marty Robbins. Despite the appeal of the Nashville sound, many traditional country artists emerged during this period and dominated the genre: Loretta Lynn, Merle Haggard, Buck Owens, Porter Wagoner, George Jones, and Sonny James among them. Country-soul crossover In 1962, Ray Charles surprised the pop world by turning his attention to country and western music, topping the charts and rating number three for the year on Billboard's pop chart with the "I Can't Stop Loving You" single, and recording the landmark album Modern Sounds in Country and Western Music. Bakersfield sound Another subgenre of country music grew out of hardcore honky tonk with elements of western swing and originated north-northwest of Los Angeles in Bakersfield, California, where many "Okies" and other Dust Bowl migrants had settled. Influenced by one-time West Coast residents Bob Wills and Lefty Frizzell, by 1966 it was known as the Bakersfield sound. It relied on electric instruments and amplification, in particular the Telecaster electric guitar, more than other subgenres of the country music of the era, and it can be described as having a sharp, hard, driving, no-frills, edgy flavor—hard guitars and honky-tonk harmonies. Leading practitioners of this style were Buck Owens, Merle Haggard, Tommy Collins, Dwight Yoakam, Gary Allan, and Wynn Stewart, each of whom had his own style. Ken Nelson, who had produced Owens, Haggard, and Rose Maddox, became interested in the trucking song subgenre following the success of Six Days on the Road and asked Red Simpson to record an album of trucking songs. Haggard's White Line Fever was also part of the trucking subgenre. Western music merges with country The country music scene of the 1940s until the 1970s was largely dominated by western music influences, so much so that the genre began to be called "country and western". Even today, cowboy and frontier values continue to play a role in the larger country music genre, with western wear, cowboy boots, and cowboy hats continuing to be in fashion for country artists. West of the Mississippi River, many of these western genres continue to flourish, including the Red Dirt of Oklahoma, New Mexico music of New Mexico, and both Texas country music and Tejano music of Texas. During the 1950s until the early 1970s, the latter part of the western heyday in country music, many of these genres featured popular artists that continue to influence both their distinctive genres and larger country music. Red Dirt featured Bob Childers and Steve Ripley; New Mexico music, Al Hurricane, Al Hurricane Jr., and Antonia Apodaca; and the Texas scenes, Willie Nelson, Freddie Fender, Johnny Rodriguez, and Little Joe. As Outlaw country music emerged as a subgenre in its own right, Red Dirt, New Mexico, Texas country, and Tejano grew in popularity as a part of the Outlaw country movement.
Originating in the bars, fiestas, and honky-tonks of Oklahoma, New Mexico, and Texas, their music supplemented outlaw country's singer-songwriter tradition as well as 21st-century rock-inspired alternative country and hip hop-inspired country rap artists.
Fourth generation (1970s–1980s)
Outlaw movement
Outlaw country was derived from traditional western styles, including Red Dirt, New Mexico, Texas country, Tejano, and the honky-tonk music of the late 1950s and 1960s. Songs such as "Ring of Fire" (1963), popularized by Johnny Cash, show clear influences from the likes of Al Hurricane and Little Joe. This influence culminated with artists such as Ray Price (whose band, the "Cherokee Cowboys", included Willie Nelson and Roger Miller) and, mixed with the anger of an alienated subculture of the nation during the period, produced a collection of musicians that came to be known as the outlaw movement, which revolutionized the genre of country music in the early 1970s. "After I left Nashville (the early 70s), I wanted to relax and play the music that I wanted to play, and just stay around Texas, maybe Oklahoma. Waylon and I had that outlaw image going, and when it caught on at colleges and we started selling records, we were O.K. The whole outlaw thing, it had nothing to do with the music, it was something that got written in an article, and the young people said, 'Well, that's pretty cool.' And started listening." (Willie Nelson) The term outlaw country is traditionally associated with Willie Nelson, Jerry Jeff Walker, Hank Williams, Jr., Merle Haggard, Waylon Jennings and Joe Ely. It was encapsulated in the 1976 album Wanted! The Outlaws. Though the outlaw movement as a cultural fad had died down after the late 1970s (with Jennings noting in 1978 that it had gotten out of hand and led to real-life legal scrutiny), many western and outlaw country music artists maintained their popularity during the 1980s by forming supergroups, such as The Highwaymen, Texas Tornados, and Bandido.
Country pop
Country pop, or soft pop, with roots in the countrypolitan sound, folk music, and soft rock, is a subgenre that first emerged in the 1970s. Although the term first referred to country music songs and artists that crossed over to top 40 radio, country pop acts are now more likely to cross over to adult contemporary music. It started with pop music singers like Glen Campbell, Bobbie Gentry, John Denver, Olivia Newton-John, Anne Murray, B. J. Thomas, the Bellamy Brothers, and Linda Ronstadt having hits on the country charts. Between 1972 and 1975, singer/guitarist John Denver released a series of hugely successful songs blending country and folk-rock musical styles ("Rocky Mountain High", "Sunshine on My Shoulders", "Annie's Song", "Thank God I'm a Country Boy", and "I'm Sorry"), and was named Country Music Entertainer of the Year in 1975. The year before, Olivia Newton-John, an Australian pop singer, won the "Best Female Country Vocal Performance" Grammy as well as the Country Music Association's most coveted award for females, "Female Vocalist of the Year". In response, George Jones, Tammy Wynette, Jean Shepard and other traditional Nashville country artists dissatisfied with the new trend formed the short-lived "Association of Country Entertainers" in 1974; the ACE soon unraveled in the wake of Jones and Wynette's bitter divorce and Shepard's realization that most others in the industry lacked her passion for the movement.
During the mid-1970s, Dolly Parton, a successful mainstream country artist since the late 1960s, mounted a high-profile campaign to cross over to pop music, culminating in her 1977 hit "Here You Come Again", which topped the U.S. country singles chart and also reached No. 3 on the pop singles chart. Parton's male counterpart, Kenny Rogers, came from the opposite direction, aiming his music at the country charts after a successful career in pop, rock and folk music with the First Edition; he achieved success the same year with "Lucille", which topped the country charts, reached No. 5 on the U.S. pop singles chart, and reached No. 1 on the British all-genre chart. Parton and Rogers would both continue to have success on both country and pop charts simultaneously, well into the 1980s. Country music propelled Rogers' career: he became a three-time Grammy Award winner and a six-time Country Music Association Awards winner, and sold more than 50 million albums in the US. His song "The Gambler" inspired several TV films, with Rogers as the main character. Artists like Crystal Gayle, Ronnie Milsap and Barbara Mandrell would also find success on the pop charts with their records. In 1975, author Paul Hemphill stated in the Saturday Evening Post, "Country music isn't really country anymore; it is a hybrid of nearly every form of popular music in America." During the early 1980s, country artists continued to see their records perform well on the pop charts. Willie Nelson and Juice Newton each had two songs in the top 5 of the Billboard Hot 100 in the early eighties: Nelson charted "Always on My Mind" (#5, 1982) and "To All the Girls I've Loved Before" (#5, 1984, a duet with Julio Iglesias), and Newton achieved success with "Queen of Hearts" (#2, 1981) and "Angel of the Morning" (#4, 1981). Four country songs topped the Billboard Hot 100 in the 1980s: "Lady" by Kenny Rogers, from the late fall of 1980; "9 to 5" by Dolly Parton and "I Love a Rainy Night" by Eddie Rabbitt (these two back-to-back at the top in early 1981); and "Islands in the Stream", a 1983 duet by Dolly Parton and Kenny Rogers, a pop-country crossover hit written by Barry, Robin, and Maurice Gibb of the Bee Gees. Newton's "Queen of Hearts" almost reached No. 1, but was kept out of the spot by the pop ballad juggernaut "Endless Love" by Diana Ross and Lionel Richie. The move of country music toward neotraditional styles led to a marked decline in country/pop crossovers in the late 1980s, and only one song in that period, Roy Orbison's "You Got It" from 1989, made the top 10 of both the Billboard Hot Country Singles and Hot 100 charts, due largely to a revival of interest in Orbison after his sudden death. The only song with substantial country airplay to reach number one on the pop charts in the late 1980s was "At This Moment" by Billy Vera and the Beaters, an R&B song with slide guitar embellishment that appeared at number 42 on the country charts from minor crossover airplay. The record-setting, multi-platinum group Alabama was named Artist of the Decade for the 1980s by the Academy of Country Music.
Country rock
Country rock is a genre that started in the 1960s but became prominent in the 1970s. The late 1960s in American music produced a unique blend as a result of traditionalist backlash within separate genres. In the aftermath of the British Invasion, many desired a return to the "old values" of rock 'n' roll.
At the same time there was a lack of enthusiasm in the country sector for Nashville-produced music. What resulted was a crossbred genre known as country rock. Early innovators in this new style of music in the 1960s and 1970s included Bob Dylan, who was the first to turn to country music with his 1967 album John Wesley Harding (and even more so with that album's follow-up, Nashville Skyline), followed by Gene Clark, Clark's former band the Byrds (with Gram Parsons on Sweetheart of the Rodeo) and its spin-off the Flying Burrito Brothers (also featuring Gram Parsons), guitarist Clarence White, Michael Nesmith (the Monkees and the First National Band), the Grateful Dead, Neil Young, Commander Cody, the Allman Brothers Band, Charlie Daniels, the Marshall Tucker Band, Poco, Buffalo Springfield, Stephen Stills' band Manassas and Eagles, among many others, even the former folk music duo Ian & Sylvia, who formed Great Speckled Bird in 1969. The Eagles would become the most successful of these country rock acts, and their compilation album Their Greatest Hits (1971–1975) remains the second-best-selling album in the US with 29 million copies sold. The Rolling Stones also got into the act with songs like "Dead Flowers"; the original recording of "Honky Tonk Women" was performed in a country style, but it was subsequently re-recorded in a hard rock style for the single version, and the band's preferred country version was later released on the album Let It Bleed under the title "Country Honk". Gram Parsons, described by AllMusic as the "father of country-rock", was acclaimed in the early 1970s for the purity of his work and his appreciation for aspects of traditional country music. Though his career was cut tragically short by his death in 1973, his legacy was carried on by his protégé and duet partner Emmylou Harris; Harris would release her debut solo album in 1975, an amalgamation of country, rock and roll, folk, blues and pop. Subsequent to the initial blending of the two polar opposite genres, other offshoots soon resulted, including Southern rock, heartland rock and, in more recent years, alternative country. In the decades that followed, artists such as Juice Newton, Alabama, Hank Williams, Jr. (and, to an even greater extent, Hank Williams III), Gary Allan, Shania Twain, Brooks & Dunn, Faith Hill, Garth Brooks, Dwight Yoakam, Steve Earle, Dolly Parton, Rosanne Cash and Linda Ronstadt moved country further towards rock influence.
Neocountry
In 1980, a style of "neocountry disco music" was popularized by the film Urban Cowboy. It was during this time that a glut of pop-country crossover artists began appearing on the country charts: former pop stars Bill Medley (of the Righteous Brothers), "England Dan" Seals (of England Dan and John Ford Coley), Tom Jones, and Merrill Osmond (both alone and with some of his brothers; his younger sister Marie Osmond was already an established country star) all recorded significant country hits in the early 1980s. Sales in record stores rocketed to $250 million in 1981, and by 1984, 900 radio stations were programming country or neocountry pop full-time. As with most sudden trends, however, by 1984 sales had dropped below 1979 figures.
Truck driving country
Truck driving country music is a genre of country music and is a fusion of honky-tonk, country rock and the Bakersfield sound. It has the tempo of country rock and the emotion of honky-tonk, and its lyrics focus on a truck driver's lifestyle. Truck driving country songs often deal with the profession of trucking and love.
Well-known artists who sing truck driving country include Dave Dudley, Red Sovine, Dick Curless, Red Simpson, Del Reeves, the Willis Brothers and Jerry Reed, with C. W. McCall and Cledus Maggard (pseudonyms of Bill Fries and Jay Huguely, respectively) being more humorous entries in the subgenre. Dudley is known as the father of truck driving country.
Neotraditionalist movement
During the mid-1980s, a group of new artists began to emerge who rejected the more polished country-pop sound that had been prominent on radio and the charts in favor of a more traditional, "back-to-basics" production. Many of the artists during the latter half of the 1980s drew on traditional honky-tonk, bluegrass, folk and western swing. Artists who typified this sound included Travis Tritt, Reba McEntire, George Strait, Keith Whitley, Alan Jackson, John Anderson, Patty Loveless, Kathy Mattea, Randy Travis, Dwight Yoakam, Clint Black, Ricky Skaggs, and the Judds.
Fifth generation (1990s)
Country music was aided by the U.S. Federal Communications Commission's (FCC) Docket 80–90, which led to a significant expansion of FM radio in the 1980s by adding numerous higher-fidelity FM signals to rural and suburban areas. At this point, country music was mainly heard on rural AM radio stations; the expansion of FM was particularly helpful to country music, which migrated to FM from the AM band as AM became dominated by talk radio (the country music stations that stayed on AM developed the classic country format for the AM audience). At the same time, beautiful music stations already in rural areas began abandoning the format (leading to its effective demise) to adopt country music as well. This wider availability of country music led producers to seek to polish their product for a wider audience. In 1990, Billboard, which had published a country music chart since the 1940s, changed the methodology it used to compile the chart: singles sales were removed, and only airplay on country radio determined a song's place on the chart. In the 1990s, country music became a worldwide phenomenon thanks to Garth Brooks, who enjoyed one of the most successful careers in popular music history, breaking records for both sales and concert attendance throughout the decade. The RIAA has certified his recordings at a combined 128× platinum, denoting roughly 113 million U.S. shipments. Other artists who experienced success during this time included Clint Black, John Michael Montgomery, Tracy Lawrence, Tim McGraw, Kenny Chesney, Travis Tritt, Alan Jackson and the newly formed duo of Brooks & Dunn; George Strait, whose career began in the 1980s, also continued to have widespread success in this decade and beyond. Toby Keith began his career as a more pop-oriented country singer in the 1990s, evolving into an outlaw persona in the early 2000s with Pull My Chain and its follow-up, Unleashed.
Success of female artists
Female artists such as Reba McEntire, Patty Loveless, Faith Hill, Martina McBride, Deana Carter, LeAnn Rimes, Mindy McCready, Pam Tillis, Lorrie Morgan, Shania Twain, and Mary Chapin Carpenter all released platinum-selling albums in the 1990s. The Dixie Chicks became one of the most popular country bands in the 1990s and early 2000s. Their 1998 debut album Wide Open Spaces went on to become certified 12× platinum while their 1999 album Fly went on to become 10× platinum.
After their third album, Home, was released in 2003, the band made political news in part because of lead singer Natalie Maines's comments disparaging then-President George W. Bush while the band was overseas (Maines stated that she and her bandmates were ashamed to be from the same state as Bush, who had just commenced the Iraq War a few days prior). The comments caused a rift between the band and the country music scene, and the band's fourth (and most recent) album, 2006's Taking the Long Way, took a more rock-oriented direction; the album was commercially successful overall among non-country audiences but largely ignored among country audiences. After Taking the Long Way, the band broke up for a decade (with two of its members continuing as the Court Yard Hounds) before reuniting in 2016 and releasing new material in 2020. Canadian artist Shania Twain became the best-selling female country artist of the decade. This was primarily due to the success of her breakthrough sophomore album, 1995's The Woman in Me, which was certified 12× platinum and sold over 20 million copies worldwide, and its follow-up, 1997's Come On Over, which was certified 20× platinum and sold over 40 million copies. Come On Over became a major worldwide phenomenon and was one of the world's best-selling albums for three years (1998, 1999 and 2000); it also went on to become the best-selling country album of all time. Unlike the majority of her contemporaries, Twain enjoyed large international success that had been seen by very few country artists, before or after her. Critics have noted that Twain enjoyed much of her success because she broke free of traditional country stereotypes and incorporated elements of rock and pop into her music. In 2002, she released her successful fourth studio album, Up!, which was certified 11× platinum and sold over 15 million copies worldwide. Twain has been nominated for eighteen Grammy Awards and has won five. She was the best-paid country music star in 2016 according to Forbes, with a net worth of $27.5 million. Twain has been credited with breaking international boundaries for country music, as well as inspiring many country artists to incorporate different genres into their music in order to attract a wider audience. She is also credited with changing the way in which many female country performers would market themselves; unlike many before her, she used fashion and her sex appeal to shed the stereotypical "honky-tonk" image that most country singers had, distinguishing herself from many female country artists of the time.
Line dancing revival
In the early-to-mid-1990s, country western music was influenced by the popularity of line dancing. This influence was so great that Chet Atkins was quoted as saying, "The music has gotten pretty bad, I think. It's all that damn line dancing." By the end of the decade, however, at least one line dance choreographer complained that good country line dance music was no longer being released. In contrast, artists such as Don Williams and George Jones, who had enjoyed more or less consistent chart success through the 1970s and 1980s, suddenly had their fortunes fall rapidly around 1991 when the new chart rules took effect.
Alternative country
Country influences combined with punk rock and alternative rock to forge the "cowpunk" scene in Southern California during the 1980s, which included bands such as the Long Ryders, Lone Justice and the Beat Farmers, as well as the established punk group X, whose music had begun to include country and rockabilly influences. Simultaneously, a generation of diverse country artists outside of California emerged that rejected the perceived cultural and musical conservatism associated with Nashville's mainstream country musicians in favor of more countercultural outlaw country and the folk singer-songwriter traditions of artists such as Woody Guthrie, Gram Parsons and Bob Dylan. Artists from outside California who were associated with early alternative country included singer-songwriters such as Lucinda Williams, Lyle Lovett and Steve Earle, the Nashville country rock band Jason and the Scorchers, the Providence "cowboy pop" band Rubber Rodeo, and the British post-punk band the Mekons. Earle, in particular, was noted for his popularity with both country and college rock audiences: he promoted his 1986 debut album Guitar Town with a tour that saw him open for both country singer Dwight Yoakam and alternative rock band the Replacements. Yoakam also cultivated a fanbase spanning multiple genres through his stripped-down, honky-tonk-influenced sound, his association with the cowpunk scene, and his performances at Los Angeles punk rock clubs. These early styles had coalesced into a genre by the time the Illinois group Uncle Tupelo released their influential debut album No Depression in 1990. The album is widely credited as being the first "alternative country" album, and it inspired the name of No Depression magazine, which exclusively covered the new genre. Following Uncle Tupelo's disbanding in 1994, its members formed two significant bands in the genre: Wilco and Son Volt. Although Wilco's sound had moved away from country and towards indie rock by the time they released their critically acclaimed album Yankee Hotel Foxtrot in 2002, they have continued to be an influence on later alt-country artists. Other acts who became prominent in the alt-country genre during the 1990s and 2000s included the Bottle Rockets, the Handsome Family, Blue Mountain, Robbie Fulks, Blood Oranges, Bright Eyes, Drive-By Truckers, Old 97's, Old Crow Medicine Show, Nickel Creek, Neko Case, and Whiskeytown, whose lead singer Ryan Adams later had a successful solo career. Alt-country, in its various iterations, overlapped with other genres, including Red Dirt country music (Cross Canadian Ragweed), jam bands (My Morning Jacket and the String Cheese Incident), and indie folk (the Avett Brothers). Despite the genre's growing popularity in the 1980s, 1990s and 2000s, alternative country and neo-traditionalist artists saw minimal support from country radio in those decades, even with strong sales and critical acclaim for albums such as the soundtrack to the 2000 film O Brother, Where Art Thou?. In 1987, the Beat Farmers gained airplay on country music stations with their song "Make It Last", but the single was pulled from the format when station programmers decreed the band's music was too rock-oriented for their audience.
However, some alt-country songs have been crossover hits to mainstream country radio in cover versions by established artists on the format; Lucinda Williams' "Passionate Kisses" was a hit for Mary Chapin Carpenter in 1993, Ryan Adams' "When the Stars Go Blue" was a hit for Tim McGraw in 2007, and Old Crow Medicine Show's "Wagon Wheel" was a hit for Darius Rucker (member of Hootie & the Blowfish) in 2013. In the 2010s, the alt-country genre saw an increase in its critical and commercial popularity, owing to the success of artists such as the Civil Wars, Chris Stapleton, Sturgill Simpson, Jason Isbell, Lydia Loveless and Margo Price. In 2019, Kacey Musgraves – a country artist who had gained a following with indie rock fans and music critics despite minimal airplay on country radio – won the Grammy Award for Album of the Year for her album Golden Hour.
Sixth generation (2000s–present)
The sixth generation of country music continued to be influenced by other genres such as pop, rock, and R&B. Richard Marx crossed over with his Days in Avalon album, which features five country songs and several country singers and musicians. Alison Krauss sang background vocals on Marx's single "Straight from My Heart". Bon Jovi also had a hit single, "Who Says You Can't Go Home", with Jennifer Nettles of Sugarland. Kid Rock's collaboration with Sheryl Crow, "Picture", was a major crossover hit in 2001 and began Kid Rock's transition from hard rock to a country-rock hybrid that would later produce another major crossover hit, 2008's "All Summer Long". (Crow, whose music had often incorporated country elements, would also officially cross over into country with her hit "Easy" from her debut country album Feels Like Home.) Darius Rucker, frontman for the 1990s pop-rock band Hootie & the Blowfish, began a country solo career in the late 2000s, one that to date has produced five albums and several hits on both the country charts and the Billboard Hot 100. Singer-songwriter Unknown Hinson became famous for his appearance in the Charlotte television show Wild, Wild, South, after which Hinson started his own band and toured in southern states. Other rock stars who featured a country song on their albums were Don Henley (who released Cass County in 2015, an album which featured collaborations with numerous country artists) and Poison. The latter half of the 2010s saw an increasing number of mainstream country acts collaborate with pop and R&B acts; many of these songs achieved commercial success by appealing to fans across multiple genres, examples including the collaborations between Kane Brown and Marshmello and between Maren Morris and Zedd. There has also been interest from pop singers in country music, including Beyoncé, Lady Gaga, Alicia Keys, Gwen Stefani, Justin Timberlake, Justin Bieber and Pink. Supporting this movement is a new generation of contemporary pop-country artists, including Taylor Swift, Miranda Lambert, Carrie Underwood, Kacey Musgraves, Miley Cyrus, Billy Ray Cyrus, Sam Hunt and Chris Young, who have introduced new themes into their work, touching on fundamental rights, feminism, and controversies about racism and the religion of older generations.
Popular culture
In 2005, country singer Carrie Underwood rose to fame as the winner of the fourth season of American Idol and has since become one of the most prominent recording artists in the genre, with worldwide sales of more than 65 million records and seven Grammy Awards.
With her first single, "Inside Your Heaven", Underwood became the only solo country artist to have a number 1 hit on the Billboard Hot 100 chart in the 2000–2009 decade and also broke Billboard chart history as the first country music artist ever to debut at No. 1 on the Hot 100. Underwood's debut album, Some Hearts, became the best-selling solo female debut album in country music history, the fastest-selling debut country album in the history of the SoundScan era and the best-selling country album of the last 10 years, being ranked by Billboard as the number 1 country album of the 2000–2009 decade. She has also become the female country artist with the most number one hits on the Billboard Hot Country Songs chart in the Nielsen SoundScan era (1991–present), with 14 No. 1s, breaking her own Guinness Book record of ten. In 2007, Underwood won the Grammy Award for Best New Artist, becoming only the second country artist in history (and the first in a decade) to win it. She also made history by becoming the seventh woman to win Entertainer of the Year at the Academy of Country Music Awards and the first woman in history to win the award twice, as well as twice consecutively. Time has listed Underwood as one of the 100 most influential people in the world. In 2016, Underwood topped the Country Airplay chart for the 15th time, becoming the female artist with the most number ones on that chart. Carrie Underwood was only one of several country stars produced by a television series in the 2000s. In addition to Underwood, American Idol launched the careers of Kellie Pickler, Josh Gracin, Bucky Covington, Kristy Lee Cook, Danny Gokey, Lauren Alaina and Scotty McCreery (as well as that of occasional country singer Kelly Clarkson) in the decade, and would continue to launch country careers in the 2010s. The series Nashville Star, while not nearly as successful as Idol, did manage to bring Miranda Lambert, Kacey Musgraves and Chris Young to mainstream success, also launching the careers of lower-profile musicians such as Buddy Jewell, Sean Patrick McGraw, and Canadian musician George Canyon. Can You Duet? produced the duos Steel Magnolia and Joey + Rory. Teen sitcoms have also influenced modern country music; in 2008, actress Jennette McCurdy (best known as the sidekick Sam on the teen sitcom iCarly) released her first single, "So Close", following that with the single "Generation Love" in 2011. Another teen sitcom star, Miley Cyrus (of Disney Channel's Hannah Montana), also had a crossover hit in the late 2000s with "The Climb" and another with "Ready, Set, Don't Go", a duet with her father, Billy Ray Cyrus. Jana Kramer, an actress in the teen drama One Tree Hill, released a country album in 2012 that produced two hit singles as of 2013. Actresses Hayden Panettiere and Connie Britton began recording country songs as part of their roles in the TV show Nashville, and Pretty Little Liars star Lucy Hale released her debut album Road Between in 2014. In 2010, the group Lady Antebellum won five Grammys, including the coveted Song of the Year and Record of the Year for "Need You Now". A large number of duos and vocal groups emerged on the charts in the 2010s, many of which feature close harmony in the lead vocals. In addition to Lady A, groups such as Little Big Town, the Band Perry, Gloriana, Thompson Square, Eli Young Band, Zac Brown Band and British duo the Shires have emerged to occupy a large share of mainstream success alongside solo singers such as Kacey Musgraves and Miranda Lambert.
One of the most commercially successful country artists of the late 2000s and early 2010s has been singer-songwriter Taylor Swift. Swift first became widely known in 2006 when her debut single, "Tim McGraw", was released when she was only 16 years old. In 2006, Swift released her self-titled debut studio album, which spent 275 weeks on the Billboard 200, one of the longest runs of any album on that chart. In 2008, Swift released her second studio album, Fearless, which had the second-longest run at number one on the Billboard 200 and was the second-best-selling album (just behind Adele's 21) of the following five years. At the 2010 Grammys, the 20-year-old Swift won Album of the Year for Fearless, making her the youngest artist at the time to win the award. Swift has since received twelve Grammys. Buoyed by her teen idol status among girls and a change in the methodology of compiling the Billboard charts to favor pop-crossover songs, Swift's 2012 single "We Are Never Ever Getting Back Together" spent the most weeks at the top of Billboard's Hot 100 chart and Hot Country Songs chart of any song in nearly five decades. The song's long run at the top of the chart was somewhat controversial, as the song is largely a pop song without much country influence, and its success on the charts was driven by a change to the chart's criteria to include airplay on non-country radio stations, prompting disputes over what constitutes a country song; many of Swift's later releases, such as 1989 (2014), Reputation (2017), and Lover (2019), were released solely to pop audiences. Swift returned to her country roots in her folk-inspired releases Folklore (2020) and Evermore (2020), with songs like "Betty" and "No Body, No Crime".
Modern variations
Influence of rock, pop and hip-hop
In the mid-to-late 2010s, country music began to sound increasingly like modern-day pop music, with simpler and more repetitive lyrics, more electronic-based instrumentation, and experimentation with "talk-singing" and rap. Pop-country pulled farther away from the traditional sounds of country music and received criticism from country music purists while gaining popularity with mainstream audiences. The topics addressed have also changed, turning to controversial subjects such as acceptance of the LGBT community, safe sex, recreational marijuana use, and questioning of religious sentiment. Influences also come from some pop artists' interest in the country genre, including Justin Timberlake with the album Man of the Woods, Beyoncé's single "Daddy Lessons" from Lemonade, Gwen Stefani with "Nobody but You", Bruno Mars, Lady Gaga, Alicia Keys, Kelly Clarkson, and Pink. The influence of rock music in country became more overt during the late 2000s and early 2010s as artists like Eric Church, Jason Aldean, and Brantley Gilbert had success; Aaron Lewis, former frontman for the rock group Staind, had a moderately successful entry into country music in 2011 and 2012, as did Dallas Smith, former frontman of the band Default. Maren Morris's successful collaboration with EDM producer Zedd, "The Middle", is considered one of the representations of the fusion of electro-pop with country music. Lil Nas X's song "Old Town Road" spent 19 weeks atop the US Billboard Hot 100 chart, becoming the longest-running number-one song since the chart debuted in 1958 and winning Billboard Music Awards, MTV Video Music Awards and a Grammy Award.
Sam Hunt "Leave the Night On" peaked concurrently on the Hot Country Songs and Country Airplay charts, making Hunt the first country artist in 22 years, since Billy Ray Cyrus, to reach the top of three country charts simultaneously in the Nielsen SoundScan-era. With the fusion genre of "country trap"—a fusion of country/western themes to a hip hop beat, but usually with fully sung lyrics—emerging in the late 2010s, line dancing country had a minor revival, examples of the phenomenon include "The Git Up" by Blanco Brown. Blanco Brown has gone of to make more traditional country soul songs such as "I Need Love" and a rendition of "Don't Take the Girl" with Tim McGraw, and collaborations like "Just the Way" with Parmalee. Another country trap artist known as Breland has seen success with "My Truck, "Throw It Back" with Keith Urban, and "Praise the Lord" featuring Thomas Rhett. Emo rap musician Sueco, released a cowpunk song in collaboration is country musician Warren Zeiders titled "Ride It Hard". Alex Melton, known for his music covers, blends pop punk with country music. Bro country In the early 2010s, "bro-country", a genre noted primarily for its themes on drinking and partying, girls, and pickup trucks became particularly popular. Notable artists associated with this genre are Luke Bryan, Jason Aldean, Blake Shelton, Jake Owen and Florida Georgia Line whose song "Cruise" became the best-selling country song of all time. Research in the mid-2010s suggested that about 45 percent of country's best-selling songs could be considered bro-country, with the top two artists being Luke Bryan and Florida Georgia Line. Albums by bro-country singers also sold very well—in 2013, Luke Bryan's Crash My Party was the third best-selling of all albums in the United States, with Florida Georgia Line's Here's to the Good Times at sixth, and Blake Shelton's Based on a True Story at ninth. It is also thought that the popularity of bro-country helped country music to surpass classic rock as the most popular genre in the American country in 2012. The genre however is controversial as it has been criticized by other country musicians and commentators over its themes and depiction of women, opening up a divide between the older generation of country singers and the younger bro country singers that was described as "civil war" by musicians, critics, and journalists." In 2014, Maddie & Tae's "Girl in a Country Song", addressing many of the controversial bro-country themes, peaked at number one on the Billboard Country Airplay chart. Bluegrass and Americana is a genre that contain songs about going through hard times, country loving, and telling stories. Newer artists like Billy Strings, the Grascals, Molly Tuttle, Tyler Childers and the Infamous Stringdusters have been increasing the popularity of this genre, alongside some of the genres more established stars who still remain popular including Rhonda Vincent, Alison Krauss and Union Station, Ricky Skaggs and Del McCoury. The genre has developed in the Northern Kentucky and Cincinnati area. Other artists include New South (band), Doc Watson, Osborne Brothers, and many others. In an effort to combat the over-reliance of mainstream country music on pop-infused artists, the sister genre of Americana began to gain popularity and increase in prominence, receiving eight Grammy categories of its own in 2009. 
Americana music incorporates elements of country music, bluegrass, folk, blues, gospel, rhythm and blues, roots rock and southern soul and is overseen by the Americana Music Association and the Americana Music Honors & Awards. As a result of an increasingly pop-leaning mainstream, many more traditional-sounding artists such as Tyler Childers, Zach Bryan and Old Crow Medicine Show began to associate themselves more with Americana and the alternative country scene, where their sound was more celebrated. Similarly, many established country acts who no longer received commercial airplay, including Emmylou Harris and Lyle Lovett, began to flourish again.
Contemporary country and western revival
Beginning in 1989, a confluence of events brought an unprecedented commercial boom to country music. New marketing strategies were used to engage fans, powered by technology that more accurately tracked the popularity of country music, and boosted by a political and economic climate that focused attention on the genre. Garth Brooks ("Friends in Low Places") in particular attracted fans with his fusion of neotraditionalist country and stadium rock. Other artists such as Brooks and Dunn ("Boot Scootin' Boogie") also combined conventional country with slick rock elements, while Lorrie Morgan, Mary Chapin Carpenter, and Kathy Mattea updated neotraditionalist styles. One root of conservative, patriotic country music was Lee Greenwood's "God Bless the USA". The September 11 attacks of 2001 and the economic recession helped move country music back into the spotlight. Many country artists, such as Alan Jackson with his ballad about the attacks, "Where Were You (When the World Stopped Turning)", wrote songs that celebrated the military, highlighted the gospel, and emphasized home and family values over wealth. Alt-country singer Ryan Adams's song "New York, New York" pays tribute to New York City, and its popular music video (shot four days before the attacks) shows Adams playing in front of the Manhattan skyline, along with several shots of the city. In contrast, more rock-oriented country singers took more direct aim at the attacks' perpetrators: Toby Keith's "Courtesy of the Red, White and Blue (The Angry American)" threatened to put "a boot in" the posterior of the enemy, while Charlie Daniels's "This Ain't No Rag, It's a Flag" promised to "hunt" the perpetrators "down like a mad dog hound." These songs gained such recognition that they put country music back into popular culture. Darryl Worley also recorded "Have You Forgotten". There have been numerous patriotic country songs throughout the years. Some modern artists who primarily or entirely produce country pop music include Kacey Musgraves, Maren Morris, Kelsea Ballerini, Sam Hunt, Kane Brown, Chris Lane, and Dan + Shay. The singers who are part of this country movement have also been described as "Nashville's new generation of country".
Despite the changes made by the new generation, it has been recognized by major music awards associations and has seen success on Billboard and international charts. Golden Hour by Kacey Musgraves won Album of the Year at the 61st Annual Grammy Awards, the Academy of Country Music Awards, and the Country Music Association Awards, although it received widespread criticism from the more traditionalist public.
International
Australia
Australian country music has a long tradition. Influenced by US country music, it has developed a distinct style, shaped by British and Irish folk ballads and Australian bush balladeers like Henry Lawson and Banjo Paterson. Country instruments, including the guitar, banjo, fiddle and harmonica, create the distinctive sound of country music in Australia and accompany songs with strong storylines and memorable choruses. Folk songs sung in Australia between the 1780s and 1920s, based around such themes as the struggle against government tyranny, or the lives of bushrangers, swagmen, drovers, stockmen and shearers, continue to influence the genre. This strain of Australian country, with lyrics focusing on Australian subjects, is generally known as "bush music" or "bush band music". "Waltzing Matilda", often regarded as Australia's unofficial national anthem, is a quintessential Australian country song, influenced more by British and Irish folk ballads than by US country and western music. The lyrics were composed by the poet Banjo Paterson in 1895. Other popular songs from this tradition include "The Wild Colonial Boy", "Click Go the Shears", "The Queensland Drover" and "The Dying Stockman". Later themes which endure to the present include the experiences of war, of droughts and flooding rains, of Aboriginality and of the railways and trucking routes which link Australia's vast distances. Pioneers of a more Americanised popular country music in Australia included Tex Morton (known as "The Father of Australian Country Music") in the 1930s. Author Andrew Smith delivers a thoroughly researched and engaged view of Tex Morton's life and his impact on the country music scene in Australia in the 1930s and 1940s. Other early stars included Buddy Williams, Shirley Thoms and Smoky Dawson. Buddy Williams (1918–1986) was the first Australian-born artist to record country music in Australia, in the late 1930s, and was the pioneer of a distinctly Australian style of country music called the bush ballad that others such as Slim Dusty would make popular in later years. During the Second World War, many of Buddy Williams' recording sessions were done while he was on leave from the Army. At the end of the war, Williams would go on to operate some of the largest travelling tent rodeo shows Australia has ever seen. In 1952, Dawson began a radio show and went on to national stardom as a singing cowboy of radio, TV and film. Slim Dusty (1927–2003) was known as the "King of Australian Country Music" and helped to popularise the Australian bush ballad. His successful career spanned almost six decades; his 1957 hit "A Pub with No Beer" was the biggest-selling record by an Australian to that time, and with over seven million record sales in Australia he is the most successful artist in Australian musical history. Dusty recorded and released his one-hundredth album in the year 2000 and was given the honour of singing "Waltzing Matilda" in the closing ceremony of the Sydney 2000 Olympic Games. Dusty's wife Joy McKean penned several of his most popular songs.
Chad Morgan, who began recording in the 1950s, has represented a vaudeville style of comic Australian country; Frank Ifield achieved considerable success in the early 1960s, especially on the UK Singles Chart; and Reg Lindsay was one of the first Australians to perform at Nashville's Grand Ole Opry, in 1974. Eric Bogle's 1972 folk lament for the Gallipoli Campaign, "And the Band Played Waltzing Matilda", recalled the British and Irish origins of Australian folk-country. Singer-songwriter Paul Kelly, whose music style straddles folk, rock and country, is often described as the poet laureate of Australian music. By the 1990s, country music had attained crossover success in the pop charts, with artists like James Blundell and James Reyne singing "Way Out West", and country star Kasey Chambers winning the ARIA Award for Best Female Artist in three years (2000, 2002 and 2004), tying with pop stars Wendy Matthews and Sia for the most wins in that category. Furthermore, Chambers has gone on to win nine ARIA Awards for Best Country Album and, in 2018, became the youngest artist ever to be inducted into the ARIA Hall of Fame. The crossover influence of Australian country is also evident in the music of successful contemporary bands the Waifs and the John Butler Trio. Nick Cave has been heavily influenced by the country artist Johnny Cash. In 2000, Cash covered Cave's "The Mercy Seat" on the album American III: Solitary Man, seemingly repaying Cave for the compliment he had paid by covering Cash's "The Singer" (originally "The Folk Singer") on his Kicking Against the Pricks album. Subsequently, Cave cut a duet with Cash on a version of Hank Williams' "I'm So Lonesome I Could Cry" for Cash's American IV: The Man Comes Around album (2002). Popular contemporary performers of Australian country music include John Williamson (who wrote the iconic "True Blue"), Lee Kernaghan (whose hits include "Boys from the Bush" and "The Outback Club"), Gina Jeffreys, Forever Road and Sara Storer. In the U.S., Olivia Newton-John, Sherrié Austin and Keith Urban have attained great success. During her time as a country singer in the 1970s, Newton-John became the first (and to date only) non-US winner of the Country Music Association Award for Female Vocalist of the Year, which many considered a controversial decision by the CMA; after starring in the rock-and-roll musical film Grease in 1978, Newton-John (mirroring the character she played in the film) shifted to pop music in the 1980s. Urban is arguably the most successful international Australian country star, winning nine CMA Awards, including three Male Vocalist of the Year wins and two wins of the CMA's top honour, Entertainer of the Year. Pop star Kylie Minogue found success with her 2018 country pop album Golden, which she recorded in Nashville; it reached number one in Scotland, the UK and her native Australia. Country music has been a particularly popular form of musical expression among Indigenous Australians. Troy Cassar-Daley is among Australia's successful contemporary Indigenous performers, and Kev Carmody and Archie Roach employ a combination of folk-rock and country music to sing about Aboriginal rights issues. The Tamworth Country Music Festival began in 1973 and now attracts up to 100,000 visitors annually. Held in Tamworth, New South Wales (the country music capital of Australia), it celebrates the culture and heritage of Australian country music.
During the festival the CMAA holds the Country Music Awards of Australia ceremony, awarding the Golden Guitar trophies. Other significant country music festivals include the Whittlesea Country Music Festival (near Melbourne) and the Mildura Country Music Festival for "independent" performers during October, and the Canberra Country Music Festival held in the national capital during November. Country HQ showcases new talent on the rise in the country music scene down under. CMC (the Country Music Channel), a 24-hour music channel dedicated to non-stop country music, can be viewed on pay TV and features, once a year, the Golden Guitar Awards, CMAs and CCMAs alongside international shows such as The Wilkinsons, The Road Hammers, and Country Music Across America.
Canada
Outside of the United States, Canada has the largest country music fan and artist base, something that is to be expected given the two countries' proximity and cultural parallels. Mainstream country music is culturally ingrained in the prairie provinces, the British Columbia Interior, Northern Ontario, and Atlantic Canada. Celtic traditional music developed in Atlantic Canada in the form of Scottish, Acadian and Irish folk music popular amongst Irish, French and Scottish immigrants to Canada's Atlantic Provinces (Newfoundland, Nova Scotia, New Brunswick, and Prince Edward Island). Like the southern United States and Appalachia, all four provinces are largely rural and of heavy British Isles stock; as such, the development of traditional music in the Maritimes somewhat mirrored the development of country music in the US South and Appalachia. Country and western music never really developed separately in Canada; however, after its introduction to Canada following the spread of radio, it developed quite quickly out of the Atlantic Canadian traditional scene. While true Atlantic Canadian traditional music is very Celtic or "sea shanty" in nature, even today the lines have often been blurred. Certain areas are often viewed as embracing one strain or the other more openly. For example, in Newfoundland the traditional music remains unique and Irish in nature, whereas traditional musicians in other parts of the region may play both genres interchangeably. Don Messer's Jubilee was a Halifax, Nova Scotia-based country/folk variety television show that was broadcast nationally from 1957 to 1969. In Canada it out-performed The Ed Sullivan Show, broadcast from the United States, and became the top-rated television show throughout much of the 1960s. Don Messer's Jubilee followed a consistent format throughout its years, beginning with a tune named "Goin' to the Barndance Tonight", followed by fiddle tunes by Messer, songs from some of his "Islanders", including singers Marg Osburne and Charlie Chamberlain, the featured guest performance, and a closing hymn. It ended with "Till We Meet Again". The guest performance slot gave national exposure to numerous Canadian folk musicians, including Stompin' Tom Connors and Catherine McKinnon. Some Maritime country performers went on to further fame beyond Canada; Hank Snow, Wilf Carter (also known as Montana Slim), and Anne Murray are the three most notable. The cancellation of the show by the public broadcaster in 1969 caused a nationwide protest, including the raising of questions in the Parliament of Canada. The Prairie provinces, due to their western cowboy and agrarian nature, are the true heartland of Canadian country music.
While the Prairies never developed a traditional music culture anything like that of the Maritimes, the folk music of the Prairies often reflected the cultural origins of the settlers, who were a mix of Scottish, Ukrainian, German and others. For these reasons polkas and western music were always popular in the region, and with the introduction of the radio, mainstream country music flourished. As the culture of the region is western and frontier in nature, the specific genre of country and western is more popular today in the Prairies than in any other part of the country. No other area of the country embraces all aspects of the culture, from two-step dancing, to cowboy dress, to rodeos, to the music itself, like the Prairies do. The Atlantic Provinces, on the other hand, produce far more traditional musicians, but they are not usually specifically country in nature, usually bordering more on the folk or Celtic genres. Canadian country pop star Shania Twain is the best-selling female country artist of all time and one of the best-selling artists of all time in any genre. Furthermore, she is the only woman to have three consecutive albums certified Diamond.
Mexico and Latin America
Country music artists from the U.S. have seen crossover with Latin American audiences, particularly in Mexico. Country music artists from throughout the U.S. have recorded renditions of Mexican folk songs, including "El Rey", which was performed on George Strait's Twang album and during Al Hurricane's tribute concert. American Latin pop crossover musicians have also combined Mexican songs with country songs in a New Mexico music style, as on Lorenzo Antonio's "Ranchera Jam". While Tejano and New Mexico music are typically thought of as Spanish-language genres, both have also had charting musicians focused on English-language music. During the 1970s, singer-songwriter Freddy Fender had two #1 country music singles that were popular throughout North America, "Before the Next Teardrop Falls" and "Wasted Days and Wasted Nights". Notable songs influenced by Hispanic and Latin culture as performed by US country music artists include Marty Robbins' "El Paso" trilogy, Willie Nelson and Merle Haggard's cover of the Townes Van Zandt song "Pancho and Lefty", "Toes" by Zac Brown Band, and "Sangria" by Blake Shelton. Regional Mexican is a radio format featuring many of Mexico's versions of country music. It includes a number of different styles, usually named after their region of origin. One specific song style, the Canción Ranchera, or simply Ranchera, literally meaning "ranch song", found its origins in the Mexican countryside and was first popularized with Mariachi. It has since also become popular with Grupero, Banda, Norteño, Tierra Caliente, Duranguense and other regional Mexican styles. The Corrido, a different song style with a similar history, is also performed in many other regional styles and is most closely related to the western style of the United States and Canada. Other song styles performed in regional Mexican music include ballads, cumbias, and boleros, among others. Country en Español (country in Spanish) is also popular in Mexico. Some Mexican artists began performing country songs in Spanish during the 1970s, and the genre became prominent mainly in the northern regions of the country during the 1980s. A Country en Español popularity boom also reached the central regions of Mexico during the 1990s. For most of its history, Country en Español mainly resembled Neotraditional country.
However, in more modern times, some artists have incorporated influences from other country music subgenres. In Brazil, there is Música Sertaneja, the most popular music genre in that country. It originated in the countryside of São Paulo state in the 1910s, before the development of U.S. country music. In Argentina, on the last weekend of September, the yearly San Pedro Country Music Festival takes place in the town of San Pedro, Buenos Aires. The festival features bands from different places in Argentina, as well as international artists from Brazil, Uruguay, Chile, Peru and the U.S.
United Kingdom
Country music is popular in the United Kingdom, although somewhat less so than in other English-speaking countries. There are some British country music acts and publications. Although radio stations devoted to country are among the most popular in other Anglophone nations, none of the top ten most-listened-to stations in the UK are country stations, and national broadcaster BBC Radio does not offer a full-time country station (BBC Radio 2 Country, a "pop-up" station, operated four days each year between 2015 and 2017). The BBC does offer a weekly country show on BBC Radio 2 hosted by Bob Harris. The most successful British country music acts of the 21st century are Ward Thomas and the Shires. In 2015, the Shires' album Brave became the first album by a UK country act ever to chart in the Top 10 of the UK Albums Chart, and they became the first UK country act to receive an award from the American Country Music Association. In 2016, Ward Thomas then became the first UK country act to hit number 1 on the UK Albums Chart with their album Cartwheels. The C2C: Country to Country festival is held every year, and for many years there was a festival at Wembley Arena broadcast on the BBC, the International Festival of Country Music, promoted by Mervyn Conn and held at the venue between 1969 and 1991. The shows were later taken into Europe and featured such stars as Johnny Cash, Dolly Parton, Tammy Wynette, David Allan Coe, Emmylou Harris, Boxcar Willie, Johnny Russell and Jerry Lee Lewis. A handful of country musicians had even greater success in mainstream British music than they did in the U.S., despite a certain amount of disdain from the music press. Britain's largest music festival, Glastonbury, has featured major US country acts in recent years, such as Kenny Rogers in 2013 and Dolly Parton in 2014. From within the UK, few country musicians have achieved widespread mainstream success. Many British singers who performed the occasional country song were primarily associated with other genres. Tom Jones, by this point near the end of his peak success as a pop singer, had a string of country hits in the late 1970s and early 1980s. The Bee Gees had some fleeting success in the genre, with one country hit as artists ("Rest Your Love on Me") and a major hit as songwriters ("Islands in the Stream"); Barry Gibb, the band's usual lead singer and last surviving member, acknowledged that country music was a major influence on the band's style. Singer Engelbert Humperdinck, while charting only once in the U.S. country top 40 with "After the Lovin'," achieved widespread success on both the U.S. and British pop charts with his covers of Nashville country ballads such as "Release Me," "Am I That Easy to Forget" and "There Goes My Everything." Welsh singer Bonnie Tyler initially started her career making country records, and in 1978 her single "It's a Heartache" reached number four on the UK Singles Chart.
In 2013, Tyler returned to her roots, blending the country elements of her early work with the rock of her successful material on her album Rocks and Honey, which featured a duet with Vince Gill. The songwriting tandem of Roger Cook and Roger Greenaway wrote a number of country hits, in addition to their widespread success in pop songwriting; Cook is notable for being the only Briton to be inducted into the Nashville Songwriters Hall of Fame. A niche country subgenre popular in the West Country is Scrumpy and Western, which consists mostly of novelty songs and comedy music recorded there (its name comes from scrumpy, an alcoholic beverage). Though primarily of local interest, the genre's largest hit in the UK and Ireland was "The Combine Harvester," which pioneered the genre and reached number one in both countries; Fred Wedlock had a number-six hit in 1981 with "The Oldest Swinger in Town." In 1975, comedian Billy Connolly topped the UK Singles Chart with "D.I.V.O.R.C.E.", a parody of the Tammy Wynette song "D-I-V-O-R-C-E". The British Country Music Festival is an annual three-day festival held in the seaside resort of Blackpool. It uniquely promotes artists from the United Kingdom and Ireland to celebrate the impact that Celtic and British settlers in America had on the origins of country music. Past headline artists have included Amy Wadge, Ward Thomas, Tom Odell, Nathan Carter, Lisa McHugh, Catherine McGrath, Wildwood Kin, The Wandering Hearts and Henry Priestman.
Ireland
In Ireland, Country and Irish is a music genre that combines traditional Irish folk music with US country music. Television channel TG4 began a quest for Ireland's next country star called Glór Tíre, translated as "Country Voice". It is now in its sixth season and is one of TG4's most-watched TV shows. Over the past ten years, country and gospel recording artist James Kilbane has reached multi-platinum success with his mix of Christian and traditional country-influenced albums. James Kilbane, like many other Irish artists, is today working more closely with Nashville. Daniel O'Donnell achieved international success with his brand of music crossing country, Irish folk and European easy listening, earning a strong following among older women both in the British Isles and in North America. A recent success in the Irish arena has been Crystal Swing.
Japan and Asia
In Japan, there are forms of J-country and J-western similar to other J-pop movements such as J-hip hop and J-rock. One of the first J-western acts was Biji Kuroda & The Chuck Wagon Boys; other vintage artists included Jimmie Tokita and His Mountain Playboys, The Blue Rangers, Wagon Aces, and Tomi Fujiyama. J-country continues to have a dedicated following in Japan, thanks to Charlie Nagatani, Katsuoshi Suga, J.T. Kanehira, Dicky Kitano, and Manami Sekiya. Country and western venues in Japan include the former annual Country Gold festival, which was put together by Charlie Nagatani, and the modern honky-tonks at Little Texas in Tokyo and Armadillo in Nagoya. In India, an annual concert festival called "Blazing Guitars", held in Chennai, brings together Anglo-Indian musicians from all over the country (including some who have emigrated to places like Australia). The year 2003 brought home-grown Indian artist Bobby Cash to the forefront of the country music culture in India when he became India's first international country music artist to chart singles in Australia.
In the Philippines, country music has found its way into the Cordilleran way of life, which often compares the Igorot lifestyle to that of US cowboys. Baguio City has an FM station that caters to country music, DZWR 99.9 Country, which is part of the Catholic Media Network. Bombo Radyo Baguio has a segment on its Sunday slot for Igorot, Ilocano and country music. More recently, DWUB has also occasionally played country music. Many country musicians tour the Philippines, and Original Pinoy Music shows influences from country. Other international country music Tom Roland, from the Country Music Association International, explains country music's global popularity: "In this respect, at least, Country Music listeners around the globe have something in common with those in the United States. In Germany, for instance, Rohrbach identifies three general groups that gravitate to the genre: people intrigued with the US cowboy icon, middle-aged fans who seek an alternative to harder rock music and younger listeners drawn to the pop-influenced sound that underscores many current Country hits." One of the first US artists to perform country music abroad was George Hamilton IV. He was the first country musician to perform in the Soviet Union; he also toured in Australia and the Middle East. He was deemed the "International Ambassador of Country Music" for his contributions to the globalization of country music. Johnny Cash, Emmylou Harris, Keith Urban, and Dwight Yoakam have also made numerous international tours. The Country Music Association undertakes various initiatives to promote country music internationally. Middle East In Iran, country music has appeared in recent years. According to Melody Music Magazine, the pioneer of country music in Iran is the English-speaking country music band Dream Rovers, whose founder, singer and songwriter is Erfan Rezayatbakhsh (elf). The band was formed in 2007 in Tehran and has since been trying to introduce and popularize country music in Iran by releasing two studio albums and performing live at concerts, despite the difficulties that the Islamic regime in Iran creates for bands active in the Western music field. Musician Toby Keith performed alongside Saudi Arabian folk musician Rabeh Sager in 2017. This concert was similar to the performances of the jazz ambassadors, who performed distinctively American music internationally. Continental Europe In Sweden, Rednex rose to stardom combining country music with electro-pop in the 1990s. In 1994, the group had a worldwide hit with their version of the traditional Southern tune "Cotton-Eyed Joe". Artists popularizing more traditional country music in Sweden have been Ann-Louise Hanson, Hasse Andersson, Kikki Danielsson, Elisabeth Andreassen and Jill Johnson. In Poland, an international country music festival known as Piknik Country has been organised in Mrągowo in Masuria since 1983. The number of country music artists in France has increased; some of the most important are Liane Edwards, Annabel, Rockie Mountains, Tahiana, and Lili West. French rock and roll singer Eddy Mitchell is also inspired by Americana and country music. In the Netherlands there are many artists producing popular country and Americana music, which is mostly in the English language, as well as Dutch country and country-like music in the Dutch language. 
The latter is mainly popular in the countryside of the northern and eastern Netherlands and is less closely associated with its US counterpart, although it often sounds very similar. Well-known popular artists mainly performing in English are Waylon, Danny Vera, Ilse DeLange, Douwe Bob and Henk Wijngaard. Performers and shows US cable television Several US television networks are at least partly devoted to the genre: Country Music Television (the first channel devoted to country music) and CMT Music (both owned by Paramount Global), RFD-TV and The Cowboy Channel (both owned by Rural Media Group), Heartland (owned by Get After It Media), Circle (a joint venture of the Grand Ole Opry and Gray Television), The Country Network (owned by TCN Country, LLC), and Country Music Channel (the country-oriented sister channel of California Music Channel). The Nashville Network (TNN) was launched in 1983, just two days after CMT, as a channel devoted to country music, and later added sports and outdoor lifestyle programming. In 2000, after TNN and CMT fell under the same corporate ownership, TNN was stripped of its country format and rebranded as The National Network, then Spike TV in 2003, Spike in 2006, and finally Paramount Network in 2018. TNN was later revived from 2012 to 2013 after Jim Owens Entertainment (the company responsible for prominent TNN hosts Crook & Chase) acquired the trademark and licensed it to Luken Communications; that channel renamed itself Heartland after Luken was embroiled in an unrelated dispute that left the company bankrupt. Great American Country (GAC) was launched in 1995, also as a country music-oriented channel that would later add lifestyle programming pertaining to the American Heartland and South. In spring 2021, GAC's then-owner, Discovery, Inc., divested the network to GAC Media, which also acquired the equestrian network Ride TV. Later that summer, GAC Media relaunched Great American Country as GAC Family, a family-oriented general entertainment network, while Ride TV was relaunched as GAC Living, a network devoted to programming pertaining to lifestyles of the American South. The GAC acronym, which once stood for "Great American Country," now stands for "Great American Channels". Canadian television Only one television channel in Canada was dedicated to country music: CMT, owned by Corus Entertainment (90%) and Viacom (10%). However, the lifting of strict genre licensing restrictions saw the network remove the last of its music programming at the end of August 2017 in favour of a schedule of generic off-network family sitcoms, Cancom-compliant lifestyle programming, and reality programming. In the past, the present-day Cottage Life network had a country focus as Country Canada and later CBC Country Canada, before drifting into an alternate network for overflow CBC content as Bold. Stingray Music continues to maintain several country music audio-only channels on cable radio. In the past, country music had an extensive presence, especially on the Canadian national broadcaster, CBC Television. The show Don Messer's Jubilee significantly affected country music in Canada; for instance, it was the program that launched Anne Murray's career. Gordie Tapp's Country Hoedown and its successor, The Tommy Hunter Show, ran for a combined 36 years on the CBC, from 1956 to 1992; in its last nine years on air, the U.S. cable network TNN also carried Hunter's show. 
Australian cable television The only network dedicated to country music in Australia was the Country Music Channel owned by Foxtel. It ceased operations in June 2020 and was replaced by CMT (owned by Network 10 parent company Paramount Networks UK & Australia). British digital television One music video channel is now dedicated to country music in the United Kingdom: Spotlight TV, owned by Canis Media. Festivals Criticism Subgenres misrepresented on streaming services Computer science and music experts identified issues with algorithms on streaming services such as Spotify and Apple Music, specifically the categorical homogenization of music curation and metadata within larger genres such as country music. Musicians and songs from minority heritage styles, such as Appalachian, Cajun, New Mexico, and Tejano music, underperform on these platforms due to underrepresentation and miscategorization of these subgenres. Race issue in modern country music The Country Music Association has awarded the New Artist award to a black American only twice in 63 years, and never to a Hispanic musician. The broader modern Nashville-based Country music industry has underrepresented significant black and Latino contributions within Country music, including popular subgenres such as Cajun, Creole, Tejano, and New Mexico music. A 2021 CNN article states, "Some in country music have signaled that they are no longer content to be associated with a painful history of racism." Black country-music artist Mickey Guyton had been included among the nominees for the 2021 award, effectively creating a litmus-test for the genre. Guyton has expressed bewilderment that, despite substantial coverage by online platforms like Spotify and Apple Music, her music, like that of Valerie June, another black musician who embraces aspects of country in her Appalachian- and Gospel-tinged work and who has been embraced by international music audiences, is still effectively ignored by American broadcast country-music radio. Guyton's 2021 album Remember Her Name in part references the case of black health-care professional Breonna Taylor, who was killed in her home by police. In 2023, "Try That in a Small Town" by Jason Aldean became the subject of widespread controversy and media attention following the release of its music video. Tennessee state representative Justin Jones referred to the song as a "heinous vile racist song" which attempts to normalize "racist, violence, vigilantism and white nationalism". Others thought the lyrics were supportive of lynchings and sundown towns. Amanda Marie Martinez of NPR wrote that the song "builds on a lineage of anti-city songs in country music that place the rural and urban along not only a moral versus immoral binary, but an implicitly racialized one as well...selective availability of home loans in suburbs and racially restrictive housing covenants in cities furthered white flight, making cities synonymous with non-whiteness." She concluded by stating that such songs are "why country music continues to be a frightening space for marginalized communities". See also American Country Countdown Awards Canadian Country Music Association CMT Music Awards Country (identity) Country and Irish Country Music Hall of Fame and Museum Country-western dance Culture of the Southern United States Music genre List of country music performers List of RPM number-one country singles Music of the United States Pop music Western Music Association 2021 in country music References Further reading Thomas S. 
Johnson (1981) "That Ain't Country: The Distinctiveness of Commercial Western Music" JEMF Quarterly. Vol. 17, No. 62. Summer, 1981. pp. 75–84. Bill Legere (1977). Record Collectors Guide of Country LPs. Limited ed. Mississauga, Ont.: W.J. Legere. 269, 25, 29, 2 p., thrice perforated and looseleaf. Without ISBN Bill Legere ([1977]). E[lectrical] T[ranscription]s: Transcription Library of Bill Legere. Mississauga, Ont.: B. Legere. 3 vols., each of which is thrice perforated and looseleaf. N.B.: Vol. 1–2, Country Artists; vol. 3, Pop Artists. Without ISBN Diane Pecknold (ed.) Hidden in the Mix: The African American Presence in Country Music. Durham, NC: Duke University Press, 2013. External links The Country Music Association – Nashville, Tennessee (CMA) Western Music Association (WMA) Country Music Hall of Fame and Museum – Nashville, Tennessee Grand Ole Opry – Nashville, Tennessee Irish country music Country Music Festivals Ontario Website Nashville Songwriters Hall of Fame Foundation TIME Archive of country music's progression Xroad.virginia.edu, alt country from American Studies at the University of Virginia Largest collection of online Country music radio stations Kingwood Kowboy's History Of Country Music A Treasure Trove for Country Music Collectors. The British Archive of Country Music Records, BACM, is dedicated to the preservation of traditional country music
5248
https://en.wikipedia.org/wiki/Cold%20War%20%281948%E2%80%931953%29
Cold War (1948–1953)
The Cold War (1948–1953) is the period within the Cold War from the incapacitation of the Allied Control Council in 1948 to the conclusion of the Korean War in 1953. The list of world leaders in these years is as follows: 1948–49: Clement Attlee (UK); Harry Truman (US); Vincent Auriol (France); Joseph Stalin (USSR); Chiang Kai-shek (Allied China) 1950–51: Clement Attlee (UK); Harry Truman (US); Vincent Auriol (France); Joseph Stalin (USSR); Mao Zedong (China) 1952–53: Winston Churchill (UK); Harry Truman (US); Vincent Auriol (France); Joseph Stalin (USSR); Mao Zedong (China) Europe Berlin Blockade After the Marshall Plan, the introduction of a new currency to Western Germany to replace the debased Reichsmark, and massive electoral losses for communist parties in 1946, the Soviet Union cut off surface road access to Berlin in June 1948. On the day of the Berlin Blockade, a Soviet representative told the other occupying powers, "We are warning both you and the population of Berlin that we shall apply economic and administrative sanctions that will lead to circulation in Berlin exclusively of the currency of the Soviet occupation zone." Thereafter, street and water communications were severed, rail and barge traffic was stopped, and the Soviets initially stopped supplying food to the civilian population in the non-Soviet sectors of Berlin. Because Berlin was located within the Soviet-occupied zone of Germany and the other occupying powers had previously relied on Soviet good will for access to Berlin, the only available methods of supplying the city were three limited air corridors. By February 1948, because of massive post-war military cuts, the entire United States army had been reduced to 552,000 men. Military forces in the non-Soviet sectors of Berlin totaled only 8,973 Americans, 7,606 British and 6,100 French. Soviet military forces in the Soviet zone that surrounded Berlin totaled one and a half million men. The two United States regiments in Berlin would have provided little resistance against a Soviet attack. Believing that Britain, France and the United States had little option other than to acquiesce, the Soviet Military Administration in Germany celebrated the beginning of the blockade. Thereafter, a massive aerial supply campaign of food, water and other goods was initiated by the United States, Britain, France and other countries. The Soviets derided "the futile attempts of the Americans to save face and to maintain their untenable position in Berlin." The success of the airlift eventually caused the Soviets to lift their blockade in May 1949. However, the Soviet Army was still capable of conquering Western Europe without much difficulty. In September 1948, US military intelligence experts estimated that the Soviets had about 485,000 troops in their German occupation zone and in Poland, and some 1.785 million troops in Europe in total. At the same time, the number of US troops in 1948 was about 140,000. Tito–Stalin Split After disagreements between Yugoslavian leader Josip Broz Tito and the Soviet Union regarding Greece and the People's Republic of Albania, the Tito–Stalin split occurred, followed by Yugoslavia's expulsion from the Cominform in June 1948 and a brief failed Soviet putsch in Belgrade. The split created two separate communist forces in Europe. A vehement campaign against "Titoism" was immediately started in the Eastern Bloc, alleging that agents of both the West and Tito were everywhere engaged in subversive activity. 
This resulted in the persecution of many major party cadres, including those in East Germany. The Free Territory of Trieste was split up and dissolved in 1954 and 1975, also because of the détente between the West and Tito. NATO The United States joined Britain, France, Canada, Denmark, Portugal, Norway, Belgium, Iceland, Luxembourg, Italy, and the Netherlands in 1949 to form the North Atlantic Treaty Organization (NATO), the United States' first "entangling" European alliance in 170 years. West Germany, Spain, Greece, and Turkey would later join this alliance. The Eastern leaders retaliated against these steps by integrating the economies of their nations in Comecon, their version of the Marshall Plan; exploding the first Soviet atomic device in 1949; signing an alliance with the People's Republic of China in February 1950; and forming the Warsaw Pact, Eastern Europe's counterpart to NATO, in 1955. The Soviet Union, Albania, Czechoslovakia, Hungary, East Germany, Bulgaria, Romania, and Poland founded this military alliance. NSC 68 U.S. officials quickly moved to escalate and expand "containment." In a secret 1950 document, NSC 68, they proposed to strengthen their alliance systems, quadruple defense spending, and embark on an elaborate propaganda campaign to convince the U.S. public to fight this costly cold war. Truman ordered the development of a hydrogen bomb. In early 1950, the U.S. made its first efforts to oppose communist forces in Vietnam, planned to form a West German army, and prepared proposals for a peace treaty with Japan that would guarantee long-term U.S. military bases there. Outside Europe The Cold War took place worldwide, but it had a partially different timing and trajectory outside Europe. In Africa, decolonization took place first; it was largely accomplished in the 1950s. The main rivals then sought bases of support in the new national political alignments. In Latin America, the first major confrontation took place in Guatemala in 1954. When the new Castro government of Cuba turned to the Soviet Union for support in 1960, Cuba became the center of the anti-American Cold War forces, supported by the Soviet Union. Chinese Civil War As Japan's empire collapsed in 1945, the civil war resumed in China between the Kuomintang (KMT) led by Generalissimo Chiang Kai-shek and the Chinese Communist Party led by Mao Zedong. The USSR had signed a Treaty of Friendship with the Kuomintang in 1945 and disavowed support for the Chinese Communists. The outcome was closely fought, with the Communists finally prevailing with superior military tactics. Although the Nationalists had an advantage in numbers of men and weapons, initially controlled a much larger territory and population than their adversaries, and enjoyed considerable international support, they were exhausted by the long war with Japan and the attendant internal responsibilities. In addition, the Chinese Communists were able to fill the political vacuum left in Manchuria after Soviet forces withdrew from the area and thus gained China's prime industrial base. The Chinese Communists were able to fight their way from the north and northeast, and virtually all of mainland China was taken by the end of 1949. On October 1, 1949, Mao Zedong proclaimed the People's Republic of China (PRC). Chiang Kai-shek, 600,000 Nationalist troops and 2 million refugees, predominantly from the government and business community, fled from the mainland to the island of Taiwan. 
In December 1949, Chiang proclaimed Taipei the temporary capital of the Republic of China (ROC) and continued to assert his government as the sole legitimate authority in China. The hostility between the Communists on the mainland and the Nationalists on Taiwan continued throughout the Cold War. Though the United States refused to aid Chiang Kai-shek in his hope to "recover the mainland," it continued supporting the Republic of China with military supplies and expertise to prevent Taiwan from falling into PRC hands. Through the support of the Western bloc (most Western countries continued to recognize the ROC as the sole legitimate government of China), the Republic of China on Taiwan retained China's seat in the United Nations until 1971. Madiun Affair The Madiun Affair took place on September 18, 1948 in the city of Madiun, East Java. This rebellion was carried out by the Front Demokrasi Rakyat (FDR, People's Democratic Front), which united all socialist and communist groups in Indonesia. The rebellion ended three months later after its leaders were arrested and executed by the TNI. The revolt had its origins in the fall of the Amir Syarifuddin Cabinet following the signing of the Renville Agreement, which benefited the Dutch; the cabinet was eventually replaced by the Hatta Cabinet, which did not belong to the left wing. This led Amir Syarifuddin to declare opposition to the Hatta Cabinet government and to announce the formation of the People's Democratic Front. Earlier, in the PKI Politburo session on August 13–14, 1948, Musso, an Indonesian communist figure, had introduced a political concept called "Jalan Baru" (New Road). He also wanted a single Marxist party called the PKI (Communist Party of Indonesia), consisting of the illegal communists, the Labour Party of Indonesia, and the Partai Sosialis (Socialist Party). On September 18, 1948, the FDR declared the formation of the Republic of Soviet-Indonesia. In addition, the communists also carried out a rebellion in the Pati Residency and kidnapped groups who were considered to be opposed to the communists. The rebellion even resulted in the murder of the Governor of East Java at the time, Raden Mas Tumenggung Ario Soerjo. A crackdown operation against this movement began, led by A.H. Nasution. The Indonesian government also assigned Commander General Sudirman to the First Military Operations Movement, in which General Sudirman ordered Colonel Gatot Soebroto and Colonel Sungkono to mobilize the TNI and police to crush the rebellion. On September 30, 1948, Madiun was recaptured by the Republic of Indonesia. Musso was shot dead while fleeing in Sumoroto, and Amir Syarifuddin was executed after being captured in Central Java. In early December 1948, the Madiun Affair crackdown was declared complete. Korean War In early 1950, the United States made its first commitment to form a peace treaty with Japan that would guarantee long-term U.S. military bases. Some observers (including George Kennan) believed that the Japanese treaty led Stalin to approve a plan to invade U.S.-supported South Korea on June 25, 1950. Korea had been divided at the end of World War II along the 38th parallel into Soviet and U.S. occupation zones, in which a communist government was installed in the North by the Soviets, and an elected government in the South came to power after UN-supervised elections in 1948. In June 1950, Kim Il Sung's North Korean People's Army invaded South Korea. 
Fearing that communist Korea under a Kim Il Sung dictatorship could threaten Japan and foster other communist movements in Asia, Truman committed U.S. forces and obtained help from the United Nations to counter the North Korean invasion. The Soviets boycotted UN Security Council meetings while protesting the Council's failure to seat the People's Republic of China and, thus, did not veto the Council's approval of UN action to oppose the North Korean invasion. A joint UN force of personnel from South Korea, the United States, Britain, Turkey, Canada, Australia, France, the Philippines, the Netherlands, Belgium, New Zealand and other countries was assembled to stop the invasion. After a Chinese invasion to assist the North Koreans, fighting stabilized along the 38th parallel, which had separated the Koreas. Truman faced a hostile China, a Sino-Soviet partnership, and a defense budget that had quadrupled in eighteen months. The Korean Armistice Agreement was signed in July 1953 after the death of Stalin, who had been insisting that the North Koreans continue fighting. In North Korea, Kim Il Sung created a highly centralized and brutal dictatorship, according himself unlimited power and generating a formidable cult of personality. Hydrogen bomb A hydrogen bomb, which relied on nuclear fusion instead of nuclear fission, was first tested by the United States in November 1952 and the Soviet Union in August 1953. Such bombs were first deployed in the 1960s. Culture and media Fear of a nuclear war spurred the production of public safety films by the United States federal government's Civil Defense branch that demonstrated ways of protecting oneself from a Soviet nuclear attack. The 1951 children's film Duck and Cover is a prime example. George Orwell's classic dystopia Nineteen Eighty-Four was published in 1949. The novel explores life in an imagined future world where a totalitarian government has achieved terrifying levels of power and control. With Nineteen Eighty-Four, Orwell taps into the anti-communist fears that would continue to haunt so many in the West for decades to come. In a Cold War setting, his descriptions could hardly fail to evoke comparison to Soviet communism and the seeming willingness of Stalin and his successors to control those within the Soviet bloc by whatever means necessary. Orwell's famous allegory of totalitarian rule, Animal Farm, published in 1945, provoked similar anti-communist sentiments. See also Western Union History of the Soviet Union (1927–1953) History of the United States (1945–1964) Timeline of events in the Cold War Animal Farm Notes References Ball, S. J. The Cold War: An International History, 1947–1991 (1998). British perspective Brzezinski, Zbigniew. The Grand Failure: The Birth and Death of Communism in the Twentieth Century (1989); Brune, Lester and Richard Dean Burns. Chronology of the Cold War: 1917–1992 (2005) 700pp; highly detailed month-by-month summary for many countries Gaddis, John Lewis. The Cold War: A New History (2005) Gaddis, John Lewis. Long Peace: Inquiries into the History of the Cold War (1987) Gaddis, John Lewis. Strategies of Containment: A Critical Appraisal of Postwar American National Security Policy (1982) LaFeber, Walter. America, Russia, and the Cold War, 1945–1992 7th ed. (1993) Lewkowicz, Nicolas (2018) The United States, the Soviet Union and the Geopolitical Implications of the Origins of the Cold War, Anthem Press, London Mitchell, George. The Iron Curtain: The Cold War in Europe (2004) Ninkovich, Frank. 
Germany and the United States: The Transformation of the German Question since 1945 (1988) Paterson, Thomas G. Meeting the Communist Threat: Truman to Reagan (1988) Sivachev, Nikolai and Nikolai Yakovlev, Russia and the United States (1979), by Soviet historians Ulam, Adam B. Expansion and Coexistence: Soviet Foreign Policy, 1917–1973, 2nd ed. (1974) Walker, J. Samuel. "Historians and Cold War Origins: The New Consensus", in Gerald K. Haines and J. Samuel Walker, eds., American Foreign Relations: A Historiographical Review (1981), 207–236. Cumings, Bruce. The Origins of the Korean War (2 vols., 1981–90), friendly to North Korea and hostile to the U.S. Holloway, David. Stalin and the Bomb: The Soviet Union and Atomic Energy, 1939–1956 (1994) Goncharov, Sergei, John Lewis and Xue Litai, Uncertain Partners: Stalin, Mao and the Korean War (1993) Leffler, Melvyn. A Preponderance of Power: National Security, the Truman Administration and the Cold War (1992). Mastny, Vojtech. Russia's Road to the Cold War: Diplomacy, Warfare, and the Politics of Communism, 1941–1945 (1979) Zhang, Shu Guang. Beijing's Economic Statecraft during the Cold War, 1949–1991 (2014). online review External links Draft, Report on Communist Expansion, February 28, 1947 The division of Europe on CVCE website James F. Byrnes, Speaking Frankly The division of Germany. From BYRNES, James F. Speaking Frankly. New York: Harper and Brothers Publishers, 1947. 324 p. Available on the CVCE website. The beginning of the Cold War on CVCE website The Sinews of Peace Winston Churchill's speech of 5 March 1946, warning about the advance of communism in central Europe. Sound extract on the CVCE website. Dividing up Europe The 1944 division of Europe between the Soviet Union and Britain into zones of influence. On CVCE website James Francis Byrnes and U.S. Policy towards Germany 1945–1947 Deutsch-Amerikanische Zentrum / James-F.-Byrnes-Institut e.V UK Policy towards Germany National Archives excerpts of Cabinet meetings. Royal Engineers Museum Royal Engineers and the Cold War Cold War overview
5249
https://en.wikipedia.org/wiki/Crony%20capitalism
Crony capitalism
Crony capitalism, sometimes also called simply cronyism, is a pejorative term used in political discourse to describe a situation in which businesses profit from a close relationship with state power, either through an anti-competitive regulatory environment, direct government largesse, and/or corruption. Examples given for crony capitalism include the obtainment of permits, government grants, tax breaks, or other undue influence from businesses over the state's deployment of public goods, for example, mining concessions for primary commodities or contracts for public works. In other words, it is used to describe a situation where businesses thrive not as a result of free enterprise, but rather as a result of collusion between a business class and the political class. Money is then made not merely by making a profit in the market, but through profiteering by rent seeking using this monopoly or oligopoly. Entrepreneurship and innovative practices that seek to reward risk are stifled, since crony businesses add little value; hardly anything of significant value is created by them, with transactions taking the form of trading. Crony capitalism spills over into government, politics, and the media when this nexus distorts the economy and affects society to such an extent that it corrupts public-serving economic, political, and social ideals. Historical usage The first extensive use of the term "crony capitalism" came about in the 1980s, to characterize the Philippine economy under the dictatorship of Ferdinand Marcos. Early uses of this term to describe the economic practices of the Marcos regime included that of Ricardo Manapat, who introduced it in his 1979 pamphlet "Some are Smarter than Others", which was later published in 1991; former Time magazine business editor George M. Taber, who used the term in a Time magazine article in 1980; and activist (and later Finance Minister) Jaime Ongpin, who used the term extensively in his writing and is sometimes credited with having coined it. The term crony capitalism made a significant impact on the public as an explanation of the Asian financial crisis. It is also used to describe governmental decisions favoring cronies of governmental officials. The term is used largely interchangeably with the related term corporate welfare, although the latter is by definition specific to corporations. In practice Crony capitalism exists along a continuum. In its lightest form, crony capitalism consists of collusion among market players which is officially tolerated or encouraged by the government. While perhaps lightly competing against each other, they will present a unified front (sometimes called a trade association or industry trade group) to the government in requesting subsidies or aid or regulation. Newcomers to a market then need to surmount significant barriers to entry in seeking loans, acquiring shelf space, or receiving official sanction. Some such systems are very formalized, such as sports leagues and the Medallion System of the taxicabs of New York City, but often the process is more subtle, such as expanding training and certification exams to make it more expensive for new entrants to enter a market and thereby limiting potential competition. In technological fields, there may evolve a system whereby new entrants may be accused of infringing on patents that the established competitors never assert against each other. In spite of this, some competitors may succeed when the legal barriers are light. 
The term crony capitalism is generally used when these practices either come to dominate the economy as a whole, or come to dominate the most valuable industries in an economy. Intentionally ambiguous laws and regulations are common in such systems. Taken strictly, such laws would greatly impede practically all business activity, but in practice they are only erratically enforced. The specter of having such laws suddenly brought down upon a business provides an incentive to stay in the good graces of political officials. Troublesome rivals who have overstepped their bounds can have these laws suddenly enforced against them, leading to fines or even jail time. Even in high-income democracies with well-established legal systems and freedom of the press in place, a larger state is generally associated with increased political corruption. The term crony capitalism was initially applied to states involved in the 1997 Asian financial crisis such as Indonesia, South Korea and Thailand. In these cases, the term was used to point out how family members of the ruling leaders became extremely wealthy with no non-political justification. Asian economies such as Hong Kong and Malaysia still score very poorly in rankings measuring this. It was also used in this context as part of a broader liberal critique of economic dirigisme. The term has also been applied to the system of oligarchs in Russia. Other states to which the term has been applied include India, in particular the system after the 1990s liberalization, whereby land and other resources were given at throwaway prices in the name of public–private partnerships, as well as the more recent Coalgate scam and the cheap allocation of land and resources to the Adani SEZ under the Congress and BJP governments. Similar references to crony capitalism have been made regarding other countries, such as Argentina and Greece. Wu Jinglian, one of China's leading economists and a longtime advocate of its transition to free markets, says that China faces two starkly contrasting futures: a market economy under the rule of law or crony capitalism. A dozen years later, prominent political scientist Pei Minxin concluded that the latter course had become deeply embedded in China. The anti-corruption campaign under Xi Jinping (2012–) has seen more than 100,000 high- and low-ranking Chinese officials indicted and jailed. Many prosperous nations have also had varying amounts of cronyism throughout their history, including the United Kingdom (especially in the 1600s and 1700s), the United States and Japan. Crony capitalism index The Economist benchmarks countries based on a crony-capitalism index calculated from how much economic activity occurs in industries prone to cronyism. Its 2014 Crony Capitalism Index ranking listed Hong Kong, Russia and Malaysia in the top three spots. In finance Crony capitalism in finance was found in the Second Bank of the United States. It was a private company, but its largest stockholder was the federal government, which owned 20%. It was an early bank regulator and grew to be one of the most powerful organizations in the country, due largely to its being the depository of the government's revenue. The Gramm–Leach–Bliley Act in 1999 completely removed Glass–Steagall's separation between commercial banks and investment banks. After this repeal, commercial banks, investment banks and insurance companies combined their lobbying efforts. 
Critics claim this was instrumental in the passage of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005. In sections of an economy More direct government involvement in a specific sector can also lead to specific areas of crony capitalism, even if the economy as a whole may be competitive. This is most common in natural resource sectors through the granting of mining or drilling concessions, but it is also possible through a process known as regulatory capture where the government agencies in charge of regulating an industry come to be controlled by that industry. Governments will often establish regulatory agencies in good faith to oversee an industry. However, the members of an industry have a very strong interest in the actions of that regulatory body while the rest of the citizenry are only lightly affected. As a result, it is not uncommon for current industry players to gain control of the watchdog and to use it against competitors. This typically takes the form of making it very expensive for a new entrant to enter the market. An 1824 landmark United States Supreme Court ruling overturned a New York State-granted monopoly ("a veritable model of state munificence" facilitated by Robert R. Livingston, one of the Founding Fathers) for the then-revolutionary technology of steamboats. Leveraging the Supreme Court's establishment of Congressional supremacy over commerce, Congress established the Interstate Commerce Commission in 1887 with the intent of regulating railroad robber barons. President Grover Cleveland appointed Thomas M. Cooley, a railroad ally, as its first chairman, and a permit system was used to deny access to new entrants and legalize price fixing. The defense industry in the United States is often described as an example of crony capitalism in an industry. Connections with the Pentagon and lobbyists in Washington are described by critics as more important than actual competition due to the political and secretive nature of defense contracts. In the Airbus-Boeing WTO dispute, Airbus (which receives outright subsidies from European governments) has stated that Boeing receives similar subsidies, hidden as inefficient defense contracts. Other American defense companies were put under scrutiny for no-bid Iraq War and Hurricane Katrina-related contracts, purportedly awarded due to having cronies in the Bush administration. Gerald P. O'Driscoll, former vice president at the Federal Reserve Bank of Dallas, stated that Fannie Mae and Freddie Mac became examples of crony capitalism as government backing let Fannie and Freddie dominate mortgage underwriting, saying: "The politicians created the mortgage giants, which then returned some of the profits to the pols—sometimes directly, as campaign funds; sometimes as "contributions" to favored constituents". In developing economies In its worst form, crony capitalism can devolve into simple corruption where any pretense of a free market is dispensed with, bribes to government officials are considered de rigueur and tax evasion is common. This is seen in many parts of Africa and is sometimes called plutocracy (rule by wealth) or kleptocracy (rule by theft). Kenyan economist David Ndii has repeatedly brought to light how this system has manifested itself over time under the presidency of Uhuru Kenyatta. Corrupt governments may favor one set of business owners who have close ties to the government over others. This may also be done with religious or ethnic favoritism. 
For instance, Alawites in Syria have a disproportionate share of power in the government and business there (President Assad himself is an Alawite). This can be explained by considering personal relationships as a social network. As government and business leaders try to accomplish various things, they naturally turn to other powerful people for support in their endeavors. These people form hubs in the network. In a developing country those hubs may be very few, thus concentrating economic and political power in a small interlocking group. Normally, this would be untenable to maintain in business, as new entrants would affect the market. However, if business and government are entwined, then the government can maintain the small-hub network. Raymond Vernon, specialist in economics and international affairs, wrote that the Industrial Revolution began in Great Britain because Britain was the first to successfully limit the power of veto groups (typically cronies of those with power in government) to block innovations, writing: "Unlike most other national environments, the British environment of the early 19th century contained relatively few threats to those who improved and applied existing inventions, whether from business competitors, labor, or the government itself. In other European countries, by contrast, the merchant guilds ... were a pervasive source of veto for many centuries. This power was typically bestowed upon them by government." For example, a Russian inventor produced a steam engine in 1766 and disappeared without a trace. Vernon further stated that "a steam powered horseless carriage produced in France in 1769 was officially suppressed." James Watt began experimenting with steam in 1763, got a patent in 1769 and began commercial production in 1775. Raghuram Rajan, former governor of the Reserve Bank of India, has said: "One of the greatest dangers to the growth of developing countries is the middle income trap, where crony capitalism creates oligarchies that slow down growth. If the debate during the elections is any pointer, this is a very real concern of the public in India today". Tavleen Singh, columnist for The Indian Express, has disagreed. According to Singh, India's corporate success is not a product of crony capitalism, but because India is no longer under the influence of crony socialism. Political viewpoints While the problem is generally accepted across the political spectrum, ideology shades the view of the problem's causes and therefore its solutions. Political views mostly fall into two camps which might be called the socialist and capitalist critique. The socialist position is that crony capitalism is the inevitable result of any strictly capitalist system and that a broadly democratic government must therefore regulate economic, or wealthy, interests to restrict monopoly. The capitalist position is that natural monopolies are rare and that governmental regulations therefore generally abet established wealthy interests by restricting competition. Socialist critique Critics of crony capitalism, including socialists and anti-capitalists, often assert that so-called crony capitalism is simply the inevitable result of any strictly capitalist system. Jane Jacobs described it as a natural consequence of collusion between those managing power and trade, while Noam Chomsky has argued that the word crony is superfluous when describing capitalism. Since businesses make money and money leads to political power, businesses will inevitably use their power to influence governments. 
Much of the impetus behind campaign finance reform in the United States and in other countries is an attempt to prevent economic power being used to take political power. Ravi Batra argues that "all official economic measures adopted since 1981 ... have devastated the middle class" and that the Occupy Wall Street movement should push for their repeal and thus end the influence of the super wealthy in the political process, which he considers a manifestation of crony capitalism. Socialist economists, such as Robin Hahnel, have criticized the term as an ideologically motivated attempt to cast what are, in their view, the fundamental problems of capitalism as avoidable irregularities. They dismiss the term as an apologetic for the failures of neoliberal policy and, more fundamentally, for what they perceive as the weaknesses of market allocation. Capitalist critique Supporters of capitalism also generally oppose crony capitalism. Further, supporters such as classical liberals, neoliberals and right-libertarians consider it an aberration brought on by governmental favors that are incompatible with the free market. In the capitalist view, cronyism is the result of an excess of interference in the market, which will inevitably result in a toxic combination of corporations and government officials running sectors of the economy. For instance, the Financial Times observed that, in Vietnam during the 2010s, the primary beneficiaries of cronyism were Communist party officials, noting also the "common practice of employing only party members and their family members and associates to government jobs or to jobs in state-owned enterprises." Conservative commentator Ben Shapiro prefers to equate this problem with terms such as corporatocracy or corporatism, considered "a modern form of mercantilism", to emphasize that the only way to run a profitable business in such a system is to have help from corrupt government officials. Likewise, Hernando de Soto said that mercantilism "is also known as 'crony' or 'noninclusive' capitalism". Even if the initial regulation was well-intentioned (to curb actual abuses) and even if the initial lobbying by corporations was well-intentioned (to reduce illogical regulations), the mixture of business and government stifles competition, a collusive result called regulatory capture. Burton W. Folsom Jr. distinguishes those who engage in crony capitalism, whom he designates political entrepreneurs, from those who compete in the marketplace without special aid from government, whom he calls market entrepreneurs. Market entrepreneurs such as James J. Hill, Cornelius Vanderbilt and John D. Rockefeller succeeded by producing a quality product at a competitive price. Political entrepreneurs such as Edward Collins in steamships and the leaders of the Union Pacific Railroad in railroads, by contrast, were men who used the power of government to succeed; they tried to gain subsidies or in some way use government to stop competitors. See also Corporatocracy Cronies of Ferdinand Marcos Economic History of the Philippines under Ferdinand Marcos Government failure Government-owned corporation Inverted totalitarianism Iron triangle (US politics) Licence Raj (concept in Indian political-economics) Mercantilism Patrimonialism Political family Political machine Regulatory capture Rent-seeking Stamocap State capture Supercapitalism Zhao family Notes References Further reading Khatri, Naresh (2013). Anatomy of Indian Brand of Crony Capitalism. https://ssrn.com/abstract=2335201. 
http://mpra.ub.uni-muenchen.de/19626/1/WP0802.pdf
5252
https://en.wikipedia.org/wiki/Lists%20of%20universities%20and%20colleges
Lists of universities and colleges
This is a list of lists of universities and colleges. Subject of study Aerospace engineering Agriculture Art schools Business Chiropractic Engineering Forestry Law Maritime studies Medicine Music Nanotechnology Osteopathy Pharmaceuticals Social Work Institution type Community colleges For-profit universities and colleges Land-grant universities Liberal arts universities National universities Postgraduate-only institutions Private universities Public universities Research universities Technical universities Sea-grant universities Space-grant universities State universities and colleges Unaccredited universities Location Lists of universities and colleges by country List of largest universities Religious affiliation Assemblies of God Baptist colleges and universities in the United States Catholic universities Ecclesiastical universities Benedictine colleges and universities Jesuit institutions Opus Dei universities Pontifical universities International Council of Universities of Saint Thomas Aquinas International Federation of Catholic Universities Christian churches and churches of Christ Churches of Christ Church of the Nazarene Islamic seminaries Lutheran colleges and universities International Association of Methodist-related Schools, Colleges, and Universities Muslim educational institutions Association of Presbyterian Colleges and Universities Extremities Endowment Largest universities by enrollment Oldest madrasahs in continuous operation Oldest universities in continuous operation Other Colleges and universities named after people History Medieval universities Ancient universities in Britain and Ireland See also Lists of schools Distance education
5253
https://en.wikipedia.org/wiki/Constitution
Constitution
A constitution is the aggregate of fundamental principles or established precedents that constitute the legal basis of a polity, organization or other type of entity, and commonly determines how that entity is to be governed. When these principles are written down into a single document or set of legal documents, those documents may be said to embody a written constitution; if they are encompassed in a single comprehensive document, it is said to embody a codified constitution. The Constitution of the United Kingdom is a notable example of an uncodified constitution; it is instead written in numerous fundamental Acts of a legislature, court cases, or treaties. Constitutions concern different levels of organizations, from sovereign countries to companies and unincorporated associations. A treaty that establishes an international organization is also its constitution, in that it would define how that organization is constituted. Within states, a constitution defines the principles upon which the state is based, the procedure in which laws are made and by whom. Some constitutions, especially codified constitutions, also act as limiters of state power, by establishing lines which a state's rulers cannot cross, such as fundamental rights. The Constitution of India is the longest written constitution of any country in the world, with 146,385 words in its English-language version, while the Constitution of Monaco is the shortest written constitution with 3,814 words. The Constitution of San Marino might be the world's oldest active written constitution, since some of its core documents have been in operation since 1600, while the Constitution of the United States is the oldest active codified constitution. The historical life expectancy of a constitution since 1789 is approximately 19 years. Etymology The term constitution comes through French from the Latin word constitutio, used for regulations and orders, such as the imperial enactments (constitutiones principis: edicta, mandata, decreta, rescripta). Later, the term was widely used in canon law for an important determination, especially a decree issued by the Pope, now referred to as an apostolic constitution. William Blackstone used the term for significant and egregious violations of public trust, of a nature and extent that the transgression would justify a revolutionary response. The term as used by Blackstone was not for a legal text, nor did he intend to include the later American concept of judicial review: "for that were to set the judicial power above that of the legislature, which would be subversive of all government". General features Generally, every modern written constitution confers specific powers on an organization or institutional entity, established upon the primary condition that it abides by the constitution's limitations. According to Scott Gordon, a political organization is constitutional to the extent that it "contain[s] institutionalized mechanisms of power control for the protection of the interests and liberties of the citizenry, including those that may be in the minority". Activities of officials within an organization or polity that fall within the constitutional or statutory authority of those officials are termed "within power" (or, in Latin, intra vires); if they do not, they are termed "beyond power" (or, in Latin, ultra vires). 
For example, a students' union may be prohibited as an organization from engaging in activities not concerning students; if the union becomes involved in non-student activities, these activities are considered to be ultra vires of the union's charter, and nobody would be compelled by the charter to follow them. An example from the constitutional law of sovereign states would be a provincial parliament in a federal state trying to legislate in an area that the constitution allocates exclusively to the federal parliament, such as ratifying a treaty. Action that appears to be beyond power may be judicially reviewed and, if found to be beyond power, must cease. Legislation that is found to be beyond power will be "invalid" and of no force; this applies to primary legislation, requiring constitutional authorization, and secondary legislation, ordinarily requiring statutory authorization. In this context, "within power", intra vires, "authorized" and "valid" have the same meaning; as do "beyond power", ultra vires, "not authorized" and "invalid". In most but not all modern states the constitution has supremacy over ordinary statutory law (see Uncodified constitution below); in such states when an official act is unconstitutional, i.e. it is not a power granted to the government by the constitution, that act is null and void, and the nullification is ab initio, that is, from inception, not from the date of the finding. It was never "law", even though, if it had been a statute or statutory provision, it might have been adopted according to the procedures for adopting legislation. Sometimes the problem is not that a statute is unconstitutional, but that the application of it is, on a particular occasion, and a court may decide that while there are ways it could be applied that are constitutional, that instance was not allowed or legitimate. In such a case, only that application may be ruled unconstitutional. Historically, the remedies for such violations have been petitions for common law writs, such as quo warranto. Scholars debate whether a constitution must necessarily be autochthonous, resulting from the nations "spirit". Hegel said "A constitution...is the work of centuries; it is the idea, the consciousness of rationality so far as that consciousness is developed in a particular nation." History and development Since 1789, along with the Constitution of the United States of America (U.S. Constitution), which is the oldest and shortest written constitution still in force, close to 800 constitutions have been adopted and subsequently amended around the world by independent states. In the late 18th century, Thomas Jefferson predicted that a period of 20 years would be the optimal time for any constitution to be still in force, since "the earth belongs to the living, and not to the dead". Indeed, according to recent studies, the average life of any new written constitution is around 19 years. However, a great number of constitutions do not last more than 10 years, and around 10% do not last more than one year, as was the case of the French Constitution of 1791. By contrast, some constitutions, notably that of the United States, have remained in force for several centuries, often without major revision for long periods of time. The most common reasons for these frequent changes are the political desire for an immediate outcome and the short time devoted to the constitutional drafting process. 
A study in 2009 showed that the average time taken to draft a constitution is around 16 months; however, there were also some extreme cases registered. For example, the Myanmar 2008 Constitution was secretly drafted over more than 17 years, whereas at the other extreme, during the drafting of Japan's 1946 Constitution, the bureaucrats drafted everything in no more than a week. Japan has the oldest unamended constitution in the world. The record for the shortest overall process of drafting, adoption, and ratification of a national constitution belongs to Romania's 1938 constitution, which installed a royal dictatorship in less than a month. Studies showed that the extreme cases, in which the constitution-making process either took too long or was extremely short, were typically non-democracies. Constitutional rights are not a specific characteristic of democratic countries. Non-democratic countries have constitutions, such as that of North Korea, which officially grants every citizen, among other rights, the freedom of expression. Pre-modern constitutions Ancient Excavations in modern-day Iraq by Ernest de Sarzec in 1877 found evidence of the earliest known code of justice, issued by the Sumerian king Urukagina of Lagash. Perhaps the earliest prototype for a law of government, this document itself has not yet been discovered; however, it is known that it allowed some rights to his citizens. For example, it is known that it relieved taxes for widows and orphans, and protected the poor from the usury of the rich. After that, many governments ruled by special codes of written laws. The oldest such document still known to exist seems to be the Code of Ur-Nammu of Ur (c. 2050 BC). Some of the better-known ancient law codes are the code of Lipit-Ishtar of Isin, the code of Hammurabi of Babylonia, the Hittite code, the Assyrian code, and Mosaic law. In 621 BC, a scribe named Draco codified the oral laws of the city-state of Athens; this code prescribed the death penalty for many offenses (thus creating the modern term "draconian" for very strict rules). In 594 BC, Solon, the ruler of Athens, created the new Solonian Constitution. It eased the burden of the workers, and determined that membership of the ruling class was to be based on wealth (plutocracy), rather than on birth (aristocracy). Cleisthenes again reformed the Athenian constitution and set it on a democratic footing in 508 BC. Aristotle (c. 350 BC) was the first to make a formal distinction between ordinary law and constitutional law, establishing ideas of constitution and constitutionalism, and attempting to classify different forms of constitutional government. The most basic definition he used to describe a constitution in general terms was "the arrangement of the offices in a state". In his works Constitution of Athens, Politics, and Nicomachean Ethics, he explores different constitutions of his day, including those of Athens, Sparta, and Carthage. He classified both what he regarded as good and what he regarded as bad constitutions, and came to the conclusion that the best constitution was a mixed system including monarchic, aristocratic, and democratic elements. He also distinguished between citizens, who had the right to participate in the state, and non-citizens and slaves, who did not. The Romans initially codified their constitution in 450 BC as the Twelve Tables. 
They operated under a series of laws that were added from time to time, but Roman law was not reorganized into a single code until the Codex Theodosianus (438 AD); later, in the Eastern Empire, the Codex repetitæ prælectionis (534) was highly influential throughout Europe. This was followed in the east by the Ecloga of Leo III the Isaurian (740) and the Basilica of Basil I (878). The Edicts of Ashoka established constitutional principles for the 3rd century BC Maurya king's rule in India. For constitutional principles almost lost to antiquity, see the code of Manu.
Early Middle Ages
Many of the Germanic peoples that filled the power vacuum left by the Western Roman Empire in the Early Middle Ages codified their laws. One of the first of these Germanic law codes to be written was the Visigothic Code of Euric (471 AD). This was followed by the Lex Burgundionum, applying separate codes for Germans and for Romans; the Pactus Alamannorum; and the Salic Law of the Franks, all written soon after 500. In 506, the Breviarium or "Lex Romana" of Alaric II, king of the Visigoths, adopted and consolidated the Codex Theodosianus together with assorted earlier Roman laws. Systems that appeared somewhat later include the Edictum Rothari of the Lombards (643), the Lex Visigothorum (654), the Lex Alamannorum (730), and the Lex Frisionum (c. 785). These continental codes were all composed in Latin, while Anglo-Saxon was used for those of England, beginning with the Code of Æthelberht of Kent (602). Around 893, Alfred the Great combined this and two other earlier Saxon codes, with various Mosaic and Christian precepts, to produce the Doom book code of laws for England. Japan's Seventeen-article constitution, written in 604, reportedly by Prince Shōtoku, is an early example of a constitution in Asian political history. Influenced by Buddhist teachings, the document focuses more on social morality than on institutions of government, and remains a notable early attempt at a government constitution. The Constitution of Medina (Ṣaḥīfat al-Madīna), also known as the Charter of Medina, was drafted by the Islamic prophet Muhammad after his flight (hijra) to Yathrib, where he became a political leader. It constituted a formal agreement between Muhammad and all of the significant tribes and families of Yathrib (later known as Medina), including Muslims, Jews, and pagans. The document was drawn up with the explicit concern of bringing to an end the bitter intertribal fighting between the clans of the Aws (Aus) and Khazraj within Medina. To this end, it instituted a number of rights and responsibilities for the Muslim, Jewish, and pagan communities of Medina, bringing them within the fold of one community – the Ummah. The precise dating of the Constitution of Medina remains debated, but generally scholars agree it was written shortly after the Hijra (622). In Wales, the Cyfraith Hywel (Law of Hywel) was codified by Hywel Dda c. 942–950.
Middle Ages after 1000
The Pravda Yaroslava, originally compiled by Yaroslav the Wise, the Grand Prince of Kiev, was granted to Great Novgorod around 1017, and in 1054 was incorporated into the Russkaya Pravda; it became the law for all of Kievan Rus'. It survived only in later editions of the 15th century. In England, Henry I's proclamation of the Charter of Liberties in 1100 bound the king for the first time in his treatment of the clergy and the nobility. This idea was extended and refined by the English barony when they forced King John to sign Magna Carta in 1215.
The most important single article of the Magna Carta, related to "habeas corpus", provided that the king was not permitted to imprison, outlaw, exile or kill anyone at a whim – there must be due process of law first. This guarantee appeared in Article 39 of the Magna Carta, and the provision became the cornerstone of English liberty after that point. The social contract in the original case was between the king and the nobility, but was gradually extended to all of the people. It led to the system of constitutional monarchy, with further reforms shifting the balance of power from the monarchy and nobility to the House of Commons. The Nomocanon of Saint Sava (Zakonopravilo), from 1219, was the first Serbian constitution. St. Sava's Nomocanon was a compilation of civil law, based on Roman law, and canon law, based on the Ecumenical Councils. Its basic purpose was to organize the functioning of the young Serbian kingdom and the Serbian church. Saint Sava began the work on the Serbian Nomocanon in 1208 while he was at Mount Athos, using The Nomocanon in Fourteen Titles, the Synopsis of Stefan the Efesian, the Nomocanon of John Scholasticus, and Ecumenical Council documents, which he modified with the canonical commentaries of Aristinos and Joannes Zonaras, local church meetings, rules of the Holy Fathers, the law of Moses, the translation of the Prohiron, and the Byzantine emperors' Novellae (most were taken from Justinian's Novellae). The Nomocanon was a completely new compilation of civil and canonical regulations, taken from Byzantine sources but completed and reformed by St. Sava to function properly in Serbia. Besides decrees that organized the life of the church, there are various norms regarding civil life; most of these were taken from the Prohiron. Legal transplants of Roman-Byzantine law became the basis of Serbian medieval law. The essence of the Zakonopravilo was based on the Corpus Iuris Civilis. Stefan Dušan, emperor of Serbs and Greeks, enacted Dušan's Code in Serbia, in two state congresses: in 1349 in Skopje and in 1354 in Serres. It regulated all social spheres, so it was the second Serbian constitution, after St. Sava's Nomocanon (Zakonopravilo). The Code was based on Roman-Byzantine law. The legal transplants within articles 171 and 172 of Dušan's Code, which regulated judicial independence, are notable. They were taken from the Byzantine code Basilika (book VII, 1, 16–17). In 1222, Hungarian King Andrew II issued the Golden Bull of 1222. Between 1220 and 1230, a Saxon administrator, Eike von Repgow, composed the Sachsenspiegel, which became the supreme law used in parts of Germany as late as 1900. Around 1240, the Coptic Egyptian Christian writer 'Abul Fada'il Ibn al-'Assal wrote the Fetha Negest in Arabic. Ibn al-'Assal took his laws partly from apostolic writings and Mosaic law and partly from the former Byzantine codes. There are a few historical records claiming that this law code was translated into Ge'ez and entered Ethiopia around 1450 in the reign of Zara Yaqob. Even so, its first recorded use in the function of a constitution (supreme law of the land) is with Sarsa Dengel beginning in 1563. The Fetha Negest remained the supreme law in Ethiopia until 1931, when a modern-style constitution was first granted by Emperor Haile Selassie I.
In the Principality of Catalonia, the Catalan constitutions were promulgated by the Court from 1283 (or even two centuries before, if the Usatges of Barcelona is considered part of the compilation of Constitutions) until 1716, when Philip V of Spain issued the Nueva Planta decrees, which put an end to the historical laws of Catalonia. These constitutions were usually made formally as a royal initiative, but their approval or repeal required the favorable vote of the Catalan Courts, the medieval antecedent of the modern Parliaments. These laws, like other modern constitutions, had preeminence over other laws, and they could not be contradicted by mere decrees or edicts of the king. The Kouroukan Founga was a 13th-century charter of the Mali Empire, reconstructed from oral tradition in 1988 by Siriman Kouyaté. The Golden Bull of 1356 was a decree issued by a Reichstag in Nuremberg headed by Emperor Charles IV that fixed, for a period of more than four hundred years, an important aspect of the constitutional structure of the Holy Roman Empire. In China, the Hongwu Emperor created and refined a document he called the Ancestral Injunctions (first published in 1375, revised twice more before his death in 1398). These rules served as a constitution for the Ming Dynasty for the next 250 years. The oldest written document still governing a sovereign nation today is that of San Marino. The Leges Statutae Republicae Sancti Marini was written in Latin and consists of six books. The first book, with 62 articles, establishes councils, courts, various executive officers, and the powers assigned to them. The remaining books cover criminal and civil law and judicial procedures and remedies. Written in 1600, the document was based upon the Statuti Comunali (Town Statute) of 1300, itself influenced by the Codex Justinianus, and it remains in force today. In 1392 the Carta de Logu, the legal code of the Giudicato of Arborea, was promulgated by the giudicessa Eleanor. It was in force in Sardinia until it was superseded by the code of Charles Felix in April 1827. The Carta was a work of great importance in Sardinian history. It was an organic, coherent, and systematic work of legislation encompassing the civil and penal law. The Gayanashagowa, the oral constitution of the Haudenosaunee nation, also known as the Great Law of Peace, established a system of governance as far back as 1190 AD (though perhaps as late as 1451) in which the Sachems, or tribal chiefs, of the Iroquois League's member nations made decisions on the basis of universal consensus of all chiefs following discussions that were initiated by a single nation. The position of Sachem descends through families and is allocated by the senior female clan heads, though, before the position is filled, candidacy is ultimately decided democratically by the community itself.
Modern constitutions
In 1634 the Kingdom of Sweden adopted the 1634 Instrument of Government, drawn up under the Lord High Chancellor of Sweden Axel Oxenstierna after the death of King Gustavus Adolphus; it can be seen as the first written constitution adopted by a modern state. In 1639, the Colony of Connecticut adopted the Fundamental Orders, which is considered the first North American constitution and the basis for every Connecticut constitution since; it is also the reason for Connecticut's nickname, "the Constitution State".
The English Protectorate that was set up by Oliver Cromwell after the English Civil War promulgated the first detailed written constitution adopted by a modern state; it was called the Instrument of Government. This formed the basis of government for the short-lived republic from 1653 to 1657 by providing a legal rationale for the increasing power of Cromwell after Parliament consistently failed to govern effectively. Most of the concepts and ideas embedded into modern constitutional theory, especially bicameralism, separation of powers, the written constitution, and judicial review, can be traced back to the experiments of that period. Drafted by Major-General John Lambert in 1653, the Instrument of Government included elements incorporated from an earlier document, the "Heads of Proposals", which had been agreed to by the Army Council in 1647 as a set of propositions intended to be a basis for a constitutional settlement after King Charles I was defeated in the First English Civil War. Charles had rejected the propositions, but before the start of the Second Civil War, the Grandees of the New Model Army had presented the Heads of Proposals as their alternative to the more radical Agreement of the People presented by the Agitators and their civilian supporters at the Putney Debates. On January 4, 1649, the Rump Parliament declared "that the people are, under God, the original of all just power; that the Commons of England, being chosen by and representing the people, have the supreme power in this nation". The Instrument of Government was adopted by Parliament on December 15, 1653, and Oliver Cromwell was installed as Lord Protector on the following day. The constitution set up a state council consisting of 21 members, while executive authority was vested in the office of "Lord Protector of the Commonwealth." This position was designated as a non-hereditary life appointment. The Instrument also required the calling of triennial Parliaments, with each sitting for at least five months. The Instrument of Government was replaced in May 1657 by England's second, and last, codified constitution, the Humble Petition and Advice, proposed by Sir Christopher Packe. The Petition offered hereditary monarchy to Oliver Cromwell, asserted Parliament's control over issuing new taxation, provided an independent council to advise the king, and safeguarded "triennial" meetings of Parliament. A modified version of the Humble Petition with the clause on kingship removed was ratified on 25 May. It finally met its demise with the death of Cromwell and the Restoration of the monarchy. Other examples of European constitutions of this era were the Corsican Constitution of 1755 and the Swedish Constitution of 1772. All of the British colonies in North America that were to become the 13 original United States adopted their own constitutions in 1776 and 1777, during the American Revolution (and before the later Articles of Confederation and United States Constitution), with the exceptions of Massachusetts, Connecticut and Rhode Island. The Commonwealth of Massachusetts adopted its Constitution in 1780, the oldest still-functioning constitution of any U.S. state, while Connecticut and Rhode Island officially continued to operate under their old colonial charters until they adopted their first state constitutions in 1818 and 1843, respectively.
Democratic constitutions
What is sometimes called the "enlightened constitution" model was developed by philosophers of the Age of Enlightenment such as Thomas Hobbes, Jean-Jacques Rousseau, and John Locke. The model proposed that constitutional governments should be stable, adaptable, accountable, open and should represent the people (i.e., support democracy). The Agreements and Constitutions of Laws and Freedoms of the Zaporizian Host was written in 1710 by Pylyp Orlyk, hetman of the Zaporozhian Host. It was written to establish a free Zaporozhian-Ukrainian Republic, with the support of Charles XII of Sweden. It is notable in that it established a democratic standard for the separation of powers in government between the legislative, executive, and judiciary branches, well before the publication of Montesquieu's Spirit of the Laws. This constitution also limited the executive authority of the hetman, and established a democratically elected Cossack parliament called the General Council. However, Orlyk's project for an independent Ukrainian state never materialized, and his constitution, written in exile, never went into effect. The Corsican Constitutions of 1755 and 1794 were inspired by Jean-Jacques Rousseau. The latter introduced universal suffrage for property owners. The Swedish constitution of 1772 was enacted under King Gustavus III and was inspired by Montesquieu's doctrine of the separation of powers. The king also cherished other Enlightenment ideas (as an enlightened despot): he abolished torture, liberalized agricultural trade, diminished the use of the death penalty and instituted a form of religious freedom. The constitution was commended by Voltaire. The United States Constitution, ratified June 21, 1788, was influenced by the writings of Polybius, Locke, Montesquieu, and others. The document became a benchmark for republicanism and codified constitutions written thereafter. The Polish–Lithuanian Commonwealth Constitution was passed on May 3, 1791. Its draft was developed by the leading minds of the Enlightenment in Poland, such as King Stanislaw August Poniatowski, Stanisław Staszic, Scipione Piattoli, Julian Ursyn Niemcewicz, Ignacy Potocki and Hugo Kołłątaj. It was adopted by the Great Sejm and is considered the first constitution of its kind in Europe and the world's second-oldest one, after the American Constitution. Another landmark document was the French Constitution of 1791. The 1811 Constitution of Venezuela was the first constitution of Venezuela and of Latin America, promulgated and drafted by Cristóbal Mendoza and Juan Germán Roscio in Caracas. It established a federal government but was repealed one year later. On March 19, the Spanish Constitution of 1812 was ratified by a parliament gathered in Cadiz, the only Spanish continental city that was safe from French occupation. The Spanish Constitution served as a model for other liberal constitutions of several South European and Latin American nations, for example, the Portuguese Constitution of 1822, the constitutions of various Italian states during the Carbonari revolts (e.g., in the Kingdom of the Two Sicilies), the Norwegian constitution of 1814, or the Mexican Constitution of 1824. In Brazil, the Constitution of 1824 confirmed the choice of monarchy as the political system after Brazilian independence. The leader of the national emancipation process was the Portuguese prince Pedro I, elder son of the king of Portugal. Pedro was crowned in 1822 as the first emperor of Brazil.
The country was ruled as a constitutional monarchy until 1889, when it adopted the republican model. In Denmark, as a result of the Napoleonic Wars, the absolute monarchy lost its personal possession of Norway to Sweden. Sweden had already enacted its 1809 Instrument of Government, which saw the division of power between the Riksdag, the king and the judiciary. However, the Norwegians managed to adopt a radically democratic and liberal constitution in 1814, adopting many facets from the American constitution and the revolutionary French ones, but maintaining a hereditary monarch limited by the constitution, like the Spanish one. The first Swiss Federal Constitution was put in force in September 1848 (with official revisions in 1878, 1891, 1949, 1971, 1982 and 1999). The Serbian revolution initially led to a proclamation of a proto-constitution in 1811; the full-fledged Constitution of Serbia followed a few decades later, in 1835. The first Serbian constitution (Sretenjski ustav) was adopted at the national assembly in Kragujevac on February 15, 1835. The Constitution of Canada came into force on July 1, 1867, as the British North America Act, an act of the British Parliament. Over a century later, the BNA Act was patriated to the Canadian Parliament and augmented with the Canadian Charter of Rights and Freedoms. Apart from the Constitution Acts, 1867 to 1982, Canada's constitution also has unwritten elements based in common law and convention.
Principles of constitutional design
After tribal people first began to live in cities and establish nations, many of these nations functioned according to unwritten customs, while some developed autocratic, even tyrannical, monarchs who ruled by decree or mere personal whim. Such rule led some thinkers to take the position that what mattered was not the design of governmental institutions and operations, as much as the character of the rulers. This view can be seen in Plato, who called for rule by "philosopher-kings". Later writers, such as Aristotle, Cicero and Plutarch, would examine designs for government from a legal and historical standpoint. The Renaissance brought a series of political philosophers who wrote implied criticisms of the practices of monarchs and sought to identify principles of constitutional design that would be likely to yield more effective and just governance from their viewpoints. This began with a revival of the Roman concept of the law of nations and its application to relations among nations; these writers sought to establish customary "laws of war and peace" to ameliorate wars and make them less likely. This led to considerations of what authority monarchs or other officials do and do not have, from where that authority derives, and the remedies for the abuse of such authority. A seminal juncture in this line of discourse arose in England from the Civil War, the Cromwellian Protectorate, the writings of Thomas Hobbes, Samuel Rutherford, the Levellers, John Milton, and James Harrington, leading to the debate between Robert Filmer, arguing for the divine right of monarchs, on the one side, and on the other, Henry Neville, James Tyrrell, Algernon Sidney, and John Locke. What arose from the latter was a concept of government erected on the foundations of, first, a state of nature governed by natural laws, and then a state of society, established by a social contract or compact, which brings with it underlying natural or social laws, before governments are formally established on them as foundations.
Along the way, several writers examined how the design of government was important, even if the government were headed by a monarch. They also classified various historical examples of governmental designs, typically into democracies, aristocracies, or monarchies, and considered how just and effective each tended to be and why, and how the advantages of each might be obtained by combining elements of each into a more complex design that balanced competing tendencies. Some, such as Montesquieu, also examined how the functions of government, such as legislative, executive, and judicial, might appropriately be separated into branches. The prevailing theme among these writers was that the design of constitutions is not completely arbitrary or a matter of taste. They generally held that there are underlying principles of design that constrain all constitutions for every polity or organization. Each built on the ideas of those before concerning what those principles might be. The later writings of Orestes Brownson would try to explain what constitutional designers were trying to do. According to Brownson there are, in a sense, three "constitutions" involved: the first is the constitution of nature, which includes all of what was called "natural law". The second is the constitution of society, an unwritten and commonly understood set of rules for the society formed by a social contract before it establishes a government, by which it establishes the third, a constitution of government. The second would include such elements as the making of decisions by public conventions called by public notice and conducted by established rules of procedure. Each constitution must be consistent with, and derive its authority from, the ones before it, as well as from a historical act of society formation or constitutional ratification. Brownson argued that a state is a society with effective dominion over a well-defined territory, that consent to a well-designed constitution of government arises from presence on that territory, and that it is possible for provisions of a written constitution of government to be "unconstitutional" if they are inconsistent with the constitutions of nature or society. Brownson argued that it is not ratification alone that makes a written constitution of government legitimate, but that it must also be competently designed and applied. Other writers have argued that such considerations apply not only to all national constitutions of government, but also to the constitutions of private organizations; it is not an accident, they argue, that the constitutions that tend to satisfy their members contain certain elements as a minimum, or that their provisions tend to become very similar as they are amended in the light of experience with their use. Provisions that give rise to certain kinds of questions are seen to need additional provisions for how to resolve those questions, and provisions that offer no course of action may best be omitted and left to policy decisions. Provisions that conflict with what Brownson and others discern to be the underlying "constitutions" of nature and society tend to be difficult or impossible to execute, or to lead to unresolvable disputes. Constitutional design has been treated as a kind of metagame in which play consists of finding the best design and provisions for a written constitution that will be the rules for the game of government, and that will be most likely to optimize a balance of the utilities of justice, liberty, and security. An example is the metagame Nomic.
Political economy theory regards constitutions as coordination devices that help citizens to prevent rulers from abusing power. If the citizenry can coordinate a response to police government officials in the face of a constitutional fault, then the government has an incentive to honor the rights that the constitution guarantees. An alternative view considers that constitutions are not enforced by the citizens at large, but rather by the administrative powers of the state. Because rulers cannot themselves implement their policies, they need to rely on a set of organizations (armies, courts, police agencies, tax collectors) to implement them. In this position, these organizations can directly sanction the government by refusing to cooperate, disabling the authority of the rulers. Therefore, constitutions could be characterized by a self-enforcing equilibrium between the rulers and powerful administrators.
Key features
Most commonly, the term constitution refers to a set of rules and principles that define the nature and extent of government. Most constitutions seek to regulate the relationship between institutions of the state, in a basic sense the relationship between the executive, legislature and the judiciary, but also the relationship of institutions within those branches. For example, executive branches can be divided into a head of government, government departments/ministries, executive agencies and a civil service/administration. Most constitutions also attempt to define the relationship between individuals and the state, and to establish the broad rights of individual citizens. It is thus the most basic law of a territory from which all the other laws and rules are hierarchically derived; in some territories it is in fact called "Basic Law".
Classification
Codification
A fundamental classification is codification or lack of codification. A codified constitution is one that is contained in a single document, which is the single source of constitutional law in a state. An uncodified constitution is one that is not contained in a single document, consisting of several different sources, which may be written or unwritten; see constitutional convention.
Codified constitution
Most states in the world have codified constitutions. Codified constitutions are often the product of some dramatic political change, such as a revolution. The process by which a country adopts a constitution is closely tied to the historical and political context driving this fundamental change. The legitimacy (and often the longevity) of codified constitutions has often been tied to the process by which they are initially adopted, and some scholars have pointed out that high constitutional turnover within a given country may itself be detrimental to separation of powers and the rule of law. States that have codified constitutions normally give the constitution supremacy over ordinary statute law. That is, if there is any conflict between a legal statute and the codified constitution, all or part of the statute can be declared ultra vires by a court, and struck down as unconstitutional. In addition, exceptional procedures are often required to amend a constitution. These procedures may include: convocation of a special constituent assembly or constitutional convention, requiring a supermajority of legislators' votes, approval in two terms of parliament, the consent of regional legislatures, a referendum process, and/or other procedures that make amending a constitution more difficult than passing a simple law.
Constitutions may also provide that their most basic principles can never be abolished, even by amendment. If a formally valid amendment of a constitution infringes principles that are protected against any amendment, it may constitute so-called unconstitutional constitutional law. Codified constitutions normally consist of a ceremonial preamble, which sets forth the goals of the state and the motivation for the constitution, and several articles containing the substantive provisions. The preamble, which is omitted in some constitutions, may contain a reference to God and/or to fundamental values of the state such as liberty, democracy or human rights. In ethnic nation-states such as Estonia, the mission of the state can be defined as preserving a specific nation, language and culture.
Uncodified constitution
Only two sovereign states, New Zealand and the United Kingdom, have wholly uncodified constitutions. The Basic Laws of Israel have since 1950 been intended to be the basis for a constitution, but as of 2017 such a constitution had not yet been drafted. The various Laws are considered to have precedence over other laws, and give the procedure by which they can be amended, typically by a simple majority of members of the Knesset (parliament). Uncodified constitutions are the product of an "evolution" of laws and conventions over centuries (such as in the Westminster System that developed in Britain). In contrast to codified constitutions, uncodified constitutions include both written sources – e.g. constitutional statutes enacted by the Parliament – and unwritten sources – constitutional conventions, observation of precedents, royal prerogatives, customs and traditions, such as holding general elections on Thursdays; together these constitute British constitutional law.
Mixed constitutions
Some constitutions are largely, but not wholly, codified. For example, most of Australia's fundamental political principles and regulations concerning the relationship between branches of government, and concerning the government and the individual, are codified in a single document, the Constitution of the Commonwealth of Australia. However, the presence of statutes with constitutional significance, namely the Statute of Westminster, as adopted by the Commonwealth in the Statute of Westminster Adoption Act 1942, and the Australia Act 1986, means that Australia's constitution is not contained in a single constitutional document. This means that the constitution of Australia is not fully codified; it also includes constitutional conventions and is thus partially unwritten. The Constitution of Canada resulted from the passage of several British North America Acts from 1867 to the Canada Act 1982, the act that formally severed the British Parliament's ability to amend the Canadian constitution. The Canadian constitution includes specific legislative acts as mentioned in section 52(2) of the Constitution Act, 1982. However, some documents not explicitly listed in section 52(2) are also considered constitutional documents in Canada, entrenched via reference, such as the Proclamation of 1763. Although Canada's constitution includes a number of different statutes, amendments, and references, some constitutional rules that exist in Canada are derived from unwritten sources and constitutional conventions. The terms written constitution and codified constitution are often used interchangeably, as are unwritten constitution and uncodified constitution, although this usage is technically inaccurate.
A codified constitution is a single document; states that do not have such a document have uncodified, but not entirely unwritten, constitutions, since much of an uncodified constitution is usually written in laws such as the Basic Laws of Israel and the Parliament Acts of the United Kingdom. Uncodified constitutions largely lack protection against amendment by the government of the time. For example, the U.K. Fixed-term Parliaments Act 2011 legislated by simple majority for strictly fixed-term parliaments; until then the ruling party could call a general election at any convenient time up to the maximum term of five years. This change would require a constitutional amendment in most nations.
Amendments
A constitutional amendment is a modification of the constitution of a polity, organization or other type of entity. Amendments are often interwoven into the relevant sections of an existing constitution, directly altering the text. Conversely, they can be appended to the constitution as supplemental additions (codicils), thus changing the frame of government without altering the existing text of the document. Most constitutions require that amendments cannot be enacted unless they have passed a special procedure that is more stringent than that required of ordinary legislation.
Methods of amending
Some countries provide for more than one method of amendment, since alternative procedures may be used.
Entrenched clauses
An entrenched clause or entrenchment clause of a basic law or constitution is a provision that makes certain amendments either more difficult or impossible to pass. Overriding an entrenched clause may require a supermajority, a referendum, or the consent of the minority party. For example, the U.S. Constitution has an entrenched clause that prohibits abolishing equal suffrage of the States within the Senate without their consent. The term eternity clause is used in a similar manner in the constitutions of the Czech Republic, Germany, Turkey, Greece, Italy, Morocco, the Islamic Republic of Iran, Brazil and Norway. India's constitution does not contain specific provisions on entrenched clauses, but the basic structure doctrine makes it impossible for certain basic features of the Constitution to be altered or destroyed by the Parliament of India through an amendment. The Constitution of Colombia also lacks explicit entrenched clauses, but has a similar substantive limit on amending its fundamental principles through judicial interpretations.
Constitutional rights and duties
Constitutions include various rights and duties. These include the following:
Duty to pay taxes
Duty to serve in the military
Duty to work
Right to vote
Freedom of assembly
Freedom of association
Freedom of expression
Freedom of movement
Freedom of thought
Freedom of the press
Freedom of religion
Right to dignity
Right to civil marriage
Right to petition
Right to academic freedom
Right to bear arms
Right to conscientious objection
Right to a fair trial
Right to personal development
Right to start a family
Right to information
Right to marriage
Right of revolution
Right to privacy
Right to protect one's reputation
Right to renounce citizenship
Rights of children
Rights of debtors
Separation of powers
Constitutions usually explicitly divide power between various branches of government. The standard model, described by the Baron de Montesquieu, involves three branches of government: executive, legislative and judicial.
Some constitutions include additional branches, such as an auditing branch. Constitutions vary extensively as to the degree of separation of powers between these branches.
Accountability
In presidential and semi-presidential systems of government, department secretaries/ministers are accountable to the president, who has patronage powers to appoint and dismiss ministers. The president is accountable to the people in an election. In parliamentary systems, Cabinet Ministers are accountable to Parliament, but it is the prime minister who appoints and dismisses them. In the case of the United Kingdom and other countries with a monarchy, it is the monarch who appoints and dismisses ministers, on the advice of the prime minister. In turn, the prime minister will resign if the government loses the confidence of the parliament (or a part of it). Confidence can be lost if the government loses a vote of no confidence or, depending on the country, loses a particularly important vote in parliament, such as a vote on the budget. When a government loses confidence, it stays in office until a new government is formed, something which normally, but not necessarily, requires the holding of a general election.
Other independent institutions
Other independent institutions which some constitutions have set out include a central bank, an anti-corruption commission, an electoral commission, a judicial oversight body, a human rights commission, a media commission, an ombudsman, and a truth and reconciliation commission.
Power structure
Constitutions also establish where sovereignty is located in the state. There are three basic types of distribution of sovereignty according to the degree of centralisation of power: unitary, federal, and confederal. The distinction is not absolute. In a unitary state, sovereignty resides in the state itself, and the constitution determines this. The territory of the state may be divided into regions, but they are not sovereign and are subordinate to the state. In the UK, the constitutional doctrine of Parliamentary sovereignty dictates that sovereignty is ultimately contained at the centre. Some powers have been devolved to Northern Ireland, Scotland, and Wales (but not England). Some unitary states (Spain is an example) devolve more and more power to sub-national governments until the state functions in practice much like a federal state. A federal state has a central structure with at most a small amount of territory mainly containing the institutions of the federal government, and several regions (called states, provinces, etc.) which compose the territory of the whole state. Sovereignty is divided between the centre and the constituent regions. The constitutions of Canada and the United States establish federal states, with power divided between the federal government and the provinces or states. Each of the regions may in turn have its own constitution (of unitary nature). A confederal state again comprises several regions, but the central structure has only limited coordinating power, and sovereignty is located in the regions. Confederal constitutions are rare, and there is often dispute as to whether so-called "confederal" states are actually federal. To some extent, a group of states which do not constitute a federation as such may, by treaties and accords, give up parts of their sovereignty to a supranational entity.
For example, the countries constituting the European Union have agreed to abide by some Union-wide measures which restrict their absolute sovereignty in some ways, e.g., the use of the metric system of measurement instead of the national units previously used.
State of emergency
Many constitutions allow the declaration under exceptional circumstances of some form of state of emergency during which some rights and guarantees are suspended. This provision can be and has been abused to allow a government to suppress dissent without regard for human rights – see the article on state of emergency.
Facade constitutions
Italian political theorist Giovanni Sartori noted the existence of national constitutions which are a facade for authoritarian sources of power. While such documents may express respect for human rights or establish an independent judiciary, they may be ignored when the government feels threatened, or never put into practice. An extreme example was the Constitution of the Soviet Union, which on paper supported freedom of assembly and freedom of speech; however, citizens who transgressed unwritten limits were summarily imprisoned. The example demonstrates that the protections and benefits of a constitution are ultimately provided not through its written terms but through deference by government and society to its principles. A constitution may change from being real to a facade and back again as democratic and autocratic governments succeed each other.
Constitutional courts
Constitutions are often, but by no means always, protected by a legal body whose job it is to interpret those constitutions and, where applicable, declare void executive and legislative acts which infringe the constitution. In some countries, such as Germany, this function is carried out by a dedicated constitutional court which performs this (and only this) function. In other countries, such as Ireland, the ordinary courts may perform this function in addition to their other responsibilities. Elsewhere, such as in the United Kingdom, the concept of declaring an act to be unconstitutional does not exist. A constitutional violation is an action or legislative act that is judged by a constitutional court to be contrary to the constitution, that is, unconstitutional. An example of a constitutional violation by the executive would be a public office holder who acts outside the powers granted to that office by a constitution. An example of a constitutional violation by the legislature is an attempt to pass a law that would contradict the constitution, without first going through the proper constitutional amendment process. Some countries, mainly those with uncodified constitutions, have no such courts at all. For example, the United Kingdom has traditionally operated under the principle of parliamentary sovereignty, under which the laws passed by the United Kingdom Parliament could not be questioned by the courts.
See also
Basic law, equivalent in some countries, often for a temporary constitution
Apostolic constitution (a class of Catholic Church documents)
Consent of the governed
Constitution of the Roman Republic
Constitutional amendment
Constitutional court
Constitutional crisis
Constitutional economics
Constitutionalism
Corporate constitutional documents
International constitutional law
Judicial activism
Judicial restraint
Judicial review
Philosophy of law
Rule of law
Rule according to higher law
Judicial philosophies of constitutional interpretation (note: generally specific to United States constitutional law)
List of national constitutions
Originalism
Strict constructionism
Textualism
Proposed European Union constitution
Treaty of Lisbon (adopts same changes, but without constitutional name)
United Nations Charter
Further reading
Zachary Elkins and Tom Ginsburg. 2021. "What Can We Learn from Written Constitutions?" Annual Review of Political Science.
External links
Constitute, an indexed and searchable database of all constitutions in force
Amendments Project
Dictionary of the History of Ideas: Constitutionalism
Constitutional Law, "Constitutions, bibliography, links"
International Constitutional Law: English translations of various national constitutions
United Nations Rule of Law: Constitution-making, on the relationship between constitution-making, the rule of law and the United Nations.
Constitution | Theories, Features, Practices, & Facts | Britannica
Constitutionalism | Stanford Encyclopedia of Philosophy
Constitutions and Constitutionalism | Encyclopedia.com
5254
https://en.wikipedia.org/wiki/Common%20law
Common law
In law, common law (also known as judicial precedent, judge-made law, or case law) is the body of law created by judges and similar quasi-judicial tribunals by virtue of being stated in written opinions. The defining characteristic of common law is that it arises as precedent. Common law courts look to the past decisions of courts to synthesize the legal principles of past cases. Stare decisis, the principle that cases should be decided according to consistent principled rules so that similar facts will yield similar results, lies at the heart of all common law systems. If a court finds that a similar dispute to the present one has been resolved in the past, the court is generally bound to follow the reasoning used in the prior decision. If, however, the court finds that the current dispute is fundamentally distinct from all previous cases (a "matter of first impression"), and legislative statutes are either silent or ambiguous on the question, judges have the authority and duty to resolve the issue. The opinion that a common law judge gives agglomerates with past decisions as precedent to bind future judges and litigants. The common law, so named because it was "common" to all the king's courts across England, originated in the practices of the courts of the English kings in the centuries following the Norman Conquest in 1066. The British Empire later spread the English legal system to its colonies, many of which retain the common law system today. These common law systems are legal systems that give great weight to judicial precedent, and to the style of reasoning inherited from the English legal system. The term "common law", referring to the body of law made by the judiciary, is often distinguished from statutory law and regulations, which are laws adopted by the legislature and executive respectively. In legal systems that follow the common law, judicial precedent stands in contrast to and on equal footing with statutes. The other major legal system used by countries is the civil law, which codifies its legal principles into legal codes and does not treat judicial opinions as binding. Today, one-third of the world's population lives in common law jurisdictions or in mixed legal systems that combine the common law with the civil law, including Antigua and Barbuda, Australia, Bahamas, Bangladesh, Barbados, Belize, Botswana, Burma, Cameroon, Canada (both the federal system and all its provinces except Quebec), Cyprus, Dominica, Fiji, Ghana, Grenada, Guyana, Hong Kong, India, Ireland, Israel, Jamaica, Kenya, Liberia, Malaysia, Malta, Marshall Islands, Micronesia, Namibia, Nauru, New Zealand, Nigeria, Pakistan, Palau, Papua New Guinea, Philippines, Sierra Leone, Singapore, South Africa, Sri Lanka, Trinidad and Tobago, the United Kingdom (including its overseas territories such as Gibraltar), the United States (both the federal system and 49 of its 50 states), and Zimbabwe. Definitions The term common law has many connotations. The first three set out here are the most-common usages within the legal community. Other connotations from past centuries are sometimes seen and are sometimes heard in everyday speech. Common law as opposed to statutory law and regulatory law The first definition of "common law" given in Black's Law Dictionary, 10th edition, 2014, is "The body of law derived from judicial decisions, rather than from statutes or constitutions; [synonym] CASELAW, [contrast] STATUTORY LAW". 
This usage is given as the first definition in modern legal dictionaries, is characterized as the "most common" usage among legal professionals, and is the usage frequently seen in decisions of courts. In this connotation, "common law" distinguishes the authority that promulgated a law. For example, the law in most Anglo-American jurisdictions includes "statutory law" enacted by a legislature, "regulatory law" (in the U.S.) or "delegated legislation" (in the U.K.) promulgated by executive branch agencies pursuant to delegation of rule-making authority from the legislature, and common law or "case law", i.e., decisions issued by courts (or quasi-judicial tribunals within agencies). This first connotation can itself be subdivided further. Publication of decisions, and indexing, is essential to the development of common law, and thus governments and private publishers publish law reports. While all decisions in common law jurisdictions are precedent (at varying levels and scope, as discussed throughout the article on precedent), some become "leading cases" or "landmark decisions" that are cited especially often.
Common law legal systems as opposed to civil law legal systems
Black's Law Dictionary, 10th ed., definition 2, differentiates "common law" jurisdictions and legal systems from "civil law" or "code" jurisdictions. Common law systems place great weight on court decisions, which are considered "law" with the same force of law as statutes—for nearly a millennium, common law courts have had the authority to make law where no legislative statute exists, and statutes mean what courts interpret them to mean. By contrast, in civil law jurisdictions (the legal tradition that prevails, or is combined with common law, in Europe and most non-Islamic, non-common law countries), courts lack authority to act if there is no statute. Civil law judges tend to give less weight to judicial precedent, which means a civil law judge deciding a given case has more freedom to interpret the text of a statute independently (compared to a common law judge in the same circumstances), and therefore less predictably. For example, the Napoleonic Code expressly forbade French judges to pronounce general principles of law. The role of providing overarching principles, which in common law jurisdictions is provided in judicial opinions, in civil law jurisdictions is filled by giving greater weight to scholarly literature, as explained below. Common law systems trace their history to England, while civil law systems trace their history through the Napoleonic Code back to the Corpus Juris Civilis of Roman law. A few Western countries use other legal traditions, such as Roman-Dutch law or Scots law.
Law as opposed to equity
Black's Law Dictionary, 10th ed., definition 4, differentiates "common law" (or just "law") from "equity". Before 1873, England had two complementary court systems: courts of "law" which could only award money damages and recognized only the legal owner of property, and courts of "equity" (courts of chancery) that could issue injunctive relief (that is, a court order to a party to do something, give something to someone, or stop doing something) and recognized trusts of property. This split propagated to many of the colonies, including the United States. The states of Delaware, Mississippi, South Carolina, and Tennessee continue to have divided Courts of Law and Courts of Chancery. In New Jersey, the appellate courts are unified, but the trial courts are organized into a Chancery Division and a Law Division.
For most purposes, the U.S. federal system and most states have merged the two court systems. Additionally, even before the separate courts were merged, most courts were permitted to apply both law and equity, though under potentially different procedural law. Nonetheless, the historical distinction between "law" and "equity" remains important today when the case involves issues such as the following:
Categorizing and prioritizing rights to property—for example, the same article of property often has a "legal title" and an "equitable title", and these two groups of ownership rights may be held by different people.
In the United States, determining whether the Seventh Amendment's right to a jury trial applies (a determination of a fact necessary to resolution of a "common law" claim) as opposed to whether the issue will be decided by a judge (issues of what the law is, and all issues relating to equity).
The standard of review and degree of deference given by an appellate tribunal to the decision of the lower tribunal under review (issues of law are reviewed de novo, that is, "as if new" from scratch by the appellate tribunal, while most issues of equity are reviewed for "abuse of discretion", that is, with great deference to the tribunal below).
The remedies available and rules of procedure to be applied.
Courts of equity rely on common law (in the sense of this first connotation) principles of binding precedent.
Archaic meanings and historical uses
In addition, there are several historical (but now archaic) uses of the term that, while no longer current, provide background context that assists in understanding the meaning of "common law" today. In one usage that is now archaic, but that gives insight into the history of the common law, "common law" referred to the pre-Christian system of law, imported by the pre-literate Saxons to England and upheld into their historical times until 1066, when the Norman conquest overthrew the last Saxon king—i.e., before (it was supposed) there was any consistent, written law to be applied. "Common law" as the term is used today in common law countries contrasts with the ius commune. While historically the ius commune became a secure point of reference in continental European legal systems, in England it was not a point of reference at all. The English Court of Common Pleas dealt with lawsuits in which the monarch had no interest, i.e., between commoners. Black's Law Dictionary, 10th ed., definition 3 is "General law common to a country as a whole, as opposed to special law that has only local application." From at least the 11th century and continuing for several centuries, there were several different circuits in the royal court system, served by itinerant judges who would travel from town to town dispensing the king's justice in "assizes". The term "common law" was used to describe the law held in common between the circuits and the different stops in each circuit. The more widely a particular law was recognized, the more weight it held, whereas purely local customs were generally subordinate to law recognized in a plurality of jurisdictions.
Misconceptions and imprecise nonlawyer usages
As used by non-lawyers in popular culture, the term "common law" connotes law based on ancient and unwritten universal custom of the people. The "ancient unwritten universal custom" view was the foundation of the first treatises by Blackstone and Coke, and was universal among lawyers and judges from the earliest times to the mid-19th century.
However, for 100 years, lawyers and judges have recognized that the "ancient unwritten universal custom" view does not accord with the facts of the origin and growth of the law, and it is not held within the legal profession today. Under the modern view, "common law" is not grounded in "custom" or "ancient usage", but rather acquires force of law instantly (without the delay implied by the term "custom" or "ancient") when pronounced by a higher court, because and to the extent the proposition is stated in judicial opinion. From the earliest times through the late 19th century, the dominant theory was that the common law was a pre-existent law or system of rules, a social standard of justice that existed in the habits, customs, and thoughts of the people. Under this older view, the legal profession considered it no part of a judge's duty to make new or change existing law, but only to expound and apply the old. By the early 20th century, largely at the urging of Oliver Wendell Holmes (as discussed throughout this article), this view had fallen into the minority: Holmes pointed out that the older view worked undesirable and unjust results, and hampered a proper development of the law. In the century since Holmes, the dominant understanding has been that common law "decisions are themselves law, or rather the rules which the courts lay down in making the decisions constitute law". Holmes wrote in a 1917 opinion, "The common law is not a brooding omnipresence in the sky, but the articulate voice of some sovereign or quasi-sovereign that can be identified." Among legal professionals (lawyers and judges), the change in understanding occurred in the late 19th and early 20th centuries (as explained later in this article), though lay (non-legal) dictionaries were decades behind in recognizing the change. The reality of the modern view, and the implausibility of the old "ancient unwritten universal custom" view, can be seen in practical operation: under the pre-1870 view, (a) the "common law" should have been absolutely static over centuries (but it evolved), (b) jurisdictions could not logically diverge from each other (but nonetheless did and do today), (c) a new decision logically needed to operate retroactively (but did not), and (d) there was no standard to decide which English medieval customs should be "law" and which should not. All of these tensions resolve under the modern view: (a) the common law evolved to meet the needs of the times (e.g., trial by combat passed out of the law by the 15th century), (b) the common law in different jurisdictions may diverge, (c) new decisions may (but need not) have retroactive operation, and (d) court decisions are effective immediately as they are issued, not years later, or after they become "custom", and questions of what "custom" might have been at some "ancient" time are simply irrelevant. Common law, as the term is used among lawyers in the present day, is not grounded in "custom" or "ancient usage". Common law acquires force of law because it is pronounced by a court (or similar tribunal) in an opinion, not by action or opinion of "the people" or "custom." Common law is not frozen in time, and no longer beholden to 11th-, 13th-, or 17th-century English law. Rather, the common law evolves daily and immediately as courts issue precedential decisions (as explained later in this article), and all parties in the legal system (courts, lawyers, and all others) are responsible for up-to-date knowledge.
There is no fixed reference point (for example the 11th or 18th centuries) for the definition of "common law", except in a handful of isolated contexts. Much of what was "customary" in the 13th or 17th or 18th century is no part of the common law today; much of the common law today has no antecedent in those earlier centuries. The common law is not "unwritten". Common law exists in writing—as must any law that is to be applied consistently—in the written decisions of judges. Common law is not the product of "universal consent". Rather, the common law is often anti-majoritarian. People using pseudolegal tactics and arguments have frequently claimed to base their arguments on common law; notably, the radical anti-government sovereign citizens and freemen on the land movements, who deny the legitimacy of their countries' legal systems, base their beliefs on idiosyncratic interpretations of common law. "Common law" has also been used as an alibi by groups such as the far-right American Patriot movement for setting up kangaroo courts in order to conduct vigilante actions or intimidate their opponents.
Basic principles of common law
Common law adjudication
In a common law jurisdiction, several stages of research and analysis are required to determine "what the law is" in a given situation. First, one must ascertain the facts. Then, one must locate any relevant statutes and cases. Then one must extract the principles, analogies and statements by various courts of what they consider important to determine how the next court is likely to rule on the facts of the present case. Later decisions, and decisions of higher courts or legislatures, carry more weight than earlier cases and those of lower courts. Finally, one integrates all the lines drawn and reasons given, and determines "what the law is". Then, one applies that law to the facts. In practice, common law systems are considerably more complicated than the simplified system described above. The decisions of a court are binding only in a particular jurisdiction, and even within a given jurisdiction, some courts have more power than others. For example, in most jurisdictions, decisions by appellate courts are binding on lower courts in the same jurisdiction, and on future decisions of the same appellate court, but decisions of lower courts are only non-binding persuasive authority. Interactions between common law, constitutional law, statutory law and regulatory law also give rise to considerable complexity.
Common law evolves to meet changing social needs and improved understanding
Oliver Wendell Holmes Jr. cautioned that "the proper derivation of general principles in both common and constitutional law ... arise gradually, in the emergence of a consensus from a multitude of particularized prior decisions". Justice Cardozo noted that the "common law does not work from pre-established truths of universal and inflexible validity to conclusions derived from them deductively", but "[i]ts method is inductive, and it draws its generalizations from particulars". The common law is more malleable than statutory law. First, common law courts are not absolutely bound by precedent, but can (when extraordinarily good reason is shown) reinterpret and revise the law, without legislative intervention, to adapt to new trends in political, legal and social philosophy.
Second, the common law evolves through a series of gradual steps that work out the details over time, so that over a decade or more, the law can change substantially but without a sharp break, thereby reducing disruptive effects. In contrast to common law incrementalism, the legislative process is very difficult to get started, as legislatures tend to delay action until a situation is intolerable. For these reasons, legislative changes tend to be large, jarring and disruptive (sometimes positively, sometimes negatively, and sometimes with unintended consequences).

One example of the gradual change that typifies evolution of the common law is the gradual change in liability for negligence. The traditional common law rule through most of the 19th century was that a plaintiff could not recover for a defendant's negligent production or distribution of a harmful instrumentality unless the two were in privity of contract. Thus, only the immediate purchaser could recover for a product defect, and if a product was built up out of parts from parts manufacturers, the ultimate buyer could not recover for injury caused by a defect in one of those parts. In an 1842 English case, Winterbottom v. Wright, the postal service had contracted with Wright to maintain its coaches. Winterbottom was a driver for the post. When the coach failed and injured Winterbottom, he sued Wright. The Winterbottom court recognized that there would be "absurd and outrageous consequences" if an injured person could sue any person peripherally involved, and knew it had to draw a line somewhere, a limit on the causal connection between the negligent conduct and the injury. The court looked to the contractual relationships, and held that liability would only flow as far as the person in immediate contract ("privity") with the negligent party.

A first exception to this rule arose in 1852, in the case of Thomas v. Winchester, when New York's highest court held that mislabeling a poison as an innocuous herb, and then selling the mislabeled poison through a dealer who would be expected to resell it, put "human life in imminent danger". Thomas relied on this reason to create an exception to the "privity" rule. In 1909, New York held in Statler v. Ray Mfg. Co. that a coffee urn manufacturer was liable to a person injured when the urn exploded, because the urn "was of such a character inherently that, when applied to the purposes for which it was designed, it was liable to become a source of great danger to many people if not carefully and properly constructed".

Yet the privity rule survived. In Cadillac Motor Car Co. v. Johnson (decided in 1915 by the federal appeals court for New York and several neighboring states), the court held that a car owner could not recover for injuries from a defective wheel, when the automobile owner had a contract only with the automobile dealer and not with the manufacturer, even though there was "no question that the wheel was made of dead and 'dozy' wood, quite insufficient for its purposes". The Cadillac court was willing to acknowledge that the case law supported exceptions for "an article dangerous in its nature or likely to become so in the course of the ordinary usage to be contemplated by the vendor".
However, held the Cadillac court, "one who manufactures articles dangerous only if defectively made, or installed, e.g., tables, chairs, pictures or mirrors hung on the walls, carriages, automobiles, and so on, is not liable to third parties for injuries caused by them, except in case of willful injury or fraud". Finally, in the famous case of MacPherson v. Buick Motor Co., in 1916, Judge Benjamin Cardozo for New York's highest court pulled a broader principle out of these predecessor cases. The facts were almost identical to Cadillac a year earlier: a wheel from a wheel manufacturer was sold to Buick, to a dealer, to MacPherson, and the wheel failed, injuring MacPherson. Judge Cardozo held: Cardozo's new "rule" exists in no prior case, but is inferrable as a synthesis of the "thing of danger" principle stated in them, merely extending it to "foreseeable danger" even if "the purposes for which it was designed" were not themselves "a source of great danger". MacPherson takes some care to present itself as foreseeable progression, not a wild departure. Cardozo continues to adhere to the original principle of Winterbottom, that "absurd and outrageous consequences" must be avoided, and he does so by drawing a new line in the last sentence quoted above: "There must be knowledge of a danger, not merely possible, but probable." But while adhering to the underlying principle that some boundary is necessary, MacPherson overruled the prior common law by rendering the formerly dominant factor in the boundary, that is, the privity formality arising out of a contractual relationship between persons, totally irrelevant. Rather, the most important factor in the boundary would be the nature of the thing sold and the foreseeable uses that downstream purchasers would make of the thing. The example of the evolution of the law of negligence in the preceding paragraphs illustrates two crucial principles: (a) The common law evolves, this evolution is in the hands of judges, and judges have "made law" for hundreds of years. (b) The reasons given for a decision are often more important in the long run than the outcome in a particular case. This is the reason that judicial opinions are usually quite long, and give rationales and policies that can be balanced with judgment in future cases, rather than the bright-line rules usually embodied in statutes. Publication of decisions All law systems rely on written publication of the law, so that it is accessible to all. Common law decisions are published in law reports for use by lawyers, courts and the general public. After the American Revolution, Massachusetts became the first state to establish an official Reporter of Decisions. As newer states needed law, they often looked first to the Massachusetts Reports for authoritative precedents as a basis for their own common law. The United States federal courts relied on private publishers until after the Civil War, and only began publishing as a government function in 1874. West Publishing in Minnesota is the largest private-sector publisher of law reports in the United States. Government publishers typically issue only decisions "in the raw", while private sector publishers often add indexing, including references to the key principles of the common law involved, editorial analysis, and similar finding aids. Interaction of constitution, statute, and executive branch regulation with common law In common law legal systems, the common law is crucial to understanding almost all important areas of law. 
For example, in England and Wales, in English Canada, and in most states of the United States, the basic law of contracts, torts and property does not exist in statute, but only in common law (though there may be isolated modifications enacted by statute). As another example, the Supreme Court of the United States held in 1877 that a Michigan statute that established rules for solemnization of marriages did not abolish pre-existing common-law marriage, because the statute did not affirmatively require statutory solemnization and was silent as to preexisting common law.

In almost all areas of the law (even those where there is a statutory framework, such as contracts for the sale of goods, or the criminal law), legislature-enacted statutes or agency-promulgated regulations generally give only terse statements of general principle, and the fine boundaries and definitions exist only in the interstitial common law. To find out what the precise law is that applies to a particular set of facts, one has to locate precedential decisions on the topic, and reason from those decisions by analogy.

In common law jurisdictions (in the sense opposed to "civil law"), legislatures operate under the assumption that statutes will be interpreted against the backdrop of the pre-existing common law. As the United States Supreme Court explained in United States v. Texas, 507 U.S. 529 (1993):

Just as longstanding is the principle that "[s]tatutes which invade the common law ... are to be read with a presumption favoring the retention of long-established and familiar principles, except when a statutory purpose to the contrary is evident." Isbrandtsen Co. v. Johnson, 343 U.S. 779, 783 (1952); Astoria Federal Savings & Loan Assn. v. Solimino, 501 U.S. 104, 108 (1991). In such cases, Congress does not write upon a clean slate. Astoria, 501 U.S. at 108. In order to abrogate a common-law principle, the statute must "speak directly" to the question addressed by the common law. Mobil Oil Corp. v. Higginbotham, 436 U.S. 618, 625 (1978); Milwaukee v. Illinois, 451 U.S. 304, 315 (1981).

For example, in most U.S. states, the criminal statutes are primarily codification of pre-existing common law. (Codification is the process of enacting a statute that collects and restates pre-existing law in a single document—when that pre-existing law is common law, the common law remains relevant to the interpretation of these statutes.) In reliance on this assumption, modern statutes often leave a number of terms and fine distinctions unstated—for example, a statute might be very brief, leaving the precise definition of terms unstated, under the assumption that these fine distinctions would be resolved in the future by the courts based upon what they then understand to be the pre-existing common law. (For this reason, many modern American law schools teach the common law of crime as it stood in England in 1789, because that centuries-old English common law is a necessary foundation to interpreting modern criminal statutes.)

With the transition from English law, which had common law crimes, to the new legal system under the U.S. Constitution, which prohibited ex post facto laws at both the federal and state level, the question was raised whether there could be common law crimes in the United States. It was settled in the case of United States v.
Hudson, which decided that federal courts had no jurisdiction to define new common law crimes, and that there must always be a (constitutionally valid) statute defining the offense and the penalty for it. Still, many states retain selected common law crimes. For example, in Virginia, the definition of the conduct that constitutes the crime of robbery exists only in the common law, and the robbery statute only sets the punishment. Virginia Code section 1-200 establishes the continued existence and vitality of common law principles and provides that "The common law of England, insofar as it is not repugnant to the principles of the Bill of Rights and Constitution of this Commonwealth, shall continue in full force within the same, and be the rule of decision, except as altered by the General Assembly." By contrast to statutory codification of common law, some statutes displace common law, for example to create a new cause of action that did not exist in the common law, or to legislatively overrule the common law. An example is the tort of wrongful death, which allows certain persons, usually a spouse, child or estate, to sue for damages on behalf of the deceased. There is no such tort in English common law; thus, any jurisdiction that lacks a wrongful death statute will not allow a lawsuit for the wrongful death of a loved one. Where a wrongful death statute exists, the compensation or other remedy available is limited to the remedy specified in the statute (typically, an upper limit on the amount of damages). Courts generally interpret statutes that create new causes of action narrowly—that is, limited to their precise terms—because the courts generally recognize the legislature as being supreme in deciding the reach of judge-made law unless such statute should violate some "second order" constitutional law provision (cf. judicial activism). This principle is applied more strongly in fields of commercial law (contracts and the like) where predictability is of relatively higher value, and less in torts, where courts recognize a greater responsibility to "do justice". Where a tort is rooted in common law, all traditionally recognized damages for that tort may be sued for, whether or not there is mention of those damages in the current statutory law. For instance, a person who sustains bodily injury through the negligence of another may sue for medical costs, pain, suffering, loss of earnings or earning capacity, mental and/or emotional distress, loss of quality of life, disfigurement and more. These damages need not be set forth in statute as they already exist in the tradition of common law. However, without a wrongful death statute, most of them are extinguished upon death. In the United States, the power of the federal judiciary to review and invalidate unconstitutional acts of the federal executive branch is stated in the constitution, Article III sections 1 and 2: "The judicial Power of the United States, shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish. ... The judicial Power shall extend to all Cases, in Law and Equity, arising under this Constitution, the Laws of the United States, and Treaties made, or which shall be made, under their Authority". The first landmark decision on "the judicial power" was Marbury v. Madison, . 
Later cases interpreted the "judicial power" of Article III to establish the power of federal courts to consider or overturn any action of Congress or of any state that conflicts with the Constitution. The interactions between decisions of different courts are discussed further in the article on precedent. Further interactions between common law and either statute or regulation are discussed in the articles on Skidmore deference, Chevron deference, and Auer deference.

Overruling precedent—the limits of stare decisis

The United States federal courts are divided into twelve regional circuits, each with a circuit court of appeals (plus a thirteenth, the Court of Appeals for the Federal Circuit, which hears appeals in patent cases and cases against the federal government, without geographic limitation). Decisions of one circuit court are binding on the district courts within the circuit and on the circuit court itself, but are only persuasive authority on sister circuits. District court decisions are not binding precedent at all, only persuasive.

Most of the U.S. federal courts of appeal have adopted a rule under which, in the event of any conflict in decisions of panels (most of the courts of appeal almost always sit in panels of three), the earlier panel decision is controlling, and a panel decision may only be overruled by the court of appeals sitting en banc (that is, all active judges of the court) or by a higher court. In these courts, the older decision remains controlling when an issue comes up the third time.

Other courts, for example, the Court of Customs and Patent Appeals and the Supreme Court, always sit en banc, and thus the later decision controls. These courts essentially overrule all previous cases in each new case, and older cases survive only to the extent they do not conflict with newer cases. The interpretations of these courts—for example, Supreme Court interpretations of the constitution or federal statutes—are stable only so long as the older interpretation maintains the support of a majority of the court. Older decisions persist through some combination of belief that the old decision is right, and that it is not sufficiently wrong to be overruled.

In the jurisdictions of England and Wales and of Northern Ireland, since 2009, the Supreme Court of the United Kingdom has the authority to overrule and unify criminal law decisions of lower courts; it is the final court of appeal for civil law cases in all three of the UK jurisdictions, but not for criminal law cases in Scotland, where the High Court of Justiciary has this power instead (except on questions of law relating to reserved matters such as devolution and human rights). From 1966 to 2009, this power lay with the House of Lords, granted by the Practice Statement of 1966.

Canada's federal system, described below, avoids regional variability of federal law by giving national jurisdiction to both layers of appellate courts.

Common law as a foundation for commercial economies

The reliance on judicial opinion is a strength of common law systems, and is a significant contributor to the robust commercial systems in the United Kingdom and United States. Because there is reasonably precise guidance on almost every issue, parties (especially commercial parties) can predict whether a proposed course of action is likely to be lawful or unlawful, and have some assurance of consistency.
As Justice Brandeis famously expressed it, "in most matters it is more important that the applicable rule of law be settled than that it be settled right." This ability to predict gives more freedom to come close to the boundaries of the law. For example, many commercial contracts are more economically efficient, and create greater wealth, because the parties know ahead of time that the proposed arrangement, though perhaps close to the line, is almost certainly legal. Newspapers, taxpayer-funded entities with some religious affiliation, and political parties can obtain fairly clear guidance on the boundaries within which their freedom of expression rights apply. In contrast, in jurisdictions with very weak respect for precedent, fine questions of law are redetermined anew each time they arise, making consistency and prediction more difficult, and procedures far more protracted than necessary because parties cannot rely on written statements of law as reliable guides. In jurisdictions that do not have a strong allegiance to a large body of precedent, parties have less a priori guidance (unless the written law is very clear and kept updated) and must often leave a bigger "safety margin" of unexploited opportunities, and final determinations are reached only after far larger expenditures on legal fees by the parties. This is the reason for the frequent choice of the law of the State of New York in commercial contracts, even when neither entity has extensive contacts with New York—and remarkably often even when neither party has contacts with the United States. Commercial contracts almost always include a "choice of law clause" to reduce uncertainty. Somewhat surprisingly, contracts throughout the world (for example, contracts involving parties in Japan, France and Germany, and from most of the other states of the United States) often choose the law of New York, even where the relationship of the parties and transaction to New York is quite attenuated. Because of its history as the United States' commercial center, New York common law has a depth and predictability not (yet) available in any other jurisdictions of the United States. Similarly, American corporations are often formed under Delaware corporate law, and American contracts relating to corporate law issues (merger and acquisitions of companies, rights of shareholders, and so on) include a Delaware choice of law clause, because of the deep body of law in Delaware on these issues. On the other hand, some other jurisdictions have sufficiently developed bodies of law so that parties have no real motivation to choose the law of a foreign jurisdiction (for example, England and Wales, and the state of California), but not yet so fully developed that parties with no relationship to the jurisdiction choose that law. Outside the United States, parties that are in different jurisdictions from each other often choose the law of England and Wales, particularly when the parties are each in former British colonies and members of the Commonwealth. The common theme in all cases is that commercial parties seek predictability and simplicity in their contractual relations, and frequently choose the law of a common law jurisdiction with a well-developed body of common law to achieve that result. 
Likewise, for litigation of commercial disputes arising out of unpredictable torts (as opposed to the prospective choice of law clauses in contracts discussed in the previous paragraph), certain jurisdictions attract an unusually high fraction of cases, because of the predictability afforded by the depth of decided cases. For example, London is considered the pre-eminent centre for litigation of admiralty cases.

This is not to say that common law is better in every situation. For example, civil law can be clearer than case law when the legislature has had the foresight and diligence to address the precise set of facts applicable to a particular situation. For that reason, civil law statutes tend to be somewhat more detailed than statutes written by common law legislatures—but, conversely, that tends to make the statute more difficult to read (the United States tax code is an example).

History

Origins

The common law, so named because it was "common" to all the king's courts across England, originated in the practices of the courts of the English kings in the centuries following the Norman Conquest in 1066. Prior to the Norman Conquest, much of England's legal business took place in the local folk courts of its various shires and hundreds. A variety of other individual courts also existed across the land: urban boroughs and merchant fairs held their own courts, and large landholders also held their own manorial and seigniorial courts as needed. The degree to which common law drew from earlier Anglo-Saxon traditions such as the jury, ordeals, the penalty of outlawry, and writs, all of which were incorporated into the Norman common law, is still a subject of much discussion. Additionally, the Catholic Church operated its own court system that adjudicated issues of canon law.

The main sources for the history of the common law in the Middle Ages are the plea rolls and the Year Books. The plea rolls, which were the official court records for the Courts of Common Pleas and King's Bench, were written in Latin. The rolls were made up in bundles by law term: Hilary, Easter, Trinity, and Michaelmas, or winter, spring, summer, and autumn. They are currently deposited in the UK National Archives, by whose permission images of the rolls for the Courts of Common Pleas, King's Bench, and Exchequer of Pleas, from the 13th century to the 17th, can be viewed online at the Anglo-American Legal Tradition site (The O'Quinn Law Library of the University of Houston Law Center).

The doctrine of precedent developed during the 12th and 13th centuries as a body of collective judicial decisions based in tradition, custom and precedent. The form of reasoning used in common law is known as casuistry or case-based reasoning. The common law, as applied in civil cases (as distinct from criminal cases), was devised as a means of compensating someone for wrongful acts known as torts, including both intentional torts and torts caused by negligence, and as developing the body of law recognizing and regulating contracts. The type of procedure practiced in common law courts is known as the adversarial system; this is also a development of the common law.

Medieval English common law

In 1154, Henry II became the first Plantagenet king.
Among many achievements, Henry institutionalized common law by creating a unified system of law "common" to the country through incorporating and elevating local custom to the national, ending local control and peculiarities, eliminating arbitrary remedies and reinstating a jury system—citizens sworn on oath to investigate reliable criminal accusations and civil claims. The jury reached its verdict through evaluating common local knowledge, not necessarily through the presentation of evidence, a distinguishing factor from today's civil and criminal court systems. At the time, royal government centered on the Curia Regis (king's court), the body of aristocrats and prelates who assisted in the administration of the realm and the ancestor of Parliament, the Star Chamber, and Privy Council. Henry II developed the practice of sending judges (numbering around 20 to 30 in the 1180s) from his Curia Regis to hear the various disputes throughout the country, and return to the court thereafter. The king's itinerant justices would generally receive a writ or commission under the great seal. They would then resolve disputes on an ad hoc basis according to what they interpreted the customs to be. The king's judges would then return to London and often discuss their cases and the decisions they made with the other judges. These decisions would be recorded and filed. In time, a rule, known as stare decisis (also commonly known as precedent) developed, whereby a judge would be bound to follow the decision of an earlier judge; he was required to adopt the earlier judge's interpretation of the law and apply the same principles promulgated by that earlier judge if the two cases had similar facts to one another. Once judges began to regard each other's decisions to be binding precedent, the pre-Norman system of local customs and law varying in each locality was replaced by a system that was (at least in theory, though not always in practice) common throughout the whole country, hence the name "common law". The king's object was to preserve public order, but providing law and order was also extremely profitable–cases on forest use as well as fines and forfeitures can generate "great treasure" for the government. Eyres (a Norman French word for judicial circuit, originating from Latin iter) are more than just courts; they would supervise local government, raise revenue, investigate crimes, and enforce feudal rights of the king. There were complaints of the eyre of 1198 reducing the kingdom to poverty and Cornishmen fleeing to escape the eyre of 1233. Henry II's creation of a powerful and unified court system, which curbed somewhat the power of canonical (church) courts, brought him (and England) into conflict with the church, most famously with Thomas Becket, the Archbishop of Canterbury. The murder of the Archbishop gave rise to a wave of popular outrage against the King. International pressure on Henry grew, and in May 1172 he negotiated a settlement with the papacy in which the King swore to go on crusade as well as effectively overturned the more controversial clauses of the Constitutions of Clarendon. Henry nevertheless continued to exert influence in any ecclesiastical case which interested him and royal power was exercised more subtly with considerable success. The English Court of Common Pleas was established after Magna Carta to try lawsuits between commoners in which the monarch had no interest. 
Its judges sat in open court in the Great Hall of the king's Palace of Westminster, permanently except in the vacations between the four terms of the Legal year. Judge-made common law operated as the primary source of law for several hundred years, before Parliament acquired legislative powers to create statutory law. It is important to understand that common law is the older and more traditional source of law, and legislative power is simply a layer applied on top of the older common law foundation. Since the 12th century, courts have had parallel and co-equal authority to make law—"legislating from the bench" is a traditional and essential function of courts, which was carried over into the U.S. system as an essential component of the "judicial power" specified by Article III of the U.S. Constitution. Justice Oliver Wendell Holmes Jr. summarized centuries of history in 1917, "judges do and must legislate." In the United States, state courts continue to exercise full common law powers, and create both general common law and interstitial common law. In U.S. federal courts, after Erie R. Co. v. Tompkins, 304 U.S. 64, 78 (1938), the general dividing line is that federal courts can only "interpret" to create interstitial common law not exercise general common law powers. However, that authority to "interpret" can be an expansive power to "make law," especially on Constitutional issues where the Constitutional text is so terse. There are legitimate debates on how the powers of courts and legislatures should be balanced around "interpretation." However, the view that courts lack law-making power is historically inaccurate and constitutionally unsupportable. In England, judges have devised a number of rules as to how to deal with precedent decisions. The early development of case-law in the thirteenth century has been traced to Bracton's On the Laws and Customs of England and led to the yearly compilations of court cases known as Year Books, of which the first extant was published in 1268, the same year that Bracton died. The Year Books are known as the law reports of medieval England, and are a principal source for knowledge of the developing legal doctrines, concepts, and methods in the period from the 13th to the 16th centuries, when the common law developed into recognizable form. Influence of Roman law The term "common law" is often used as a contrast to Roman-derived "civil law", and the fundamental processes and forms of reasoning in the two are quite different. Nonetheless, there has been considerable cross-fertilization of ideas, while the two traditions and sets of foundational principles remain distinct. By the time of the rediscovery of the Roman law in Europe in the 12th and 13th centuries, the common law had already developed far enough to prevent a Roman law reception as it occurred on the continent. However, the first common law scholars, most notably Glanvill and Bracton, as well as the early royal common law judges, had been well accustomed with Roman law. Often, they were clerics trained in the Roman canon law. One of the first and throughout its history one of the most significant treatises of the common law, Bracton's De Legibus et Consuetudinibus Angliae (On the Laws and Customs of England), was heavily influenced by the division of the law in Justinian's Institutes. 
The impact of Roman law had decreased sharply after the age of Bracton, but the Roman divisions of actions into in rem (typically, actions against a thing or property for the purpose of gaining title to that property; must be filed in a court where the property is located) and in personam (typically, actions directed against a person; these can affect a person's rights and, since a person often owns things, his property too) used by Bracton had a lasting effect and laid the groundwork for a return of Roman law structural concepts in the 18th and 19th centuries. Signs of this can be found in Blackstone's Commentaries on the Laws of England, and Roman law ideas regained importance with the revival of academic law schools in the 19th century. As a result, today, the main systematic divisions of the law into property, contract, and tort (and to some extent unjust enrichment) can be found in the civil law as well as in the common law. Coke and Blackstone The first attempt at a comprehensive compilation of centuries of common law was by Lord Chief Justice Edward Coke, in his treatise, Institutes of the Lawes of England in the 17th century. The next definitive historical treatise on the common law is Commentaries on the Laws of England, written by Sir William Blackstone and first published in 1765–1769. Propagation of the common law to the colonies and Commonwealth by reception statutes A reception statute is a statutory law adopted as a former British colony becomes independent, by which the new nation adopts (i.e. receives) pre-independence common law, to the extent not explicitly rejected by the legislative body or constitution of the new nation. Reception statutes generally consider the English common law dating prior to independence, and the precedent originating from it, as the default law, because of the importance of using an extensive and predictable body of law to govern the conduct of citizens and businesses in a new state. All U.S. states, with the partial exception of Louisiana, have either implemented reception statutes or adopted the common law by judicial opinion. Other examples of reception statutes in the United States, the states of the U.S., Canada and its provinces, and Hong Kong, are discussed in the reception statute article. Yet, adoption of the common law in the newly independent nation was not a foregone conclusion, and was controversial. Immediately after the American Revolution, there was widespread distrust and hostility to anything British, and the common law was no exception. Jeffersonians decried lawyers and their common law tradition as threats to the new republic. The Jeffersonians preferred a legislatively enacted civil law under the control of the political process, rather than the common law developed by judges that—by design—were insulated from the political process. The Federalists believed that the common law was the birthright of Independence: after all, the natural rights to "life, liberty, and the pursuit of happiness" were the rights protected by common law. Even advocates for the common law approach noted that it was not an ideal fit for the newly independent colonies: judges and lawyers alike were severely hindered by a lack of printed legal materials. Before Independence, the most comprehensive law libraries had been maintained by Tory lawyers, and those libraries vanished with the loyalist expatriation, and the ability to print books was limited. Lawyer (later President) John Adams complained that he "suffered very much for the want of books". 
To bootstrap this most basic need of a common law system—knowable, written law—in 1803, lawyers in Massachusetts donated their books to found a law library. A Jeffersonian newspaper criticized the library, as it would carry forward "all the old authorities practiced in England for centuries back ... whereby a new system of jurisprudence [will be founded] on the high monarchical system [to] become the Common Law of this Commonwealth... [The library] may hereafter have a very unsocial purpose." For several decades after independence, English law still exerted influence over American common law—for example, with Byrne v Boadle (1863), which first applied the res ipsa loquitur doctrine.

Decline of Latin maxims and "blind imitation of the past", and adding flexibility to stare decisis

Well into the 19th century, ancient maxims played a large role in common law adjudication. Many of these maxims had originated in Roman Law, migrated to England before the introduction of Christianity to the British Isles, and were typically stated in Latin even in English decisions. Many examples are familiar in everyday speech even today: "One cannot be a judge in one's own cause" (see Dr. Bonham's Case), rights are reciprocal to obligations, and the like. Judicial decisions and treatises of the 17th and 18th centuries, such as those of Lord Chief Justice Edward Coke, presented the common law as a collection of such maxims.

Reliance on old maxims and rigid adherence to precedent, no matter how old or ill-considered, came under critical discussion in the late 19th century, starting in the United States. Oliver Wendell Holmes Jr. in his famous article, "The Path of the Law", commented, "It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past." Justice Holmes noted that study of maxims might be sufficient for "the man of the present", but "the man of the future is the man of statistics and the master of economics". In an 1880 lecture at Harvard, he wrote:

"The life of the law has not been logic; it has been experience. The felt necessities of the time, the prevalent moral and political theories, intuitions of public policy, avowed or unconscious, even the prejudices which judges share with their fellow-men, have a good deal more to do than the syllogism in determining the rules by which men should be governed."

In the early 20th century, Louis Brandeis, later appointed to the United States Supreme Court, became noted for his use of policy-driving facts and economics in his briefs, and extensive appendices presenting facts that lead a judge to the advocate's conclusion. By this time, briefs relied more on facts than on Latin maxims.

Reliance on old maxims is now deprecated. Common law decisions today reflect both precedent and policy judgment drawn from economics, the social sciences, business, decisions of foreign courts, and the like. The degree to which these external factors should influence adjudication is the subject of active debate, but it is indisputable that judges do draw on experience and learning from everyday life, from other fields, and from other jurisdictions.

1870 through 20th century, and the procedural merger of law and equity

As early as the 15th century, it became the practice that litigants who felt they had been cheated by the common law system would petition the King in person. For example, they might argue that an award of damages (at common law (as opposed to equity)) was not sufficient redress for a trespasser occupying their land, and instead request that the trespasser be evicted. From this developed the system of equity, administered by the Lord Chancellor, in the courts of chancery.
By their nature, equity and law were frequently in conflict and litigation would frequently continue for years as one court countermanded the other, even though it was established by the 17th century that equity should prevail. In England, courts of law (as opposed to equity) were merged with courts of equity by the Judicature Acts of 1873 and 1875, with equity prevailing in case of conflict.

In the United States, parallel systems of law (providing money damages, with cases heard by a jury upon either party's request) and equity (fashioning a remedy to fit the situation, including injunctive relief, heard by a judge) survived well into the 20th century. The United States federal courts procedurally separated law and equity: the same judges could hear either kind of case, but a given case could only pursue causes in law or in equity, and the two kinds of cases proceeded under different procedural rules. This became problematic when a given case required both money damages and injunctive relief. In 1937, the new Federal Rules of Civil Procedure combined law and equity into one form of action, the "civil action". Fed. R. Civ. P. 2. The distinction survives to the extent that issues that were "common law (as opposed to equity)" as of 1791 (the date of adoption of the Seventh Amendment) are still subject to the right of either party to request a jury, and "equity" issues are decided by a judge. The states of Delaware, Illinois, Mississippi, South Carolina, and Tennessee continue to have divided courts of law and courts of chancery, for example, the Delaware Court of Chancery. In New Jersey, the appellate courts are unified, but the trial courts are organized into a Chancery Division and a Law Division.

Common law pleading and its abolition in the early 20th century

For centuries, through to the 19th century, the common law acknowledged only specific forms of action, and required very careful drafting of the opening pleading (called a writ) to slot into exactly one of them: debt, detinue, covenant, special assumpsit, general assumpsit, trespass, trover, replevin, case (or trespass on the case), and ejectment. To initiate a lawsuit, a pleading had to be drafted to meet myriad technical requirements: correctly categorizing the case into the correct legal pigeonhole (pleading in the alternative was not permitted), and using specific legal terms and phrases that had been traditional for centuries. Under the old common law pleading standards, a suit by a pro se ("for oneself", without a lawyer) party was all but impossible, and there was often considerable procedural jousting at the outset of a case over minor wording issues.

One of the major reforms of the late 19th century and early 20th century was the abolition of common law pleading requirements. A plaintiff can initiate a case by giving the defendant "a short and plain statement" of facts that constitute an alleged wrong. This reform moved the attention of courts from technical scrutiny of words to a more rational consideration of the facts, and opened access to justice far more broadly.

Alternatives to common law systems

Civil law systems—comparisons and contrasts to common law

The main alternative to the common law system is the civil law system, which is used in Continental Europe, and most of Central and South America.

Judicial decisions play only a minor role in shaping civil law

The primary contrast between the two systems is the role of written decisions and precedent.
In common law jurisdictions, nearly every case that presents a bona fide disagreement on the law is resolved in a written opinion. The legal reasoning for the decision, known as ratio decidendi, not only determines the court's judgment between the parties, but also stands as precedent for resolving future disputes. In contrast, civil law decisions typically do not include explanatory opinions, and thus no precedent flows from one decision to the next. In common law systems, a single decided case is binding common law (connotation 1) to the same extent as statute or regulation, under the principle of stare decisis. In contrast, in civil law systems, individual decisions have only advisory, not binding effect. In civil law systems, case law only acquires weight when a long series of cases use consistent reasoning, called jurisprudence constante. Civil law lawyers consult case law to obtain their best prediction of how a court will rule, but comparatively, civil law judges are less bound to follow it. For that reason, statutes in civil law systems are more comprehensive, detailed, and continuously updated, covering all matters capable of being brought before a court. Adversarial system vs. inquisitorial system Common law systems tend to give more weight to separation of powers between the judicial branch and the executive branch. In contrast, civil law systems are typically more tolerant of allowing individual officials to exercise both powers. One example of this contrast is the difference between the two systems in allocation of responsibility between prosecutor and adjudicator. Common law courts usually use an adversarial system, in which two sides present their cases to a neutral judge. For example, in criminal cases, in adversarial systems, the prosecutor and adjudicator are two separate people. The prosecutor is lodged in the executive branch, and conducts the investigation to locate evidence. That prosecutor presents the evidence to a neutral adjudicator, who makes a decision. In contrast, in civil law systems, criminal proceedings proceed under an inquisitorial system in which an examining magistrate serves two roles by first developing the evidence and arguments for one side and then the other during the investigation phase. The examining magistrate then presents the dossier detailing his or her findings to the president of the bench that will adjudicate on the case where it has been decided that a trial shall be conducted. Therefore, the president of the bench's view of the case is not neutral and may be biased while conducting the trial after the reading of the dossier. Unlike the common law proceedings, the president of the bench in the inquisitorial system is not merely an umpire and is entitled to directly interview the witnesses or express comments during the trial, as long as he or she does not express his or her view on the guilt of the accused. The proceeding in the inquisitorial system is essentially by writing. Most of the witnesses would have given evidence in the investigation phase and such evidence will be contained in the dossier under the form of police reports. In the same way, the accused would have already put his or her case at the investigation phase but he or she will be free to change his or her evidence at trial. Whether the accused pleads guilty or not, a trial will be conducted. 
Unlike the adversarial system, the conviction and sentence to be served (if any) will be released by the trial jury together with the president of the trial bench, following their common deliberation. In contrast, in an adversarial system, on issues of fact, the onus of framing the case rests on the parties, and judges generally decide the case presented to them, rather than acting as active investigators, or actively reframing the issues presented. "In our adversary system, in both civil and criminal cases, in the first instance and on appeal, we follow the principle of party presentation. That is, we rely on the parties to frame the issues for decision and assign to courts the role of neutral arbiter of matters the parties present." This principle applies with force in all issues in criminal matters, and to factual issues: courts seldom engage in fact gathering on their own initiative, but decide facts on the evidence presented (even here, there are exceptions, for "legislative facts" as opposed to "adjudicative facts"). On the other hand, on issues of law, common law courts regularly raise new issues (such as matters of jurisdiction or standing), perform independent research, and reformulate the legal grounds on which to analyze the facts presented to them. The United States Supreme Court regularly decides based on issues raised only in amicus briefs from non-parties. One of the most notable such cases was Erie Railroad v. Tompkins, a 1938 case in which neither party questioned the ruling from the 1842 case Swift v. Tyson that served as the foundation for their arguments, but which led the Supreme Court to overturn Swift during their deliberations. To avoid lack of notice, courts may invite briefing on an issue to ensure adequate notice. However, there are limits—an appeals court may not introduce a theory that contradicts the party's own contentions. There are many exceptions in both directions. For example, most proceedings before U.S. federal and state agencies are inquisitorial in nature, at least the initial stages (e.g., a patent examiner, a social security hearing officer, and so on), even though the law to be applied is developed through common law processes. Contrasting role of treatises and academic writings in common law and civil law systems The role of the legal academy presents a significant "cultural" difference between common law (connotation 2) and civil law jurisdictions. In both systems, treatises compile decisions and state overarching principles that (in the author's opinion) explain the results of the cases. In neither system are treatises considered "law", but the weight given them is nonetheless quite different. In common law jurisdictions, lawyers and judges tend to use these treatises as only "finding aids" to locate the relevant cases. In common law jurisdictions, scholarly work is seldom cited as authority for what the law is. Chief Justice Roberts noted the "great disconnect between the academy and the profession." When common law courts rely on scholarly work, it is almost always only for factual findings, policy justification, or the history and evolution of the law, but the court's legal conclusion is reached through analysis of relevant statutes and common law, seldom scholarly commentary. In contrast, in civil law jurisdictions, courts give the writings of law professors significant weight, partly because civil law decisions traditionally were very brief, sometimes no more than a paragraph stating who wins and who loses. 
The rationale had to come from somewhere else: the academy often filled that role.

Narrowing of differences between common law and civil law

The contrast between civil law and common law legal systems has become increasingly blurred, with the growing importance of jurisprudence (similar to case law but not binding) in civil law countries, and the growing importance of statute law and codes in common law countries. Examples of common law being replaced by statute or codified rule in the United States include criminal law (since 1812, U.S. federal courts and most but not all of the states have held that criminal law must be embodied in statute if the public is to have fair notice), commercial law (the Uniform Commercial Code in the early 1960s) and procedure (the Federal Rules of Civil Procedure in the 1930s and the Federal Rules of Evidence in the 1970s). But in each case, the statute sets the general principles, while the interstitial common law process determines the scope and application of the statute.

An example of convergence from the other direction is shown in the 1982 decision Srl CILFIT and Lanificio di Gavardo SpA v Ministry of Health, in which the European Court of Justice held that questions it has already answered need not be resubmitted. This showed how a historically distinctly common law principle is used by a court composed of judges (at that time) of essentially civil law jurisdiction.

Other alternatives

The former Soviet Bloc and other socialist countries used a socialist law system, although there is controversy as to whether socialist law ever constituted a separate legal system or not. Much of the Muslim world uses legal systems based on Sharia (also called Islamic law). Many churches use a system of canon law. The canon law of the Catholic Church influenced the common law during the medieval period through its preservation of Roman law doctrine such as the presumption of innocence.

Common law legal systems in the present day

In jurisdictions around the world

The common law constitutes the basis of the legal systems of:
Australia (both federally and in each of the States and Territories)
Bangladesh
Belize
Brunei
Canada (both federal and the individual provinces, with the exception of Quebec)
the Caribbean jurisdictions of Antigua and Barbuda, Barbados, Bahamas, Dominica, Grenada, Jamaica, St Vincent and the Grenadines, Saint Kitts and Nevis, Trinidad and Tobago
Cyprus
Ghana
Hong Kong
India
Ireland
Israel
Kenya
Nigeria
Malaysia
Malta
Myanmar
New Zealand
Pakistan
Philippines
Singapore
South Africa
United Kingdom (in England, Scotland, Wales, and Northern Ireland)
United States (both the federal system and the individual states and Territories, with the partial exception of Louisiana and Puerto Rico)
and many other generally English-speaking countries or Commonwealth countries (except Scotland, which is bijuridical, and Malta).

Essentially, every country that was colonised at some time by England, Great Britain, or the United Kingdom uses common law except those that were formerly colonised by other nations, such as Quebec (which follows the bijuridical law or civil code of France in part), South Africa and Sri Lanka (which follow Roman Dutch law), where the prior civil law system was retained to respect the civil rights of the local colonists. Guyana and Saint Lucia have mixed common law and civil law systems.

The remainder of this section discusses jurisdiction-specific variants, arranged chronologically.
Scotland

Scotland is often said to use the civil law system, but it has a unique system that combines elements of an uncodified civil law dating back to the Corpus Juris Civilis with an element of its own common law long predating the Treaty of Union with England in 1707 (see Legal institutions of Scotland in the High Middle Ages), founded on the customary laws of the tribes residing there. Historically, Scottish common law differed in that the use of precedent was subject to the courts' seeking to discover the principle that justifies a law rather than searching for an example as a precedent, and principles of natural justice and fairness have always played a role in Scots Law. From the 19th century, the Scottish approach to precedent developed into a stare decisis akin to that already established in England, thereby reflecting a narrower, more modern approach to the application of case law in subsequent instances. This is not to say that the substantive rules of the common laws of both countries are the same, but in many matters (particularly those of UK-wide interest), they are similar.

Scotland shares the Supreme Court with England, Wales and Northern Ireland for civil cases; the court's decisions are binding on the jurisdiction from which a case arises but only influential on similar cases arising in Scotland. This has had the effect of converging the law in certain areas. For instance, the modern UK law of negligence is based on Donoghue v Stevenson, a case originating in Paisley, Scotland. Scotland maintains a separate criminal law system from the rest of the UK, with the High Court of Justiciary being the final court for criminal appeals. The highest court of appeal in civil cases brought in Scotland is now the Supreme Court of the United Kingdom (before October 2009, final appellate jurisdiction lay with the House of Lords).

The United States – its states, federal courts, and executive branch agencies (17th century on)

The centuries-old authority of the common law courts in England to develop law case by case and to apply statute law—"legislating from the bench"—is a traditional function of courts, which was carried over into the U.S. system as an essential component of the judicial power for states. Justice Oliver Wendell Holmes Jr. summarized centuries of history in 1917, "judges do and must legislate" (in the federal courts, only interstitially, in state courts, to the full limits of common law adjudicatory authority).

New York (17th century)

The original colony of New Netherland was settled by the Dutch and the law was also Dutch. When the English captured pre-existing colonies they continued to allow the local settlers to keep their civil law. However, the Dutch settlers revolted against the English and the colony was recaptured by the Dutch. In 1664, the colony of New York had two distinct legal systems: on Manhattan Island and along the Hudson River, sophisticated courts modeled on those of the Netherlands were resolving disputes learnedly in accordance with Dutch customary law. On Long Island, Staten Island, and in Westchester, on the other hand, English courts were administering a crude, untechnical variant of the common law carried from Puritan New England and practiced without the intercession of lawyers. When the English finally regained control of New Netherland they imposed common law upon all the colonists, including the Dutch.
This was problematic, as the patroon system of land holding, based on the feudal system and civil law, continued to operate in the colony until it was abolished in the mid-19th century. New York began a codification of its law in the 19th century. The only part of this codification process that was considered complete is known as the Field Code applying to civil procedure. The influence of Roman-Dutch law continued in the colony well into the late 19th century. The codification of a law of general obligations shows how remnants of the civil law tradition in New York continued on from the Dutch days. Louisiana (1700s) Under Louisiana's codified system, the Louisiana Civil Code, private law—that is, substantive law between private sector parties—is based on principles of law from continental Europe, with some common law influences. These principles derive ultimately from Roman law, transmitted through French law and Spanish law, as the state's current territory intersects the area of North America colonized by Spain and by France. Contrary to popular belief, the Louisiana code does not directly derive from the Napoleonic Code, as the latter was enacted in 1804, one year after the Louisiana Purchase. However, the two codes are similar in many respects due to common roots. Louisiana's criminal law largely rests on English common law. Louisiana's administrative law is generally similar to the administrative law of the U.S. federal government and other U.S. states. Louisiana's procedural law is generally in line with that of other U.S. states, which in turn is generally based on the U.S. Federal Rules of Civil Procedure. Historically notable among the Louisiana code's differences from common law is the role of property rights among women, particularly in inheritance gained by widows. California (1850s) The U.S. state of California has a system based on common law, but it has codified the law in the manner of civil law jurisdictions. The reason for the enactment of the California Codes in the 19th century was to replace a pre-existing system based on Spanish civil law with a system based on common law, similar to that in most other states. California and a number of other Western states, however, have retained the concept of community property derived from civil law. The California courts have treated portions of the codes as an extension of the common-law tradition, subject to judicial development in the same manner as judge-made common law. (Most notably, in the case Li v. Yellow Cab Co., 13 Cal.3d 804 (1975), the California Supreme Court adopted the principle of comparative negligence in the face of a California Civil Code provision codifying the traditional common-law doctrine of contributory negligence.) United States federal courts (1789 and 1938) The United States federal government (as opposed to the states) has a variant on a common law system. United States federal courts only act as interpreters of statutes and the constitution by elaborating and precisely defining broad statutory language (connotation 1(b) above), but, unlike state courts, do not generally act as an independent source of common law. Before 1938, the federal courts, like almost all other common law courts, decided the law on any issue where the relevant legislature (either the U.S. 
Congress or state legislature, depending on the issue) had not acted, by looking to courts in the same system, that is, other federal courts, even on issues of state law, and even where there was no express grant of authority from Congress or the Constitution. In 1938, the U.S. Supreme Court in Erie Railroad Co. v. Tompkins, 304 U.S. 64, 78 (1938), overruled earlier precedent and held that "There is no federal general common law," thus confining the federal courts to act only as interstitial interpreters of law originating elsewhere. E.g., Texas Industries v. Radcliff (without an express grant of statutory authority, federal courts cannot create rules of intuitive justice, for example, a right to contribution from co-conspirators). Post-1938, federal courts deciding issues that arise under state law are required to defer to state court interpretations of state statutes, or reason what a state's highest court would rule if presented with the issue, or to certify the question to the state's highest court for resolution. Later courts have limited Erie slightly, to create a few situations where United States federal courts are permitted to create federal common law rules without express statutory authority, for example, where a federal rule of decision is necessary to protect uniquely federal interests, such as foreign affairs, or financial instruments issued by the federal government. See, e.g., Clearfield Trust Co. v. United States (giving federal courts the authority to fashion common law rules with respect to issues of federal power, in this case negotiable instruments backed by the federal government); see also International News Service v. Associated Press, 248 U.S. 215 (1918) (creating a cause of action for misappropriation of "hot news" that lacks any statutory grounding); but see National Basketball Association v. Motorola, Inc., 105 F.3d 841, 843–44, 853 (2d Cir. 1997) (noting continued vitality of INS "hot news" tort under New York state law, but leaving open the question of whether it survives under federal law). Except on Constitutional issues, Congress is free to legislatively overrule federal courts' common law. United States executive branch agencies (1946) Most executive branch agencies in the United States federal government have some adjudicatory authority. To a greater or lesser extent, agencies honor their own precedent to ensure consistent results. Agency decision making is governed by the Administrative Procedure Act of 1946. For example, the National Labor Relations Board issues relatively few regulations, but instead promulgates most of its substantive rules through common law (connotation 1). India, Pakistan, and Bangladesh (19th century and 1948) The laws of India, Pakistan, and Bangladesh are largely based on English common law because of the long period of British colonial influence during the British Raj. Ancient India represented a distinct tradition of law, and had a historically independent school of legal theory and practice. The Arthashastra, dating from 400 BCE, and the Manusmriti, from 100 CE, were influential treatises in India, texts that were considered authoritative legal guidance. Manu's central philosophy was tolerance and pluralism, and it was cited across Southeast Asia. Early in this period, which finally culminated in the creation of the Gupta Empire, relations with ancient Greece and Rome were not infrequent.
The appearance of similar fundamental institutions of international law in various parts of the world shows that they are inherent in international society, irrespective of culture and tradition. Inter-State relations in the pre-Islamic period resulted in clear-cut rules of warfare of a high humanitarian standard, in rules of neutrality, of treaty law, of customary law embodied in religious charters, and in the exchange of embassies of a temporary or semi-permanent character. When India became part of the British Empire, there was a break in tradition, and Hindu and Islamic law were supplanted by the common law. After the failed rebellion against the British in 1857, the British Parliament took over control of India from the British East India Company, and British India came under the direct rule of the Crown. The British Parliament passed the Government of India Act 1858 to this effect, which set up the structure of British government in India. It established in Britain the office of the Secretary of State for India, through whom the Parliament would exercise its rule, along with a Council of India to aid him. It also established the office of the Governor-General of India along with an Executive Council in India, which consisted of high officials of the British Government. As a result, the present judicial system of the country derives largely from the British system and has little correlation to the institutions of the pre-British era. Post-partition India (1948) Post-partition, India retained its common law system. Much of contemporary Indian law shows substantial European and American influence. Legislation first introduced by the British is still in effect in modified form today. During the drafting of the Indian Constitution, laws from Ireland, the United States, Britain, and France were all synthesized to produce a refined set of Indian laws. Indian laws also adhere to the United Nations guidelines on human rights law and environmental law. Certain international trade laws, such as those on intellectual property, are also enforced in India. Post-partition Pakistan (1948) Post-partition, Pakistan retained its common law system. Post-partition Bangladesh (1971) Post-partition, Bangladesh retained its common law system. Canada (1867) Canada has separate federal and provincial legal systems. Canadian provincial legal systems Each province and territory is considered a separate jurisdiction with respect to case law. Each has its own procedural law in civil matters, statutorily created provincial courts, and superior trial courts with inherent jurisdiction, culminating in the Court of Appeal of the province. Decisions of these Courts of Appeal may in turn be appealed to the Supreme Court of Canada. All but one of the provinces of Canada use a common law system for civil matters (the exception being Quebec, which uses a French-heritage civil law system for issues arising within provincial jurisdiction, such as property ownership and contracts). Canadian federal legal system Canadian Federal Courts operate under a separate system throughout Canada and deal with a narrower range of subject matter than superior courts in each province and territory. They only hear cases on subjects assigned to them by federal statutes, such as immigration, intellectual property, judicial review of federal government decisions, and admiralty.
The Federal Court of Appeal is the appellate court for federal courts and hears cases in multiple cities; unlike the United States, the Canadian Federal Court of Appeal is not divided into appellate circuits. Canadian federal statutes must use the terminology of both the common law and civil law for civil matters; this is referred to as legislative bijuralism. Canadian criminal law Criminal law is uniform throughout Canada. It is based on the federal statutory Criminal Code, which in addition to substance also details procedural law. The administration of justice is the responsibility of the provinces. Canadian criminal law uses a common law system no matter in which province a case proceeds. Nicaragua Nicaragua's legal system is also a mixture of the English common law and civil law. This situation came about through the influence of British administration of the eastern half of the Mosquito Coast from the mid-17th century until about 1894, the William Walker period from about 1855 through 1857, US interventions/occupations during the period from 1909 to 1933, the influence of US institutions during the Somoza family administrations (1933 through 1979) and the considerable importation between 1979 and the present of US culture and institutions. Israel (1948) Israel has no formal written constitution. Its basic principles are inherited from the law of the British Mandate of Palestine and thus resemble those of British and American law, namely: the role of courts in creating the body of law and the authority of the supreme court in reviewing and if necessary overturning legislative and executive decisions, as well as employing the adversarial system. However, because Israel has no written constitution, basic laws can be changed by a vote of 61 of the parliament's 120 members. One of the primary reasons that the Israeli constitution remains unwritten is the fear by whatever party holds power that creating a written constitution, combined with the common-law elements, would severely limit the powers of the Knesset (which, following the doctrine of parliamentary sovereignty, holds near-unlimited power). Roman Dutch common law Roman Dutch common law is a bijuridical or mixed system of law similar to the common law system in Scotland and Louisiana. Roman Dutch common law jurisdictions include South Africa, Botswana, Lesotho, Namibia, Swaziland, Sri Lanka and Zimbabwe. Many of these jurisdictions recognise customary law, and in some, such as South Africa, the Constitution requires that the common law be developed in accordance with the Bill of Rights. Roman Dutch common law is a development of Roman Dutch law by courts in the Roman Dutch common law jurisdictions. During the Napoleonic wars the Kingdom of the Netherlands adopted the French code civil in 1809; however, the Dutch colonies in the Cape of Good Hope and Sri Lanka, at the time called Ceylon, were seized by the British to prevent them from being used as bases by the French Navy. The system was developed by the courts and spread with the expansion of British colonies in Southern Africa. Roman Dutch common law relies on legal principles set out in Roman law sources such as Justinian's Institutes and Digest, and also on the writings of Dutch jurists of the 17th century such as Grotius and Voet. In practice, the majority of decisions rely on recent precedent. Ghana Ghana follows the English common law tradition, which was inherited from the British during its colonisation.
Consequently, the laws of Ghana are, for the most part, a modified version of imported law that is continuously adapting to changing socio-economic and political realities of the country. The Bond of 1844 marked the period when the people of Ghana (then Gold Coast) ceded their independence to the British and gave the British judicial authority. Later, the Supreme Court Ordinance of 1876 formally introduced British law, be it the common law or statutory law, in the Gold Coast. Section 14 of the Ordinance formalised the application of the common-law tradition in the country. Ghana, after independence, did not do away with the common law system inherited from the British, and today it has been enshrined in the 1992 Constitution of the country. Chapter four of Ghana's Constitution, entitled "The Laws of Ghana", has in Article 11(1) the list of laws applicable in the state. This comprises (a) the Constitution; (b) enactments made by or under the authority of the Parliament established by the Constitution; (c) any Orders, Rules and Regulations made by any person or authority under a power conferred by the Constitution; (d) the existing law; and (e) the common law. Thus, the modern-day Constitution of Ghana, like those before it, embraced the English common law by entrenching it in its provisions. The doctrine of judicial precedence which is based on the principle of stare decisis as applied in England and other pure common law countries also applies in Ghana. Scholarly works Edward Coke, a 17th-century Lord Chief Justice of the English Court of Common Pleas and a Member of Parliament (MP), wrote several legal texts that collected and integrated centuries of case law. Lawyers in both England and America learned the law from his Institutes and Reports until the end of the 18th century. His works are still cited by common law courts around the world. The next definitive historical treatise on the common law is Commentaries on the Laws of England, written by Sir William Blackstone and first published in 1765–1769. Since 1979, a facsimile edition of that first edition has been available in four paper-bound volumes. Today it has been superseded in the English part of the United Kingdom by Halsbury's Laws of England that covers both common and statutory English law. While he was still on the Massachusetts Supreme Judicial Court, and before being named to the U.S. Supreme Court, Justice Oliver Wendell Holmes Jr. published a short volume called The Common Law, which remains a classic in the field. Unlike Blackstone and the Restatements, Holmes' book only briefly discusses what the law is; rather, Holmes describes the common law process. Law professor John Chipman Gray's The Nature and Sources of the Law, an examination and survey of the common law, is also still commonly read in U.S. law schools. In the United States, Restatements of various subject matter areas (Contracts, Torts, Judgments, and so on.), edited by the American Law Institute, collect the common law for the area. The ALI Restatements are often cited by American courts and lawyers for propositions of uncodified common law, and are considered highly persuasive authority, just below binding precedential decisions. The Corpus Juris Secundum is an encyclopedia whose main content is a compendium of the common law and its variations throughout the various state jurisdictions. Scots common law covers matters including murder and theft, and has sources in custom, in legal writings and previous court decisions. 
The legal writings used are called Institutional Texts and come mostly from the 17th, 18th and 19th centuries. Examples include Craig, Jus Feudale (1655) and Stair, The Institutions of the Law of Scotland (1681). See also Outline of law Common law national legal systems today List of common law national legal systems Common vs. civil laws Civil law Common law offences Development of English legal system and case law Books of authority Lists of case law Early common law systems Anglo-Saxon law Brehon law, or Irish law Doom book, or Code of Alfred the Great Time immemorial Stages of common law trials Arraignment Grand jury Jury trial Common law in specific areas Common law as applied to matrimony Alimony Common-law marriage Employment Faithless servant Slavery Slavery at common law References Further reading Chapters 1–6. Milsom, S.F.C., A Natural History of the Common Law. Columbia University Press (2003) Milsom, S.F.C., Historical Foundations of the Common Law (2nd ed.). Lexis Law Publishing (Va), (1981) External links The History of the Common Law of England, and An analysis of the civil part of the law, Matthew Hale The History of English Law before the Time of Edward I, Pollock and Maitland Select Writs. (F.W.Maitland) Common-law Pleading: its history and principles, R.Ross Perry, (Boston, 1897) ; also available at The Climate Change and Public Health Law Site The Principle of stare decisis American Law Register The Australian Institute of Comparative Legal Systems The International Institute for Law and Strategic Studies (IILSS) New South Wales Legislation Historical Laws of Hong Kong Online – University of Hong Kong Libraries, Digital Initiatives Maxims of Common Law from Bouvier's 1856 Law Dictionary Legal history Legal systems
5255
https://en.wikipedia.org/wiki/Civil%20law
Civil law
Civil law may refer to: Civil law (common law), the part of law that concerns private citizens and legal persons Civil law (legal system), or continental law, a legal system originating in continental Europe and based on Roman law Private law, the branch of law in a civil law legal system that concerns relations among private individuals Municipal law, the domestic law of a state, as opposed to international law See also Civil code Civil (disambiguation) Ius civile, Latin for "civil law" Common law (disambiguation) Criminal law
5257
https://en.wikipedia.org/wiki/Court%20of%20appeals%20%28disambiguation%29
Court of appeals (disambiguation)
A court of appeals is generally an appellate court. Court of Appeals may refer to: Israeli Military Court of Appeals (Italy) Court of Appeals of the Philippines High Court of Appeals of Turkey Court of Appeals (Vatican City) United States Courts of appeals Court of Appeals for the Armed Forces Court of Appeals for Veterans Claims Court of Appeals for the Federal Circuit Court of Appeals for the District of Columbia Circuit Court of Appeals for the First Circuit Court of Appeals for the Second Circuit Court of Appeals for the Third Circuit Court of Appeals for the Fourth Circuit Court of Appeals for the Fifth Circuit Court of Appeals for the Sixth Circuit Court of Appeals for the Seventh Circuit Court of Appeals for the Eighth Circuit Court of Appeals for the Ninth Circuit Court of Appeals for the Tenth Circuit Court of Appeals for the Eleventh Circuit Emergency Court of Appeals Temporary Emergency Court of Appeals (defunct) Alabama Court of Appeals (existed until 1969) Alaska Court of Appeals Arizona Court of Appeals Arkansas Court of Appeals Colorado Court of Appeals District of Columbia Court of Appeals Georgia Court of Appeals Hawaii Intermediate Court of Appeals Idaho Court of Appeals Illinois Court of Appeals Indiana Court of Appeals Iowa Court of Appeals Kansas Court of Appeals Kentucky Court of Appeals Louisiana Court of Appeals Maryland Court of Appeals Michigan Court of Appeals Minnesota Court of Appeals Mississippi Court of Appeals Missouri Court of Appeals Nebraska Court of Appeals New Mexico Court of Appeals New York Court of Appeals North Carolina Court of Appeals North Dakota Court of Appeals Ohio Seventh District Court of Appeals Ohio Eleventh District Court of Appeals Oregon Court of Appeals South Carolina Court of Appeals Tennessee Court of Appeals Texas Courts of Appeals Fifth Court of Appeals Utah Court of Appeals Court of Appeals of Virginia Washington Court of Appeals Supreme Court of Appeals of West Virginia Wisconsin Court of Appeals See also Court of Appeal (disambiguation) Court of Criminal Appeal (disambiguation) Appeal State court (United States)#Nomenclature List of legal topics Federal Court of Appeals (disambiguation)
5259
https://en.wikipedia.org/wiki/Common%20descent
Common descent
Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth. Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely are they related. The most recent common ancestor of all currently living organisms is the last universal ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly (single ancestry) of life. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of his 1859 book On the Origin of Species: History The idea that all living things (including things considered non-living by science) are related is a recurring theme in many indigenous worldviews across the world. Later on, in the 1740s, the French mathematician Pierre Louis Maupertuis arrived at the idea that all organisms had a common ancestor, and had diverged through random variation and natural selection. In Essai de cosmologie (1750), Maupertuis noted: May we not say that, in the fortuitous combination of the productions of Nature, since only those creatures could survive in whose organizations a certain degree of adaptation was present, there is nothing extraordinary in the fact that such adaptation is actually found in all these species which now exist? Chance, one might say, turned out a vast number of individuals; a small proportion of these were organized in such a manner that the animals' organs could satisfy their needs. A much greater number showed neither adaptation nor order; these last have all perished.... Thus the species which we see today are but a small part of all those that a blind destiny has produced. In 1790, the philosopher Immanuel Kant wrote in Kritik der Urteilskraft (Critique of Judgment) that the similarity of animal forms implies a common original type, and thus a common parent. In 1794, Charles Darwin's grandfather, Erasmus Darwin asked: [W]ould it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which endued with animality, with the power of acquiring new parts attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end? 
Charles Darwin's views about common descent, as expressed in On the Origin of Species, were that it was probable that there was only one progenitor for all life forms: Therefore I should infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed. But he precedes that remark by, "Analogy would lead me one step further, namely, to the belief that all animals and plants have descended from some one prototype. But analogy may be a deceitful guide." And in the subsequent edition, he asserts rather, "We do not know all the possible transitional gradations between the simplest and the most perfect organs; it cannot be pretended that we know all the varied means of Distribution during the long lapse of years, or that we know how imperfect the Geological Record is. Grave as these several difficulties are, in my judgment they do not overthrow the theory of descent from a few created forms with subsequent modification". Common descent was widely accepted amongst the scientific community after Darwin's publication. In 1907, Vernon Kellogg commented that "practically no naturalists of position and recognized attainment doubt the theory of descent." In 2008, biologist T. Ryan Gregory noted that: No reliable observation has ever been found to contradict the general notion of common descent. It should come as no surprise, then, that the scientific community at large has accepted evolutionary descent as a historical reality since Darwin’s time and considers it among the most reliably established and fundamentally important facts in all of science. Evidence Common biochemistry All known forms of life are based on the same fundamental biochemical organization: genetic information encoded in DNA, transcribed into RNA, through the effect of protein- and RNA-enzymes, then translated into proteins by (highly similar) ribosomes, with ATP, NADPH and others as energy sources. Analysis of small sequence differences in widely shared substances such as cytochrome c further supports universal common descent. Some 23 proteins are found in all organisms, serving as enzymes carrying out core functions like DNA replication. The fact that only one such set of enzymes exists is convincing evidence of a single ancestry. 6,331 genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. Common genetic code The genetic code (the "translation table" according to which DNA information is translated into amino acids, and hence proteins) is nearly identical for all known lifeforms, from bacteria and archaea to animals and plants. The universality of this code is generally regarded by biologists as definitive evidence in favor of universal common descent. The way that codons (DNA triplets) are mapped to amino acids seems to be strongly optimised. Richard Egel argues that in particular the hydrophobic (non-polar) side-chains are well organised, suggesting that these enabled the earliest organisms to create peptides with water-repelling regions able to support the essential electron exchange (redox) reactions for energy transfer. Selectively neutral similarities Similarities which have no adaptive relevance cannot be explained by convergent evolution, and therefore they provide compelling support for universal common descent. Such evidence has come from two areas: amino acid sequences and DNA sequences. 
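As background for the codon-based argument developed in the next paragraphs, the following sketch (in Python) illustrates the degeneracy of the standard genetic code and gives a toy estimate of how unlikely an exact codon-level match between independent lineages would be. The partial codon table is standard, but the uniform-and-independent-choice assumption is a simplification introduced here for illustration; real genomes show biased codon usage, so the numbers are qualitative only.

```python
# A toy illustration of codon degeneracy in the standard genetic code and of
# why matching synonymous codon choices across lineages are read as evidence
# of shared ancestry. Assumption (not from the article): each lineage picks
# uniformly and independently among synonymous codons; real codon usage is
# biased, so this conveys only the qualitative point.

SYNONYMOUS_CODONS = {
    "Leu": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],  # 6-fold degenerate
    "Ser": ["TCT", "TCC", "TCA", "TCG", "AGT", "AGC"],  # 6-fold degenerate
    "Gly": ["GGT", "GGC", "GGA", "GGG"],                # 4-fold degenerate
    "Lys": ["AAA", "AAG"],                              # 2-fold degenerate
}

def p_identical_codons_by_chance(residues):
    """Probability that two unrelated lineages encode the given amino acid
    sequence with exactly the same codon at every position, under the
    uniform-choice assumption above."""
    p = 1.0
    for aa in residues:
        p *= 1.0 / len(SYNONYMOUS_CODONS[aa])
    return p

# Even a short stretch of 20 residues gives a vanishingly small chance of an
# exact codon-level match arising by accident (about 5e-13 here).
fragment = ["Leu", "Ser", "Gly", "Lys"] * 5
print(p_identical_codons_by_chance(fragment))
```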
Proteins with the same three-dimensional structure need not have identical amino acid sequences; any irrelevant similarity between the sequences is evidence for common descent. In certain cases, there are several codons (DNA triplets) that code redundantly for the same amino acid. Since many species use the same codon at the same place to specify an amino acid that can be represented by more than one codon, that is evidence for their sharing a recent common ancestor. Had the amino acid sequences come from different ancestors, they would have been coded for by any of the redundant codons, and since the correct amino acids would already have been in place, natural selection would not have driven any change in the codons, however much time was available. Genetic drift could change the codons, but it would be extremely unlikely to make all the redundant codons in a whole sequence match exactly across multiple lineages. Similarly, shared nucleotide sequences, especially where these are apparently neutral such as the positioning of introns and pseudogenes, provide strong evidence of common ancestry. Other similarities Biologists often point to the universality of many aspects of cellular life as supportive evidence to the more compelling evidence listed above. These similarities include the energy carrier adenosine triphosphate (ATP), and the fact that all amino acids found in proteins are left-handed. It is, however, possible that these similarities resulted because of the laws of physics and chemistry - rather than through universal common descent - and therefore resulted in convergent evolution. In contrast, there is evidence for homology of the central subunits of transmembrane ATPases throughout all living organisms, especially how the rotating elements are bound to the membrane. This supports the assumption of a LUCA as a cellular organism, although primordial membranes may have been semipermeable and evolved later to the membranes of modern bacteria, and on a second path to those of modern archaea also. Phylogenetic trees Another important piece of evidence is from detailed phylogenetic trees (i.e., "genealogic trees" of species) mapping out the proposed divisions and common ancestors of all living species. In 2010, Douglas L. Theobald published a statistical analysis of available genetic data, mapping them to phylogenetic trees, that gave "strong quantitative support, by a formal test, for the unity of life." Traditionally, these trees have been built using morphological methods, such as appearance, embryology, etc. Recently, it has been possible to construct these trees using molecular data, based on similarities and differences between genetic and protein sequences. All these methods produce essentially similar results, even though most genetic variation has no influence over external morphology. That phylogenetic trees based on different types of information agree with each other is strong evidence of a real underlying common descent. Objections Gene exchange clouds phylogenetic analysis Theobald noted that substantial horizontal gene transfer could have occurred during early evolution. Bacteria today remain capable of gene exchange between distantly-related lineages. This weakens the basic assumption of phylogenetic analysis, that similarity of genomes implies common ancestry, because sufficient gene exchange would allow lineages to share much of their genome whether or not they shared an ancestor (monophyly). This has led to questions about the single ancestry of life. 
However, biologists consider it very unlikely that completely unrelated proto-organisms could have exchanged genes, as their different coding mechanisms would have resulted only in garble rather than functioning systems. Later, however, many organisms all derived from a single ancestor could readily have shared genes that all worked in the same way, and it appears that they have. Convergent evolution If early organisms had been driven by the same environmental conditions to evolve similar biochemistry convergently, they might independently have acquired similar genetic sequences. Theobald's "formal test" was accordingly criticised by Takahiro Yonezawa and colleagues for not including consideration of convergence. They argued that Theobald's test was insufficient to distinguish between the competing hypotheses. Theobald has defended his method against this claim, arguing that his tests distinguish between phylogenetic structure and mere sequence similarity. Therefore, Theobald argued, his results show that "real universally conserved proteins are homologous." RNA world The possibility is mentioned, above, that all living organisms may be descended from an original single-celled organism with a DNA genome, and that this implies a single origin for life. Although such a universal common ancestor may have existed, such a complex entity is unlikely to have arisen spontaneously from non-life and thus a cell with a DNA genome cannot reasonably be regarded as the “origin” of life. To understand the “origin” of life, it has been proposed that DNA based cellular life descended from relatively simple pre-cellular self-replicating RNA molecules able to undergo natural selection. During the course of evolution, this RNA world was replaced by the evolutionary emergence of the DNA world. A world of independently self-replicating RNA genomes apparently no longer exists (RNA viruses are dependent on host cells with DNA genomes). Because the RNA world is apparently gone, it is not clear how scientific evidence could be brought to bear on the question of whether there was a single “origin” of life event from which all life descended. See also The Ancestor's Tale Urmetazoan Bibliography The book is available from The Complete Work of Charles Darwin Online. Retrieved 2015-11-23. Retrieved 2015-11-23. Notes References External links 29+ Evidences for Macroevolution: The Scientific Case for Common Descent from the TalkOrigins Archive. The Tree of Life Web Project Evolutionary biology Descent Most recent common ancestors
5261
https://en.wikipedia.org/wiki/Celtic%20music
Celtic music
Celtic music is a broad grouping of music genres that evolved out of the folk music traditions of the Celtic people of Northwestern Europe (the modern Celtic nations). It refers to both orally transmitted traditional music and recorded music, and the styles vary considerably, encompassing everything from traditional music to a wide range of hybrids. Description and definition Celtic music mainly means two things. First, it is the music of the people who identify themselves as Celts. Secondly, it refers to whatever qualities may be unique to the music of the Celtic nations. Many notable Celtic musicians such as Alan Stivell and Paddy Moloney claim that the different Celtic music genres have a lot in common. The following melodic practices may be used widely across the different variants of Celtic music. It is common for the melodic line to move up and down the primary chords in many Celtic songs, and there are a number of possible reasons for this: melodic variation can be easily introduced (melodic variation is widely used in Celtic music, especially by the pipes and harp); it is easier to anticipate the direction that the melody will take, so that harmony, either composed or improvised, can be introduced, and the clichéd cadences that are essential for impromptu harmony are more easily formed; and the relatively wider tonal intervals in some songs make it possible for stress accents within the poetic line to be more in keeping with the local Celtic accent. A given practice may be found across just one Celtic group, or it may be shared by more than one Celtic-language population belonging to different Celtic groups; such shared usage patterns may simply be remnants of formerly widespread melodic practices. Often, the term Celtic music is applied to the music of Ireland and Scotland because both lands have produced well-known distinctive styles which actually have genuine commonality and clear mutual influences. The definition is further complicated by the fact that Irish independence has allowed Ireland to promote 'Celtic' music as a specifically Irish product. However, these are modern geographical references to a people who share a common Celtic ancestry and, consequently, a common musical heritage. These styles are known because of the importance of Irish and Scottish people in the English-speaking world, especially in the United States, where they had a profound impact on American music, particularly bluegrass and country music. The music of Wales, Cornwall, the Isle of Man, Brittany, Galician traditional music (Spain) and the music of Portugal are also considered Celtic music, the tradition being particularly strong in Brittany, where Celtic festivals large and small take place throughout the year, and in Wales, where the ancient eisteddfod tradition has been revived and flourishes. Additionally, the musics of ethnically Celtic peoples abroad are vibrant, especially in Canada and the United States. In Canada, the provinces of Atlantic Canada are known for being a home of Celtic music, most notably on the islands of Newfoundland, Cape Breton and Prince Edward Island. The traditional music of Atlantic Canada is heavily influenced by the Irish, Scottish and Acadian ethnic makeup of much of the region's communities. In some parts of Atlantic Canada, such as Newfoundland, Celtic music is as popular as, or more popular than, in the old country. Further, some older forms of Celtic music that are rare in Scotland and Ireland today, such as the practice of accompanying a fiddle with a piano, or the Gaelic spinning songs of Cape Breton, remain common in the Maritimes.
Much of the music of this region is Celtic in nature, but originates in the local area and celebrates the sea, seafaring, fishing and other primary industries. Instruments associated with Celtic music include the Celtic harp, uilleann pipes or Great Highland bagpipe, fiddle, tin whistle, flute, bodhrán, bones, concertina, accordion and a recent addition, the Irish bouzouki. Divisions In Celtic Music: A Complete Guide, June Skinner Sawyers acknowledges six Celtic nationalities divided into two groups according to their linguistic heritage. The Q-Celtic nationalities are the Irish, Scottish and Manx peoples, while the P-Celtic groups are the Cornish, Bretons and Welsh peoples. Musician Alan Stivell uses a similar dichotomy, between the Gaelic (Irish/Scottish/Manx) and the Brythonic (Breton/Welsh/Cornish) branches, which he differentiates "mostly by the extended range (sometimes more than two octaves) of Irish and Scottish melodies and the closed range of Breton and Welsh melodies (often reduced to a half-octave), and by the frequent use of the pure pentatonic scale in Gaelic music." There is also tremendous variation between Celtic regions. Ireland, Scotland, Wales, Cornwall, and Brittany have living traditions of language and music, and there has been a recent major revival of interest in Celtic heritage in the Isle of Man. Galicia has a Celtic language revival movement seeking to revive the Q-Celtic Gallaic language used into Roman times, although it is not an attested language, unlike Celtiberian. A Brythonic language may have been spoken in parts of Galicia and Asturias into early medieval times, brought by Britons fleeing the Anglo-Saxon invasions via Brittany, but here again there are several hypotheses and very few traces of it, owing to the lack of archaeological and linguistic evidence and of documents. The Romance language currently spoken in Galicia, Galician (Galego), is closely related to the Portuguese language used mainly in Brazil and Portugal and is in many ways closer to Latin than other Romance languages. Galician music is claimed to be Celtic. The same is true of the music of Asturias, Cantabria, and that of Northern Portugal (some say even traditional music from Central Portugal can be labeled Celtic). Breton artist Alan Stivell was one of the earliest musicians to use the words Celtic and Keltia in his marketing materials, starting in the early 1960s as part of the worldwide folk music revival of that era, with the term quickly catching on with other artists worldwide. Today, the genre is well established and incredibly diverse. Forms There are musical genres and styles specific to each Celtic country, due in part to the influence of individual song traditions and the characteristics of specific languages: Celtic traditional music Music of Ireland Music of Scotland Music of Wales Strathspeys are specific to Highland Scotland, for example, and it has been hypothesized that they mimic the rhythms of the Scottish Gaelic language. Reels Pibroch Cerdd Dant (string music) or Canu Penillion (verse singing) is the art of vocal improvisation over a given melody in Welsh musical tradition. It is an important competition in eisteddfodau. The singer or (small) choir sings a counter melody over a harp melody. Waulking song Puirt à beul Kan ha diskan Sean-nós singing Celtic hip hop Celtic rock Celtic metal Celtic punk Celtic fusion Progressive music Folk music Festivals See list of Celtic festivals for a more complete list of Celtic festivals by country, including music festivals.
Festivals focused largely or partly on Celtic music can be found at :Category:Celtic music festivals. The modern Celtic music scene involves a large number of music festivals, as it has traditionally. Some of the most prominent festivals focused solely on music include: Festival Internacional do Mundo Celta de Ortigueira (Ortigueira, Galicia, Spain) Festival Intercéltico de Avilés (Avilés, Asturies, Spain) Folixa na Primavera (Mieres, Asturies, Spain) Festival Celta Internacional Reino de León, (León, Spain) Festival Internacional de Música Celta de Collado Villalba (Collado Villalba, Spain) Yn Chruinnaght (Isle of Man) Celtic Connections (Glasgow, Scotland) Hebridean Celtic Festival (Stornoway, Scotland) Fleadh ceol na hÉireann (Tullamore, Ireland) Festival Intercéltico de Sendim (Sendim, Portugal) Galaicofolia (Esposende, Portugal) Festival Folk Celta Ponte da Barca (Ponte da Barca, Portugal) Douro Celtic Fest (Vila Nova de Gaia, Portugal) Festival Interceltique de Lorient (Lorient, France) Festival del Kan ar Bobl (Lorient, France) Festival de Cornouaille (Quimper, France) Les Nuits Celtiques du Stade de France (Paris, France) Montelago Celtic Night (Colfiorito, Macerata, Italy) Triskell International Celtic Festival (Trieste, Italy) Festival celtique de Québec or Québec city celtic festival, (Quebec city, Quebec, Canada) Festival Mémoire et Racines (Saint-Charles-Borromée, Quebec, Canada) Celtic Colours (Cape Breton, Nova Scotia, Canada) Paganfest (Tour through Europe) Celtic fusion The oldest musical tradition which fits under the label of Celtic fusion originated in the rural American south in the early colonial period and incorporated English, Scottish, Irish, Welsh, German, and African influences. Variously referred to as roots music, American folk music, or old-time music, this tradition has exerted a strong influence on all forms of American music, including country, blues, and rock and roll. In addition to its lasting effects on other genres, it marked the first modern large-scale mixing of musical traditions from multiple ethnic and religious communities within the Celtic diaspora. In the 1960s several bands put forward modern adaptations of Celtic music pulling influences from several of the Celtic nations at once to create a modern pan-celtic sound. A few of those include bagadoù (Breton pipe bands), Fairport Convention, Pentangle, Steeleye Span and Horslips. In the 1970s Clannad made their mark initially in the folk and traditional scene, and then subsequently went on to bridge the gap between traditional Celtic and pop music in the 1980s and 1990s, incorporating elements from new-age, smooth jazz, and folk rock. Traces of Clannad's legacy can be heard in the music of many artists, including Altan, Anúna, Capercaillie, the Corrs, Dexys Midnight Runners, Enya, Loreena McKennitt, Riverdance, Donna Taggart, and U2. The solo music of Clannad's lead singer, Moya Brennan (often referred to as the First Lady of Celtic Music) has further enhanced this influence. Later, beginning in 1982 with the Pogues' invention of Celtic folk-punk and Stockton's Wing blend of Irish traditional and Pop, Rock and Reggae, there has been a movement to incorporate Celtic influences into other genres of music. Bands like Flogging Molly, Black 47, Dropkick Murphys, the Young Dubliners, the Tossers introduced a hybrid of Celtic rock, punk, reggae, hardcore and other elements in the 1990s that has become popular with Irish-American youth. 
Today there are Celtic-influenced subgenres of virtually every type of popular music, including electronica, rock, metal, punk, hip hop, reggae, new-age, Latin, Andean and pop. Collectively these modern interpretations of Celtic music are sometimes referred to as Celtic fusion. Other modern adaptations Outside of America, the first deliberate attempts to create a "Pan-Celtic music" were made by the Breton Taldir Jaffrennou, who translated songs from Ireland, Scotland, and Wales into Breton between the two world wars. One of his major works was to bring "Hen Wlad Fy Nhadau" (the Welsh national anthem) back to Brittany and create lyrics for it in Breton. Eventually this song became "Bro goz va zadoù" ("Old land of my fathers") and is the most widely accepted Breton anthem. In the 70s, the Breton Alan Cochevelou (the future Alan Stivell) began playing a mixed repertoire from the main Celtic countries on the Celtic harp his father created. Probably the most successful all-inclusive Celtic music composition in recent years is Shaun Davey's The Pilgrim. This suite depicts the journey of St. Colum Cille through the Celtic nations of Ireland, Scotland, the Isle of Man, Wales, Cornwall, Brittany and Galicia. The suite, which includes a Scottish pipe band, Irish and Welsh harpists, Galician gaitas, Irish uilleann pipes, the bombardes of Brittany, two vocal soloists and a narrator, is set against a background of a classical orchestra and a large choir. Modern music may also be termed "Celtic" because it is written and recorded in a Celtic language, regardless of musical style. Many of the Celtic languages have experienced resurgences in recent years, spurred on partly by the action of artists and musicians who have embraced them as hallmarks of identity and distinctness. In 1971, the Irish band Skara Brae recorded its only LP (simply called Skara Brae), all songs in Irish. In 1978 Runrig recorded an album in Scottish Gaelic. In 1992 Capercaillie recorded "A Prince Among Islands", the first Scottish Gaelic language record to reach the UK top 40. In 1996, a song in Breton represented France in the 41st Eurovision Song Contest, the first time in history that France had a song without a word in French. Since about 2005, Oi Polloi (from Scotland) have recorded in Scottish Gaelic. Mill a h-Uile Rud (a Scottish Gaelic punk band from Seattle) recorded in the language in 2004. Several contemporary bands have Welsh language songs, such as Ceredwen, which fuses traditional instruments with trip hop beats, the Super Furry Animals, Fernhill, and so on (see the Music of Wales article for more Welsh and Welsh-language bands). The same phenomenon occurs in Brittany, where many singers record songs in Breton, traditional or modern (hip hop, rap, and so on). See also Folk music of Ireland Music of Brittany Music of Cornwall Galician traditional music Music of the Isle of Man Music of Scotland Music of Wales Music of Portugal Traditional Gaelic music References External links Celtic melody library Free sheet music on CelticScores.com Free sheet music, chords, midis at Vashon Celtic Tunes European music Culture of Ireland
5267
https://en.wikipedia.org/wiki/Constellation
Constellation
A constellation is an area on the celestial sphere in which a group of visible stars forms a perceived pattern or outline, typically representing an animal, mythological subject, or inanimate object. The origins of the earliest constellations likely go back to prehistory. People used them to relate stories of their beliefs, experiences, creation, or mythology. Different cultures and countries invented their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. The recognition of constellations has changed significantly over time. Many changed in size or shape. Some became popular, only to drop into obscurity. Some were limited to a single culture or nation. Naming constellations also helped astronomers and navigators identify stars more easily. Twelve (or thirteen) ancient constellations belong to the zodiac (straddling the ecliptic, which the Sun, Moon, and planets all traverse). The origins of the zodiac remain historically uncertain; its astrological divisions became prominent around 400 BC in Babylonian or Chaldean astronomy. Constellations appear in Western culture via Greece and are mentioned in the works of Hesiod, Eudoxus and Aratus. The traditional 48 constellations, consisting of the Zodiac and 36 more (now 38, following the division of Argo Navis into three constellations), are listed by Ptolemy, a Greco-Roman astronomer from Alexandria, Egypt, in his Almagest. The formation of constellations was the subject of extensive mythology, most notably in the Metamorphoses of the Latin poet Ovid. Constellations in the far southern sky were added from the 15th century until the mid-18th century, when European explorers began traveling to the Southern Hemisphere. Due to Roman and European transmission, each constellation has a Latin name. In 1922, the International Astronomical Union (IAU) formally accepted the modern list of 88 constellations, and in 1928 adopted official constellation boundaries that together cover the entire celestial sphere. Any given point in a celestial coordinate system lies in one of the modern constellations. Some astronomical naming systems include the constellation where a given celestial object is found to convey its approximate location in the sky. The Flamsteed designation of a star, for example, consists of a number and the genitive form of the constellation's name. Other star patterns or groups called asterisms are not constellations under the formal definition, but are also used by observers to navigate the night sky. Asterisms may be several stars within a constellation, or they may share stars with more than one constellation. Examples of asterisms include the Teapot within the constellation Sagittarius and the Big Dipper in the constellation Ursa Major. Terminology The word constellation comes from the Late Latin term cōnstellātiō, which can be translated as "set of stars"; it came into use in Middle English during the 14th century. The Ancient Greek word for constellation is ἄστρον (astron). These terms historically referred to any recognisable pattern of stars whose appearance was associated with mythological characters or creatures, earthbound animals, or objects. Over time, among European astronomers, the constellations became clearly defined and widely recognised. Today, there are 88 IAU-designated constellations. A constellation or star that never sets below the horizon when viewed from a particular latitude on Earth is termed circumpolar.
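The circumpolar condition just described reduces to a simple comparison between a star's declination and the observer's latitude. The sketch below, in Python, illustrates the rule; the star coordinates and observer latitudes are approximate values chosen for illustration rather than data from this article, and atmospheric refraction is ignored.

```python
# A minimal sketch of the circumpolar test: a star never sets for an observer
# when its diurnal circle stays entirely above the horizon. For an observer at
# latitude lat (degrees, positive north), a star with declination dec (degrees)
# is circumpolar if dec >= 90 - lat (northern observers) or dec <= -90 - lat
# (southern observers). Refraction is ignored; coordinates are approximate.

def is_circumpolar(dec_deg: float, lat_deg: float) -> bool:
    """Return True if a star at declination dec_deg never sets at latitude lat_deg."""
    if lat_deg >= 0:
        return dec_deg >= 90.0 - lat_deg
    return dec_deg <= -90.0 - lat_deg

# Dubhe (Ursa Major), dec ~ +61.8 deg, seen from London, lat ~ +51.5 deg
print(is_circumpolar(61.8, 51.5))    # True: Ursa Major never sets there
# Acrux (Crux), dec ~ -63.1 deg, seen from Sydney, lat ~ -33.9 deg
print(is_circumpolar(-63.1, -33.9))  # True: Crux never sets there
# Betelgeuse (Orion), dec ~ +7.4 deg, seen from London
print(is_circumpolar(7.4, 51.5))     # False: Orion rises and sets
```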
From the North Pole or the South Pole, all constellations north or south of the celestial equator, respectively, are circumpolar. Depending on the definition, equatorial constellations may include those that lie between declinations 45° north and 45° south, or those that pass through the declination range of the ecliptic or zodiac ranging between 23½° north, the celestial equator, and 23½° south. Stars in constellations can appear near each other in the sky, but they usually lie at a variety of distances away from the Earth. Since each star has its own independent motion, all constellations will change slowly over time. After tens to hundreds of thousands of years, familiar outlines will become unrecognizable. Astronomers can predict the past or future constellation outlines by measuring individual stars' common proper motions (cpm) with accurate astrometry and their radial velocities with astronomical spectroscopy. Identification The 88 constellations recognized by the International Astronomical Union, as well as those that cultures have recognized throughout history, are imagined figures and shapes derived from the patterns of stars in the observable sky. Many officially recognized constellations are based on the imaginations of ancient Near Eastern and Mediterranean mythologies. H.A. Rey, who wrote popular books on astronomy, pointed out the imaginative nature of the constellations and their mythological and artistic basis, and the practical use of identifying them through definite images, according to the classical names they were given. History of the early constellations Lascaux Caves, southern France It has been suggested that the 17,000-year-old cave paintings in Lascaux, southern France, depict star constellations such as Taurus, Orion's Belt, and the Pleiades. However, this view is not generally accepted among scientists. Mesopotamia Inscribed stones and clay writing tablets from Mesopotamia (in modern Iraq) dating to 3000 BC provide the earliest generally accepted evidence for humankind's identification of constellations. It seems that the bulk of the Mesopotamian constellations were created within a relatively short interval from around 1300 to 1000 BC. Many Mesopotamian constellations later reappeared among the classical Greek constellations. Ancient Near East The oldest Babylonian catalogues of stars and constellations date back to the beginning of the Middle Bronze Age, most notably the Three Stars Each texts and the MUL.APIN, an expanded and revised version based on more accurate observation from around 1000 BC. However, the numerous Sumerian names in these catalogues suggest that they built on older, but otherwise unattested, Sumerian traditions of the Early Bronze Age. The classical Zodiac is a revision of Neo-Babylonian constellations from the 6th century BC. The Greeks adopted the Babylonian constellations in the 4th century BC. Twenty Ptolemaic constellations are from the Ancient Near East. Another ten have the same stars but different names. Biblical scholar E. W. Bullinger interpreted some of the creatures mentioned in the books of Ezekiel and Revelation as the middle signs of the four quarters of the Zodiac, with the Lion as Leo, the Bull as Taurus, the Man representing Aquarius, and the Eagle standing in for Scorpio. The biblical Book of Job also makes reference to a number of constellations, including "bier", "fool" and "heap" (Job 9:9, 38:31–32), rendered as "Arcturus, Orion and Pleiades" by the KJV, though ‘Ayish ("the bier") actually corresponds to Ursa Major.
The term Mazzaroth, translated as a garland of crowns, is a hapax legomenon in Job 38:32, and it might refer to the zodiacal constellations. Classical antiquity There is only limited information on ancient Greek constellations, with some fragmentary evidence being found in the Works and Days of the Greek poet Hesiod, who mentioned the "heavenly bodies". Greek astronomy essentially adopted the older Babylonian system in the Hellenistic era, first introduced to Greece by Eudoxus of Cnidus in the 4th century BC. The original work of Eudoxus is lost, but it survives as a versification by Aratus, dating to the 3rd century BC. The most complete existing works dealing with the mythical origins of the constellations are by the Hellenistic writer termed pseudo-Eratosthenes and an early Roman writer styled pseudo-Hyginus. The basis of Western astronomy as taught during Late Antiquity and until the Early Modern period is the Almagest by Ptolemy, written in the 2nd century. In the Ptolemaic Kingdom, a native Egyptian tradition of anthropomorphic figures represented the planets, stars, and various constellations. Some of these were combined with Greek and Babylonian astronomical systems, culminating in the Zodiac of Dendera; it remains unclear when this occurred, but most were placed during the Roman period, between the 2nd and 4th centuries AD. It is the oldest known depiction of the zodiac showing all the now familiar constellations, along with some original Egyptian constellations, decans, and planets. Ptolemy's Almagest remained the standard definition of constellations in the medieval period both in Europe and in Islamic astronomy. Ancient China Ancient China had a long tradition of observing celestial phenomena. Nonspecific Chinese star names, later categorized in the twenty-eight mansions, have been found on oracle bones from Anyang, dating back to the middle Shang dynasty. These constellations are some of the most important observations of the Chinese sky, attested from the 5th century BC. Parallels to the earliest Babylonian (Sumerian) star catalogues suggest that the ancient Chinese system did not arise independently. Three schools of classical Chinese astronomy in the Han period are attributed to astronomers of the earlier Warring States period. The constellations of the three schools were conflated into a single system by Chen Zhuo, an astronomer of the 3rd century (Three Kingdoms period). Chen Zhuo's work has been lost, but information on his system of constellations survives in Tang period records, notably by Qutan Xida. The oldest extant Chinese star chart dates to that period and was preserved as part of the Dunhuang Manuscripts. Native Chinese astronomy flourished during the Song dynasty, and during the Yuan dynasty became increasingly influenced by medieval Islamic astronomy (see Treatise on Astrology of the Kaiyuan Era). As maps were prepared during this period on more scientific lines, they were considered more reliable. A well-known map from the Song period is the Suzhou Astronomical Chart, which was prepared with carvings of stars on the planisphere of the Chinese sky on a stone plate; it was drawn accurately on the basis of observations, and it shows the supernova of 1054 in Taurus. Influenced by European astronomy during the late Ming dynasty, charts depicted more stars but retained the traditional constellations.
Newly observed stars were incorporated as supplementary to old constellations in the southern sky, which did not depict the traditional stars recorded by ancient Chinese astronomers. Further improvements were made during the later part of the Ming dynasty by Xu Guangqi and the German Jesuit Johann Adam Schall von Bell, and were recorded in the Chongzhen Lishu (Calendrical Treatise of the Chongzhen period, 1628). Traditional Chinese star maps incorporated 23 new constellations with 125 stars of the southern hemisphere of the sky, based on the knowledge of Western star charts; with this improvement, the Chinese sky was integrated with world astronomy. Ancient Greece Many well-known constellations also have histories that connect to ancient Greece. Early modern astronomy Historically, the origins of the constellations of the northern and southern skies are distinctly different. Most northern constellations date to antiquity, with names based mostly on Classical Greek legends. Evidence of these constellations has survived in the form of star charts, whose oldest representation appears on the statue known as the Farnese Atlas, based perhaps on the star catalogue of the Greek astronomer Hipparchus. Southern constellations are more modern inventions, sometimes as substitutes for ancient constellations (e.g. Argo Navis). Some southern constellations had long names that were shortened to more usable forms; e.g. Musca Australis became simply Musca. Some of the early constellations were never universally adopted. Stars were often grouped into constellations differently by different observers, and the arbitrary constellation boundaries often led to confusion as to which constellation a celestial object belonged to. Before astronomers delineated precise boundaries (starting in the 19th century), constellations generally appeared as ill-defined regions of the sky. Today they follow officially accepted lines of right ascension and declination based on those defined by Benjamin Gould for epoch 1875.0 in his star catalogue Uranometria Argentina. The 1603 star atlas "Uranometria" of Johann Bayer assigned stars to individual constellations and formalized the division by assigning a series of Greek and Latin letters to the stars within each constellation. These are known today as Bayer designations. Subsequent star atlases led to the development of today's accepted modern constellations. Origin of the southern constellations The southern sky, below about −65° declination, was only partially catalogued by ancient Babylonians, Egyptians, Greeks, Chinese, and Persian astronomers of the north. The knowledge that northern and southern star patterns differed goes back to Classical writers, who describe, for example, the African circumnavigation expedition commissioned by Egyptian Pharaoh Necho II in c. 600 BC and those of Hanno the Navigator in c. 500 BC. The history of southern constellations is not straightforward. Different groupings and different names were proposed by various observers, some reflecting national traditions or designed to promote various sponsors. Southern constellations were important from the 14th to 16th centuries, when sailors used the stars for celestial navigation. Italian explorers who recorded new southern constellations include Andrea Corsali, Antonio Pigafetta, and Amerigo Vespucci.
Many of the 88 IAU-recognized constellations in this region first appeared on celestial globes developed in the late 16th century by Petrus Plancius, based mainly on observations of the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman. These became widely known through Johann Bayer's star atlas Uranometria of 1603. Fourteen more were created in 1763 by the French astronomer Nicolas Louis de Lacaille, who also split the ancient constellation Argo Navis into three; these new figures appeared in his star catalogue, published in 1756. Several modern proposals have not survived. The French astronomers Pierre Lemonnier and Joseph Lalande, for example, proposed constellations that were once popular but have since been dropped. The northern constellation Quadrans Muralis survived into the 19th century (when its name was attached to the Quadrantid meteor shower), but is now divided between Boötes and Draco. 88 modern constellations A list of 88 constellations was produced for the International Astronomical Union in 1922. It is roughly based on the traditional Greek constellations listed by Ptolemy in his Almagest in the 2nd century and Aratus' work Phenomena, with early modern modifications and additions (most importantly introducing constellations covering the parts of the southern sky unknown to Ptolemy) by Petrus Plancius (1592, 1597/98 and 1613), Johannes Hevelius (1690) and Nicolas Louis de Lacaille (1763), who introduced fourteen new constellations. Lacaille studied the stars of the southern hemisphere from 1751 until 1752 from the Cape of Good Hope, when he was said to have observed more than 10,000 stars using a refracting telescope with an aperture of . In 1922, Henry Norris Russell produced a list of 88 constellations with three-letter abbreviations for them. However, these constellations did not have clear borders between them. In 1928, the International Astronomical Union (IAU) formally accepted 88 modern constellations, with contiguous boundaries along vertical and horizontal lines of right ascension and declination developed by Eugene Delporte that, together, cover the entire celestial sphere; this list was finally published in 1930. Where possible, these modern constellations usually share the names of their Graeco-Roman predecessors, such as Orion, Leo or Scorpius. The aim of this system is area-mapping, i.e. the division of the celestial sphere into contiguous fields. Out of the 88 modern constellations, 36 lie predominantly in the northern sky, and the other 52 predominantly in the southern. The boundaries developed by Delporte used data that originated back to epoch B1875.0, which was when Benjamin A. Gould first made his proposal to designate boundaries for the celestial sphere, a suggestion on which Delporte based his work. The consequence of this early date is that because of the precession of the equinoxes, the borders on a modern star map, such as epoch J2000, are already somewhat skewed and no longer perfectly vertical or horizontal. This effect will increase over the years and centuries to come. Symbols The constellations have no official symbols, though those of the ecliptic may take the signs of the zodiac. Symbols for the other modern constellations, as well as older ones that still occur in modern nomenclature, have occasionally been published. Dark cloud constellations The Great Rift, a series of dark patches in the Milky Way, is more visible and striking in the southern hemisphere than in the northern. 
It vividly stands out when conditions are otherwise so dark that the Milky Way's central region casts shadows on the ground. Some cultures have discerned shapes in these patches and have given names to these "dark cloud constellations". Members of the Inca civilization identified various dark areas or dark nebulae in the Milky Way as animals and associated their appearance with the seasonal rains. Australian Aboriginal astronomy also describes dark cloud constellations, the most famous being the "emu in the sky" whose head is formed by the Coalsack, a dark nebula, instead of the stars. List of dark cloud constellations Great Rift (astronomy) Emu in the sky Cygnus Rift Serpens–Aquila Rift Dark Horse (astronomy) Rho Ophiuchi cloud complex See also Celestial cartography Constellation family Former constellations IAU designated constellations Lists of stars by constellation Constellations listed by Johannes Hevelius Constellations listed by Lacaille Constellations listed by Petrus Plancius Constellations listed by Ptolemy References Further reading Mythology, lore, history, and archaeoastronomy Allen, Richard Hinckley. (1899) Star-Names And Their Meanings, G. E. Stechert, New York, hardcover; reprint 1963 as Star Names: Their Lore and Meaning, Dover Publications, Inc., Mineola, NY, softcover. Olcott, William Tyler. (1911); Star Lore of All Ages, G. P. Putnam's Sons, New York, hardcover; reprint 2004 as Star Lore: Myths, Legends, and Facts, Dover Publications, Inc., Mineola, NY, softcover. Kelley, David H. and Milone, Eugene F. (2004) Exploring Ancient Skies: An Encyclopedic Survey of Archaeoastronomy, Springer, hardcover. Ridpath, Ian. (2018) Star Tales 2nd ed., Lutterworth Press, softcover. Staal, Julius D. W. (1988) The New Patterns in the Sky: Myths and Legends of the Stars, McDonald & Woodward Publishing Co., hardcover, softcover. Atlases and celestial maps General and nonspecialized – entire celestial heavens Becvar, Antonin. Atlas Coeli. Published as Atlas of the Heavens, Sky Publishing Corporation, Cambridge, MA, with coordinate grid transparency overlay. Norton, Arthur Philip. (1910) Norton's Star Atlas, 20th Edition 2003 as Norton's Star Atlas and Reference Handbook, edited by Ridpath, Ian, Pi Press, , hardcover. National Geographic Society. (1957, 1970, 2001, 2007) The Heavens (1970), Cartographic Division of the National Geographic Society (NGS), Washington, DC, two-sided large map chart depicting the constellations of the heavens; as a special supplement to the August 1970 issue of National Geographic. Forerunner map as A Map of The Heavens, as a special supplement to the December 1957 issue. Current version 2001 (Tirion), with 2007 reprint. Sinnott, Roger W. and Perryman, Michael A.C. (1997) Millennium Star Atlas, Epoch 2000.0, Sky Publishing Corporation, Cambridge, MA, and European Space Agency (ESA), ESTEC, Noordwijk, The Netherlands. Subtitle: "An All-Sky Atlas Comprising One Million Stars to Visual Magnitude Eleven from the Hipparcos and Tycho Catalogues and Ten Thousand Nonstellar Objects". 3 volumes, hardcover, . Vol. 1, 0–8 Hours (Right Ascension), hardcover; Vol. 2, 8–16 Hours, hardcover; Vol. 3, 16–24 Hours, hardcover. Softcover version available. Supplemental separate purchasable coordinate grid transparent overlays. Tirion, Wil; et al. (1987) Uranometria 2000.0, Willmann-Bell, Inc., Richmond, VA, 3 volumes, hardcover. Vol. 1 (1987): "The Northern Hemisphere to −6°", by Wil Tirion, Barry Rappaport, and George Lovi, hardcover, printed boards. Vol. 
2 (1988): "The Southern Hemisphere to +6°", by Wil Tirion, Barry Rappaport and George Lovi, hardcover, printed boards. Vol. 3 (1993) as a separate added work: The Deep Sky Field Guide to Uranometria 2000.0, by Murray Cragin, James Lucyk, and Barry Rappaport, hardcover, printed boards. 2nd Edition 2001 as collective set of 3 volumes – Vol. 1: Uranometria 2000.0 Deep Sky Atlas, by Wil Tirion, Barry Rappaport, and Will Remaklus, hardcover, printed boards; Vol. 2: Uranometria 2000.0 Deep Sky Atlas, by Wil Tirion, Barry Rappaport, and Will Remaklus, hardcover, printed boards; Vol. 3: Uranometria 2000.0 Deep Sky Field Guide by Murray Cragin and Emil Bonanno, , hardcover, printed boards. Tirion, Wil and Sinnott, Roger W. (1998) Sky Atlas 2000.0, various editions. 2nd Deluxe Edition, Cambridge University Press, Cambridge, England. Northern celestial hemisphere and north circumpolar region Becvar, Antonin. (1962) Atlas Borealis 1950.0, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Praha, Czechoslovakia, 1st Edition, elephant folio hardcover, with small transparency overlay coordinate grid square and separate paper magnitude legend ruler. 2nd Edition 1972 and 1978 reprint, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Prague, Czechoslovakia, and Sky Publishing Corporation, Cambridge, MA, oversize folio softcover spiral-bound, with transparency overlay coordinate grid ruler. Equatorial, ecliptic, and zodiacal celestial sky Becvar, Antonin. (1958) Atlas Eclipticalis 1950.0, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Praha, Czechoslovakia, 1st Edition, elephant folio hardcover, with small transparency overlay coordinate grid square and separate paper magnitude legend ruler. 2nd Edition 1974, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Prague, Czechoslovakia, and Sky Publishing Corporation, Cambridge, MA, oversize folio softcover spiral-bound, with transparency overlay coordinate grid ruler. Southern celestial hemisphere and south circumpolar region Becvar, Antonin. Atlas Australis 1950.0, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Praha, Czechoslovakia, 1st Edition, hardcover, with small transparency overlay coordinate grid square and separate paper magnitude legend ruler. 2nd Edition, Czechoslovak Academy of Sciences (Ceskoslovenske Akademie Ved), Prague, Czechoslovakia, and Sky Publishing Corporation, Cambridge, MA, oversize folio softcover spiral-bound, with transparency overlay coordinate grid ruler. Catalogs Becvar, Antonin. (1959) Atlas Coeli II Katalog 1950.0, Praha, 1960 Prague. Published 1964 as Atlas of the Heavens – II Catalogue 1950.0, Sky Publishing Corporation, Cambridge, MA Hirshfeld, Alan and Sinnott, Roger W. (1982) Sky Catalogue 2000.0, Cambridge University Press and Sky Publishing Corporation, 1st Edition, 2 volumes. both vols., and vol. 1. "Volume 1: Stars to Magnitude 8.0", (Cambridge) and hardcover, softcover. Vol. 2 (1985) – "Volume 2: Double Stars, Variable Stars, and Nonstellar Objects", (Cambridge) hardcover, (Cambridge) softcover. 2nd Edition (1991) with additional third author François Ochsenbein, 2 volumes, . Vol. 1: (Cambridge) hardcover; (Cambridge) softcover . Vol. 2 (1999): (Cambridge) softcover and 0-933346-38-7 softcover – reprint of 1985 edition. Yale University Observatory. (1908, et al.) Catalogue of Bright Stars, New Haven, CN. Referred to commonly as "Bright Star Catalogue". 
Various editions with various authors historically, the longest-term revising author being (Ellen) Dorrit Hoffleit. 1st Edition 1908. 2nd Edition 1940 by Frank Schlesinger and Louise F. Jenkins. 3rd Edition (1964), 4th Edition, 5th Edition (1991), and 6th Edition (pending posthumous) by Hoffleit.

External links
IAU: The Constellations, including high quality maps. Atlascoelestis, by Felice Stoppa. Celestia free 3D realtime space-simulation (OpenGL) Stellarium realtime sky rendering program (OpenGL) Strasbourg Astronomical Data Center Files on official IAU constellation boundaries Studies of Occidental Constellations and Star Names to the Classical Period: An Annotated Bibliography Table of Constellations Online Text: Hyginus, Astronomica translated by Mary Grant Greco-Roman constellation myths Neave Planetarium Adobe Flash interactive web browser planetarium and stardome with realistic movement of stars and the planets. Audio – Cain/Gay (2009) Astronomy Cast Constellations The Greek Star-Map short essay by Gavin White Bucur D. The network signature of constellation line figures. PLOS ONE 17(7): e0272270 (2022). A comparative analysis on the structure of constellation line figures across 56 sky cultures. Constellations Celestial cartography Concepts in astronomy
5269
https://en.wikipedia.org/wiki/Character
Character
Character or Characters may refer to: Arts, entertainment, and media Literature Character (novel), a 1936 Dutch novel by Ferdinand Bordewijk Characters (Theophrastus), a classical Greek set of character sketches attributed to Theophrastus Music Characters (John Abercrombie album), 1977 Character (Dark Tranquillity album), 2005 Character (Julia Kent album), 2013 Character (Rachael Sage album), 2020 Characters (Stevie Wonder album), 1987 Types of entity Character (arts), an agent within a work of art, including literature, drama, cinema, opera, etc. Character sketch or character, a literary description of a character type Game character (disambiguation), various types of characters in a video game or role playing game Player character, as above but who is controlled or whose actions are directly chosen by a player Non-player character, as above but not player-controlled, frequently abbreviated as NPC Other uses in arts, entertainment, and media Character (film), a 1997 Dutch film based on Bordewijk's novel Charaktery, a monthly magazine in Poland Netflix Presents: The Characters, an improvised sketch comedy show on Netflix Mathematics Character (mathematics), a homomorphism from a group to a field Characterization (mathematics), the logical equivalency between objects of two different domains. Character theory, the mathematical theory of special kinds of characters associated to group representations Dirichlet character, a type of character in number theory Multiplicative character, a homomorphism from a group to the multiplicative subgroup of a field Morality and social science Character education, a US term for values education Character structure, a person's traits Moral character, an evaluation of a particular individual's durable moral qualities Symbols Character (symbol), a sign or symbol Character (computing), a unit of information roughly corresponding to a grapheme Other uses Character (biology), the abstraction of an observable physical or biochemical trait of an organism Character (income tax), a type of income for tax purposes in the US Sacramental character, a Catholic teaching Neighbourhood character, the look and feel of a built environment See also (character) (disambiguation) Virtual character (disambiguation)
5270
https://en.wikipedia.org/wiki/Car%20%28disambiguation%29
Car (disambiguation)
A car is a wheeled motor vehicle used for transporting passengers. Car(s), CAR(s), or The Car(s) may also refer to: Computing C.a.R. (Z.u.L.), geometry software CAR and CDR, commands in LISP computer programming Clock with Adaptive Replacement, a page replacement algorithm Computer-assisted reporting Computer-assisted reviewing Economics Capital adequacy ratio, a ratio of a bank's capital to its risk Cost accrual ratio, an accounting formula Cumulative abnormal return Cumulative average return, a financial concept related to the time value of money Film and television Cars (franchise), a Disney/Pixar film series Cars (film), a 2006 computer-animated film from Disney and Pixar The Car (1977 film), an American horror film Car, a BBC Two television ident first aired in 1993 (see BBC Two '1991–2001' idents) The Car (1997 film), a Malayalam film "The Car" (The Assistants episode) Literature Car (magazine), a British auto-enthusiast publication The Car (novel), a novel by Gary Paulsen Military Canadian Airborne Regiment, a Canadian Forces formation Colt Automatic Rifle, a 5.56mm NATO firearm Combat Action Ribbon, a United States military decoration U.S. Army Combat Arms Regimental System, a 1950s reorganisation of the regiments of the US Army Conflict Armament Research, a UK-based investigative organization that tracks the supply of armaments into conflict-affected areas Music The Cars, an American band Albums Peter Gabriel (1977 album) or Car The Cars (album), a 1978 album by The Cars Cars (soundtrack), the soundtrack to the 2006 film Cars (Now, Now Every Children album), 2009 Cars, a 2011 album by Kris Delmhorst C.A.R. (album), a 2012 album by Serengeti The Car (album), a 2022 album by Arctic Monkeys Songs "The Car" (song), a song by Jeff Carson "Cars" (song), a 1979 single by Gary Numan "Car", a 1994 song by Built to Spill from There's Nothing Wrong with Love Paintings Cars (painting), a series of paintings by Andy Warhol The Car (Brack), a 1955 painting by John Brack People Car (surname) Cars (surname) Places Car, Azerbaijan, a village Čar, a village in Serbia Cars, Gironde, France, a commune Les Cars, Haute-Vienne, France, a commune Central African Republic Central Asian Republics Cordillera Administrative Region, Philippines County Carlow, Ireland, Chapman code Science Canonical anticommutation relation Carina (constellation) Chimeric antigen receptor, artificial T cell receptors Coherent anti-Stokes Raman spectroscopy Constitutive androstane receptor Cortisol awakening response, on waking from sleep Coxsackievirus and adenovirus receptor, a protein Sports Carolina Hurricanes, a National Hockey League team Carolina Panthers, a National Football League team Club Always Ready, a Bolivian football club from La Paz Rugby Africa, formerly known as Confederation of African Rugby Transportation Railroad car Canada Atlantic Railway, 1879–1914 Canadian Atlantic Railway, 1986–1994 Carlisle railway station's station code Car, the cab of an elevator Car, a tram, streetcar, or trolley car Other uses Car Car (Greek myth), one or two figures in Greek mythology Car language, an Austroasiatic language of the Nicobar Islands in the eastern Indian Ocean car, ISO 639-2 and ISO 639-3 codes of the Carib language, spoken by the Kalina people of South America Cars (video game), a 2006 video game based on the film Chimeric antigen receptor, a type of protein engineered to give T cells the ability to target a specific protein CAR Canadian Aviation Regulations Avis Budget Group (Nasdaq: CAR) Central apparatus room, 
an equipment room found at broadcasting facilities Children of the American Revolution, a genealogical society or Action Committee for Renewal, a political party of Togo Council for Aboriginal Reconciliation, body founded by the Australian Government in 1991 as part of its Reconciliation in Australia policy Council for Aboriginal Rights (1951–1980s), Victoria, Australia Criminal Appeal Reports, law reports in the United Kingdom See also Carr (disambiguation) CARS (disambiguation) Le Car (disambiguation) iCar
5272
https://en.wikipedia.org/wiki/Printer%20%28computing%29
Printer (computing)
In computing, a printer is a peripheral machine that makes a persistent representation of graphics or text, usually on paper. While most output is human-readable, bar code printers are an example of an expanded use for printers. Different types of printers include 3D printers, inkjet printers, laser printers, and thermal printers.

History
The first computer printer was a mechanically driven apparatus designed by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000. The first patented printing mechanism for applying a marking medium to a recording medium (more particularly, an electrostatic inking apparatus and a method for electrostatically depositing ink on controlled areas of a receiving medium) was developed in 1962 by C. R. Winston of Teletype Corporation, using continuous inkjet printing. The ink was a red stamp-pad ink manufactured by Phillips Process Company of Rochester, NY under the name Clear Print. This patent (US3060429) led to the Teletype Inktronic Printer product delivered to customers in late 1966. The first compact, lightweight digital printer was the EP-101, invented by the Japanese company Epson and released in 1968, according to Epson. The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. In the 1980s there were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot-matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints. The introduction of the low-cost laser printer in 1984, with the first HP LaserJet, and the addition of PostScript in the Apple LaserWriter the following year set off a revolution in printing known as desktop publishing. Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were being created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in terms of flexibility, but produced somewhat lower-quality output (depending on the paper) from much less-expensive mechanisms. Inkjet systems rapidly displaced dot-matrix and daisy-wheel printers from the market. By the 2000s, high-quality printers of this sort had fallen below the $100 price point and become commonplace. The rapid improvement of internet email through the 1990s and into the 2000s has largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today. Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. As of the 2020s, 3D printing has become a widespread hobby due to the abundance of cheap 3D printer kits, with the most common process being fused deposition modeling.

Types
Personal printers are mainly designed to support individual users, and may be connected to only a single computer.
These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. They are generally slow devices, ranging from 6 to around 25 pages per minute (ppm), and the cost per page is relatively high; however, this is offset by the on-demand convenience. Some printers can print documents stored on memory cards or from digital cameras and scanners.
Networked or shared printers are "designed for high-volume, high-speed printing". They are usually shared by many users on a network and can print at speeds of 45 to around 100 ppm. The Xerox 9700 could achieve 120 ppm.
An ID card printer is used for printing plastic ID cards. These can now be customised with important features such as holographic overlays, HoloKotes and watermarks. Such a printer is either a direct-to-card printer (the more feasible option) or a retransfer printer.
A virtual printer is a piece of computer software whose user interface and API resemble those of a printer driver, but which is not connected with a physical computer printer. A virtual printer can be used to create a file which is an image of the data which would be printed, for archival purposes or as input to another program, for example to create a PDF or to transmit to another system or user.
A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are commonly used to label cartons before shipment, or to label retail items with UPCs or EANs.
A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material (including plastics, metals, food, cement, wood, and other materials) are laid down under computer control. It is called a printer by analogy with an inkjet printer, which produces a two-dimensional document by a similar process of depositing a layer of ink on paper.

ID Card printers
A card printer is an electronic desktop printer, with single-card feeders, that prints and personalizes plastic cards. In this respect card printers differ from, for example, label printers, which have a continuous supply feed. Card dimensions are usually 85.60 × 53.98 mm, standardized under ISO/IEC 7810 as ID-1. This format is also used in EC-cards, telephone cards, credit cards, driver's licenses and health insurance cards. It is commonly known as the bank card format. Card printers are controlled by corresponding printer drivers or by means of a specific programming language. Generally card printers are designed with laminating, striping, and punching functions, and use desktop or web-based software. The hardware features of a card printer differentiate it from more traditional printers, as ID cards are usually made of PVC plastic and require laminating and punching. Different card printers can accept different card thicknesses and dimensions. The principle is the same for practically all card printers: the plastic card is passed through a thermal print head at the same time as a color ribbon. The color from the ribbon is transferred onto the card through the heat given out from the print head. The standard performance for card printing is 300 dpi (300 dots per inch, equivalent to 11.8 dots per mm). There are different printing processes, which vary in their detail:
Thermal transfer
Mainly used to personalize pre-printed plastic cards in monochrome.
The color is "transferred" from the (monochrome) color ribbon onto the card.
Dye sublimation
This process uses four panels of color according to the CMYK color ribbon. The card to be printed passes under the print head several times, each time with the corresponding ribbon panel. Each color in turn is diffused (sublimated) directly onto the card. Thus it is possible to produce a high depth of color (up to 16 million shades) on the card. Afterwards a transparent overlay (O), also known as a topcoat (T), is placed over the card to protect it from mechanical wear and tear and to render the printed image UV resistant.
Reverse image technology
This is the standard for high-security card applications that use contact and contactless smart chip cards. The technology prints images onto the underside of a special film that fuses to the surface of a card through heat and pressure. Since this process transfers dyes and resins directly onto a smooth, flexible film, the print-head never comes in contact with the card surface itself. As such, card surface interruptions such as smart chips, ridges caused by internal RFID antennae and debris do not affect print quality. Even printing over the edge is possible.
Thermal rewrite print process
In contrast to the majority of other card printers, in the thermal rewrite process the card is not personalized through the use of a color ribbon, but by activating a thermally sensitive foil within the card itself. These cards can be repeatedly personalized, erased and rewritten. The most frequent use of these is in chip-based student identity cards, whose validity changes every semester.
Common printing problems: Many printing problems are caused by physical defects in the card material itself, such as deformation or warping of the card that is fed into the machine in the first place. Printing irregularities can also result from chip or antenna embedding that alters the thickness of the plastic and interferes with the printer's effectiveness. Other issues are often caused by operator errors, such as users attempting to feed non-compatible cards into the card printer, while other printing defects may result from environmental abnormalities such as dirt or contaminants on the card or in the printer. Reverse transfer printers are less vulnerable to common printing problems than direct-to-card printers, since with these printers the card does not come into direct contact with the printhead.
Variations in card printers: Broadly speaking, there are three main types of card printers, differing mainly by the method used to print onto the card. They are:
Near to Edge. This term designates the cheapest type of printing by card printers. These printers print up to 5 mm from the edge of the card stock.
Direct to Card, also known as "Edge to Edge Printing". The print-head comes in direct contact with the card. This printing type is the most popular nowadays, mostly due to cost factors. The majority of identification card printers today are of this type.
Reverse Transfer, also known as "High Definition Printing" or "Over the Edge Printing". The print-head prints to a transfer film backwards (hence the reverse) and then the printed film is rolled onto the card with intense heat (hence the transfer). The term "over the edge" is due to the fact that when the printer prints onto the film it has a "bleed", and when the film is rolled onto the card the bleed extends completely over the edge of the card, leaving no border.
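As a rough check on the figures quoted above (the ID-1 card size of 85.60 × 53.98 mm and the typical 300 dpi print resolution), the short sketch below converts them into the pixel raster a card printer effectively works with. The helper function, the rounding, and the variable names are illustrative assumptions, not part of any vendor's specification.

```python
# Rasterize an ISO/IEC 7810 ID-1 card (85.60 mm x 53.98 mm) at a typical
# 300 dpi card-printer resolution. Purely illustrative arithmetic.

MM_PER_INCH = 25.4

def dots(length_mm: float, dpi: int = 300) -> int:
    """Approximate number of printable dots along a dimension of the card."""
    return round(length_mm * dpi / MM_PER_INCH)

card_w_mm, card_h_mm = 85.60, 53.98   # ID-1 card dimensions
dpi = 300

print(f"{dpi} dpi is about {dpi / MM_PER_INCH:.1f} dots per mm")
print(f"Card raster: {dots(card_w_mm, dpi)} x {dots(card_h_mm, dpi)} dots")
# Prints roughly: 11.8 dots per mm, and a 1011 x 638 dot raster
```

In these terms, the difference between near-to-edge, edge-to-edge and over-the-edge printing is essentially how much of that raster (plus any bleed) is allowed to reach the physical edge of the card.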
Different ID card printers use different encoding techniques to suit disparate business environments and to support security initiatives. Known encoding techniques are:
Contact smart card – Contact smart cards require direct contact with a conductive plate to register admission or transfer information; commands, data, and card status pass between the two physical contact points.
Contactless smart card – Contactless smart cards contain an integrated circuit that can store and process data while communicating with the terminal via radio frequency. Unlike contact smart cards, contactless cards feature an intelligent, rewritable microchip that can be written to through radio waves.
HID proximity – HID's proximity technology allows fast, accurate reading while offering card or key tag read ranges from 4 to 24 inches (10 cm to 60.96 cm), depending on the type of proximity reader being used. Since these cards and key tags do not require physical contact with the reader, they are virtually maintenance- and wear-free.
ISO magnetic stripe – A magnetic stripe card is a type of card capable of storing data by modifying the magnetism of tiny iron-based magnetic particles on a band of magnetic material on the card. The magnetic stripe, sometimes called a swipe card or magstripe, is read by physical contact, swiping past a magnetic reading head.

Software
There are basically two categories of card printer software: desktop-based and web-based (online). The biggest difference between the two is whether or not a customer has a printer on their network that is capable of printing identification cards. If a business already owns an ID card printer, then a desktop-based badge maker is probably suitable for its needs. Typically, large organizations with high employee turnover will have their own printer. A desktop-based badge maker is also required if a company needs its IDs made instantly; an example is a private construction site with restricted access. However, if a company does not already have a local (or network) printer with the features it needs, then the web-based option is perhaps a more affordable solution. The web-based solution is good for small businesses that do not anticipate a lot of rapid growth, or organizations that either cannot afford a card printer or do not have the resources to learn how to set up and use one. Generally speaking, desktop-based solutions involve software and a database (or spreadsheet) and can be installed on a single computer or network.

Other options
Alongside the basic function of printing cards, card printers can also read and encode magnetic stripes as well as contact and contact-free RFID chip cards (smart cards). Thus card printers enable the encoding of plastic cards both visually and logically. Plastic cards can also be laminated after printing, which achieves a considerable increase in durability and a greater degree of counterfeit prevention. Some card printers come with an option to print both sides at the same time, which cuts down printing time and reduces the margin of error. In such printers, one side of the ID card is printed, then the card is flipped in the flip station and the other side is printed.

Applications
Alongside the traditional uses in time attendance and access control (in particular with photo personalization), countless other applications have been found for plastic cards, e.g.
for personalized customer and members' cards, for sports ticketing, and in local public transport systems for the production of season tickets, as well as for the production of school and college identity cards and national ID cards.

Technology
The choice of print technology has a great effect on the cost of the printer and the cost of operation, the speed, quality and permanence of documents, and noise. Some printer technologies do not work with certain types of physical media, such as carbon paper or transparencies. A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface. Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected. The machine-readable lower portion of a cheque must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly.

Modern print technology
The following printing technologies are routinely found in modern printers:
Toner-based printers
A laser printer rapidly produces high quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process, but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor. Another toner-based printer is the LED printer, which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum.
Liquid inkjet printers
Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any sized page. They are the most common type of computer printer used by consumers.
Solid ink printers
Solid ink printers, also known as phase-change ink or hot-melt ink printers, are a type of thermal transfer printer, graphics sheet printer or 3D printer. They use solid sticks, crayons, pearls or granular ink materials. Common inks are CMYK-colored inks, similar in consistency to candle wax, which are melted and fed into a piezo-crystal-operated print head. A thermal transfer printhead jets the liquid ink onto a rotating, oil-coated drum. The paper then passes over the print drum, at which time the image is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly used as color office printers and are excellent at printing on transparencies and other non-porous media. Solid ink, also called phase-change or hot-melt ink, was first used by Data Products and Howtek, Inc., in 1984. Solid ink printers can produce excellent results with text and images. Some solid ink printers have evolved to print 3D models; for example, Visual Impact Corporation of Windham, NH was started by retired Howtek employee Richard Helinski, whose 3D patents US4721635 and US5136515 were licensed to Sanders Prototype, Inc., later named Solidscape, Inc. Acquisition and operating costs are similar to those of laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state.
Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. This type of thermal transfer printer is only available from one manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tektronix sold the printing business to Xerox in 2001. Dye-sublimation printers A dye-sublimation printer (or dye-sub printer) is a printer that employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper, or canvas. The process is usually to lay one color at a time using a ribbon that has color panels. Dye-sub printers are intended primarily for high-quality color applications, including color photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers. Thermal printers Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colors can be achieved with special papers and different temperatures and heating rates for different colors; these colored sheets are not required in black-and-white output. One example is Zink (a portmanteau of "zero ink"). Obsolete and special-purpose printing technologies The following technologies are either obsolete, or limited to special applications though most were, at one time, in widespread use. Impact printers Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403 for example). All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions either in the same character position or slightly offset. Impact printers varieties include typewriter-derived printers, teletypewriter-derived printers, daisywheel printers, dot matrix printers, and line printers. Dot-matrix printers remain in common use in businesses where multi-part forms are printed. An overview of impact printing contains a detailed description of many of the technologies used. Typewriter-derived printers Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second. 
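To put character-per-second figures like these in perspective, the sketch below converts a character rate into an approximate page rate, using the Selectric speed above together with the teleprinter and daisy-wheel speeds quoted in the following paragraphs. The assumed 2,000 printable characters per page is our own ballpark for a dense page of monospaced text, not a figure from this article.

```python
# Convert a printer's character rate (cps) into an approximate page rate.
# chars_per_page is an assumption (roughly a dense page of monospaced text);
# real throughput also depends on carriage returns, line feeds and blank
# space, which are ignored here.

def pages_per_minute(chars_per_second: float, chars_per_page: int = 2000) -> float:
    return chars_per_second * 60 / chars_per_page

for name, cps in [("Teleprinter", 10.0),
                  ("Selectric-based printer", 15.5),
                  ("Daisy wheel (letter quality)", 30.0)]:
    ppm = pages_per_minute(cps)
    print(f"{name:30s} {cps:4.1f} cps ~ {ppm:.2f} pages/min "
          f"({1 / ppm:.1f} min per page)")
```

Even the fastest of these works out to well under one page per minute, which helps explain why the much faster line printers described below were preferred for bulk output.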
Teletypewriter-derived printers The common teleprinter could easily be interfaced with the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS. Daisy wheel printers Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce text which was as clear and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per second. Dot-matrix printers The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page. The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however the text is generally of poorer quality than impact printers that use letterforms (type). Dot-matrix printers can be broadly divided into two major classes: Ballistic wire printers Stored energy printers Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head. In the 1970s and '80s, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers normally had either 9 or 24 pins on the print head (early 7 pin printers also existed, which did not print descenders). There was a period during the early home computer era when a range of printers were manufactured under many brands such as the Commodore VIC-1525 using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronized longitudinally ribbed platen behind the paper that rotated rapidly with a rib moving vertically seven dot spacings in the time it took to print one pixel column. 24-pin print heads were able to print at a higher quality and started to offer additional type styles and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use. Some dot matrix printers, such as the NEC P6300, can be upgraded to print in color. This is achieved through the use of a four-color ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Color graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. 
As a result, color graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long at high resolution mode. Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash registers, or in demanding, very high volume applications like invoice printing. Impact printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of two or more forms to print multi-part documents such as sales invoices and credit card receipts using continuous stationery with carbonless copy paper. It also has security advantages as ink impressed into a paper matrix by force is harder to erase invisibly. Dot-matrix printers were being superseded even as receipt printers after the end of the twentieth century. Line printers Line printers print an entire line of text at a time. Four principal designs exist. Drum printers, where a horizontally mounted rotating drum carries the entire character set of the printer repeated in each printable character position. The IBM 1132 printer is an example of a drum printer. Drum printers are also found in adding machines and other numeric printers (POS), the dimensions are compact as only a dozen characters need to be supported. Chain or train printers, where the character set is arranged multiple times around a linked chain or a set of character slugs in a track traveling horizontally past the print line. The IBM 1403 is perhaps the most popular and comes in both chain and train varieties. The band printer is a later variant where the characters are embossed on a flexible steel band. The LP27 from Digital Equipment Corporation is a band printer. Bar printers, where the character set is attached to a solid bar that moves horizontally along the print line, such as the IBM 1443. A fourth design, used mainly on very early printers such as the IBM 402, features independent type bars, one for each printable position. Each bar contains the character set to be printed. The bars move vertically to position the character to be printed in front of the print hammer. In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon which then presses against the character form and the impression of the character form is printed onto the paper. Each system could have slight timing issues, which could cause minor misalignment of the resulting printed characters. For drum or typebar printers, this appeared as vertical misalignment, with characters being printed slightly above or below the rest of the line. In chain or bar printers, the misalignment was horizontal, with printed characters being crowded closer together or farther apart. This was much less noticeable to human vision than vertical misalignment, where characters seemed to bounce up and down in the line, so they were considered as higher quality print. Comb printers, also called line matrix printers, represent the fifth major design. These printers are a hybrid of dot matrix printing and line printing. In these printers, a comb of hammers prints a portion of a row of pixels at one time, such as every eighth pixel. By shifting the comb back and forth slightly, the entire pixel row can be printed, continuing the example, in just eight cycles. The paper then advances, and the next pixel row is printed. 
Because far less motion is involved than in a conventional dot matrix printer, these printers are very fast compared to dot matrix printers and are competitive in speed with formed-character line printers while also being able to print dot matrix graphics. The Printronix P7000 series of line matrix printers are still manufactured as of 2013. Line printers are the fastest of all impact printers and are used for bulk printing in large computer centres. A line printer can print at 1100 lines per minute or faster, frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operate with tight tolerances and require regular preventive maintenance (PM) to produce a top quality print. They are virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers. Liquid ink electrostatic printers Liquid ink electrostatic printers use a chemical coated paper, which is charged by the print head according to the image of the document. The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying. Color reproduction is very accurate, and because there is no heating the scale distortion is less than ±0.1%. (All laser printers have an accuracy of ±1%.) Worldwide, most survey offices used this printer before color inkjet plotters become popular. Liquid ink electrostatic printers were mostly available in width and also 6 color printing. These were also used to print large billboards. It was first introduced by Versatec, which was later bought by Xerox. 3M also used to make these printers. Plotters Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se) and special purpose pens that are mechanically run over the paper to create text and images. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology. Some plotters used roll-fed paper, and therefore had a minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings. Other printers A number of other sorts of printers are important for historical reasons, or for special purpose uses. Digital minilab (photographic paper) Electrolytic printers Spark printer Barcode printer multiple technologies, including: thermal printing, inkjet printing, and laser printing barcodes Billboard / sign paint spray printers Laser etching (product packaging) industrial printers Microsphere (special paper) Attributes Connectivity Printers can be connected to computers in many ways: directly by a dedicated data cable such as the USB, through a short-range radio like Bluetooth, a local area network using cables (such as the Ethernet) or radio (such as WiFi), or on a standalone basis without a computer, using a memory card or other portable data storage device. Printer control languages Most printers other than line printers accept control characters or unique character sequences to control various printer functions. 
These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers, to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers. Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer-proprietary PDLs such as ESC/P. The diversity in mobile platforms has led to various standardization efforts around device PDLs, such as the Printer Working Group's (PWG) PWG Raster.

Printing speed
The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool, and are not as well standardised as toner yields. Usually pages per minute refers to sparse monochrome office documents, rather than dense pictures, which usually print much more slowly, especially color images. Speeds in ppm usually apply to A4 paper in most countries in the world, and to letter paper size, about 6% shorter, in North America.

Printing mode
The data received by a printer may be:
A string of characters
A bitmapped image
A vector image
A computer program written in a page description language, such as PCL or PostScript
Some printers can process all four types of data, others not. Character printers, such as daisy wheel printers, can handle only plain text data or rather simple point plots. Pen plotters typically process vector images. Inkjet-based plotters can adequately reproduce all four. Modern printing technology, such as laser printers and inkjet printers, can adequately reproduce all four. This is especially true of printers equipped with support for PCL or PostScript, which includes the vast majority of printers produced today. Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it.

Monochrome, color and photo printers
A monochrome printer can only produce monochrome images, with only shades of a single color. Most such printers can produce only two colors, black (ink) and white (no ink). With half-toning techniques, however, such a printer can produce acceptable grey-scale images too. A color printer can produce images of multiple colors. A photo printer is a color printer that can produce images that mimic the color range (gamut) and resolution of prints made from photographic film.

Page yield
The page yield is the number of pages that can be printed from a toner cartridge or ink cartridge before the cartridge needs to be refilled or replaced. The actual number of pages yielded by a specific cartridge depends on a number of factors. For a fair comparison, many laser printer manufacturers use the ISO/IEC 19752 process to measure the toner cartridge yield.
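Page yield feeds directly into the cost-per-page comparison discussed in the next section: dividing the cartridge price by its rated yield (and adding the paper cost) gives a rough consumables cost per page. The sketch below shows that back-of-the-envelope calculation; the prices, yields and paper cost are made-up example numbers, not measured ISO/IEC 19752 figures.

```python
# Back-of-the-envelope cost-per-page (CPP) estimate from cartridge price and
# rated page yield. All prices, yields and the paper cost are illustrative
# assumptions, not vendor or ISO/IEC 19752 data.

def cost_per_page(cartridge_price: float, page_yield: int,
                  paper_cost_per_sheet: float = 0.01) -> float:
    """Consumables cost per printed page, in the same currency as the inputs."""
    return cartridge_price / page_yield + paper_cost_per_sheet

examples = {
    "Inkjet cartridge ($25, 300-page yield)": cost_per_page(25.0, 300),
    "Laser toner ($80, 2,700-page yield)":    cost_per_page(80.0, 2700),
}
for label, cpp in examples.items():
    print(f"{label}: ${cpp:.3f} per page")
```

On numbers like these, the cheaper-to-buy inkjet costs roughly twice as much per page as the laser printer, which is the trade-off behind the "cheap printer – expensive ink" versus "expensive printer – cheap ink" split described below.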
Economics In order to fairly compare operating expenses of printers with a relatively small ink cartridge to printers with a larger, more expensive toner cartridge that typically holds more toner and so prints more pages before the cartridge needs to be replaced, many people prefer to estimate operating expenses in terms of cost per page (CPP). Retailers often apply the "razor and blades" model: a company may sell a printer at cost and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it. Other manufacturers, in reaction to the challenges from using this business model, choose to make more money on printers and less on ink, promoting the latter through their advertising campaigns. Finally, this generates two clearly different proposals: "cheap printer – expensive ink" or "expensive printer – cheap ink". Ultimately, the consumer decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer. Printer steganography Printer steganography is a type of steganography – "hiding data within data" – produced by color printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox brand color laser printers, where tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps. Manufacturers and market share As of 2020-2021, the largest worldwide vendor of printers is Hewlett-Packard, followed by Canon, Brother, Seiko Epson and Kyocera. Other known vendors include NEC, Ricoh, Xerox, Lexmark, OKI, Sharp, Konica Minolta, Samsung, Kodak, Dell, Toshiba, Star Micronics, Citizen and Panasonic. See also Campus card Cardboard modeling Dye-sublimation printer History of printing Label printer List of printer companies Print (command) Printer driver Print screen Print server Printer friendly (also known as a printable version) Printer point Printer (publishing) Printmaking Smart card Typewriter ribbon 3D printing References External links Computer printers Office equipment Typography Articles containing video clips
5278
https://en.wikipedia.org/wiki/Copyright
Copyright
A copyright is a type of intellectual property that gives its owner the exclusive right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time. The creative work may be in a literary, artistic, educational, or musical form. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States. Some jurisdictions require "fixing" copyrighted works in a tangible form. It is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders. These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution. Copyrights can be granted by public law and are in that case considered "territorial rights". This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works "cross" national borders or national rights are inconsistent. Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities to establishing copyright, others recognize copyright in any completed work, without a formal registration. When the copyright of a work expires, it enters the public domain. History Background The concept of copyright developed after the printing press came into use in Europe in the 15th and 16th centuries. The printing press made it much cheaper to produce works, but as there was initially no copyright law, anyone could buy or rent a press and print any text. Popular new works were immediately re-set and re-published by competitors, so printers needed a constant stream of new material. Fees paid to authors for new works were high, and significantly supplemented the incomes of many academics. Printing brought profound social changes. The rise in literacy across Europe led to a dramatic increase in the demand for reading matter. Prices of reprints were low, so publications could be bought by poorer people, creating a mass audience. In German language markets before the advent of copyright, technical materials, like popular fiction, were inexpensive and widely available; it has been suggested this contributed to Germany's industrial and economic success. After copyright law became established (in 1710 in England and Scotland, and in the 1840s in German-speaking areas) the low-price mass market vanished, and fewer, more expensive editions were published; distribution of scientific and technical information was greatly reduced. Conception The concept of copyright first developed in England. In reaction to the printing of "scandalous books and pamphlets", the English Parliament passed the Licensing of the Press Act 1662, which required all intended publications to be registered with the government-approved Stationers' Company, giving the Stationers the right to regulate what material could be printed. The Statute of Anne, enacted in 1710 in England and Scotland, provided the first legislation to protect copyrights (but not authors' rights). 
The Copyright Act of 1814 extended more rights for authors but did not protect British works from reprinting in the US. The Berne International Copyright Convention of 1886 finally provided protection for authors among the countries that signed the agreement, although the US did not join the Berne Convention until 1989. In the US, the Constitution grants Congress the right to establish copyright and patent laws. Shortly after the Constitution was passed, Congress enacted the Copyright Act of 1790, modeling it after the Statute of Anne. While the national law protected authors’ published works, authority was granted to the states to protect authors’ unpublished works. The most recent major overhaul of copyright in the US, the 1976 Copyright Act, extended federal copyright to works as soon as they are created and "fixed", without requiring publication or registration. State law continues to apply to unpublished works that are not otherwise copyrighted by federal law. This act also changed the calculation of copyright term from a fixed term (then a maximum of fifty-six years) to "life of the author plus 50 years". These changes brought the US closer to conformity with the Berne Convention, and in 1989 the United States further revised its copyright law and joined the Berne Convention officially. Copyright laws allow products of creative human activities, such as literary and artistic production, to be preferentially exploited and thus incentivized. Different cultural attitudes, social organizations, economic models and legal frameworks are seen to account for why copyright emerged in Europe and not, for example, in Asia. In the Middle Ages in Europe, there was generally a lack of any concept of literary property due to the general relations of production, the specific organization of literary production and the role of culture in society. The latter refers to the tendency of oral societies, such as that of Europe in the medieval period, to view knowledge as the product and expression of the collective, rather than to see it as individual property. However, with copyright laws, intellectual production comes to be seen as a product of an individual, with attendant rights. The most significant point is that patent and copyright laws support the expansion of the range of creative human activities that can be commodified. This parallels the ways in which capitalism led to the commodification of many aspects of social life that earlier had no monetary or economic value per se. Copyright has developed into a concept that has a significant effect on nearly every modern industry, including not just literary work, but also forms of creative work such as sound recordings, films, photographs, software, and architecture. National copyrights Often seen as the first real copyright law, the 1709 British Statute of Anne gave the publishers rights for a fixed period, after which the copyright expired. The act also alluded to individual rights of the artist. It began, "Whereas Printers, Booksellers, and other Persons, have of late frequently taken the Liberty of Printing ... Books, and other Writings, without the Consent of the Authors ... to their very great Detriment, and too often to the Ruin of them and their Families:". A right to benefit financially from the work is articulated, and court rulings and legislation have recognized a right to control the work, such as ensuring that the integrity of it is preserved. An irrevocable right to be recognized as the work's creator appears in some countries' copyright laws. 
The Copyright Clause of the United States Constitution (1787) authorized copyright legislation: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." That is, by guaranteeing them a period of time in which they alone could profit from their works, they would be enabled and encouraged to invest the time required to create them, and this would be good for society as a whole. A right to profit from the work has been the philosophical underpinning for much legislation extending the duration of copyright, to the life of the creator and beyond, to their heirs. The original length of copyright in the United States was 14 years, and it had to be explicitly applied for. If the author wished, they could apply for a second 14-year monopoly grant, but after that the work entered the public domain, so it could be used and built upon by others. Copyright law was enacted rather late in the German states, and the historian Eckhard Höffner argues that the absence of copyright laws in the early 19th century encouraged publishing, was profitable for authors, led to a proliferation of books, enhanced knowledge, and was ultimately an important factor in the ascendancy of Germany as a power during that century. However, empirical evidence derived from the exogenous differential introduction of copyright in Napoleonic Italy shows that "basic copyrights increased both the number and the quality of operas, measured by their popularity and durability". International copyright treaties The 1886 Berne Convention first established recognition of copyrights among sovereign nations, rather than merely bilaterally. Under the Berne Convention, copyrights for creative works do not have to be asserted or declared, as they are automatically in force at creation: an author need not "register" or "apply for" a copyright in countries adhering to the Berne Convention. As soon as a work is "fixed", that is, written or recorded on some physical medium, its author is automatically entitled to all copyrights in the work, and to any derivative works unless and until the author explicitly disclaims them, or until the copyright expires. The Berne Convention also resulted in foreign authors being treated equivalently to domestic authors, in any country signed onto the Convention. The UK signed the Berne Convention in 1887 but did not implement large parts of it until 100 years later with the passage of the Copyright, Designs and Patents Act 1988. Specifically, for educational and scientific research purposes, the Berne Convention provides that developing countries may issue compulsory licenses for the translation or reproduction of copyrighted works within the limits prescribed by the Convention. This was a special provision that had been added at the time of the 1971 revision of the Convention, because of the strong demands of the developing countries. The United States did not sign the Berne Convention until 1989. The United States and most Latin American countries instead entered into the Buenos Aires Convention in 1910, which required a copyright notice on the work (such as all rights reserved), and permitted signatory nations to limit the duration of copyrights to shorter and renewable terms. The Universal Copyright Convention was drafted in 1952 as another less demanding alternative to the Berne Convention, and ratified by nations such as the Soviet Union and developing nations. 
The regulations of the Berne Convention are incorporated into the World Trade Organization's TRIPS agreement (1995), thus giving the Berne Convention effectively near-global application. In 1961, the United International Bureaux for the Protection of Intellectual Property signed the Rome Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organizations. In 1996, this organization was succeeded by the founding of the World Intellectual Property Organization, which launched the 1996 WIPO Performances and Phonograms Treaty and the 2002 WIPO Copyright Treaty, which enacted greater restrictions on the use of technology to copy works in the nations that ratified it. The Trans-Pacific Partnership includes intellectual Property Provisions relating to copyright. Copyright laws are standardized somewhat through these international conventions such as the Berne Convention and Universal Copyright Convention. These multilateral treaties have been ratified by nearly all countries, and international organizations such as the European Union or World Trade Organization require their member states to comply with them. Obtaining protection Ownership The original holder of the copyright may be the employer of the author rather than the author themself if the work is a "work for hire". For example, in English law the Copyright, Designs and Patents Act 1988 provides that if a copyrighted work is made by an employee in the course of that employment, the copyright is automatically owned by the employer which would be a "Work for Hire". Typically, the first owner of a copyright is the person who created the work i.e. the author. But when more than one person creates the work, then a case of joint authorship can be made provided some criteria are met. Eligible works Copyright may apply to a wide range of creative, intellectual, or artistic forms, or "works". Specifics vary by jurisdiction, but these can include poems, theses, fictional characters, plays and other literary works, motion pictures, choreography, musical compositions, sound recordings, paintings, drawings, sculptures, photographs, computer software, radio and television broadcasts, and industrial designs. Graphic designs and industrial designs may have separate or overlapping laws applied to them in some jurisdictions. Copyright does not cover ideas and information themselves, only the form or manner in which they are expressed. For example, the copyright to a Mickey Mouse cartoon restricts others from making copies of the cartoon or creating derivative works based on Disney's particular anthropomorphic mouse, but does not prohibit the creation of other works about anthropomorphic mice in general, so long as they are different enough to not be judged copies of Disney's. Note additionally that Mickey Mouse is not copyrighted because characters cannot be copyrighted; rather, Steamboat Willie is copyrighted and Mickey Mouse, as a character in that copyrighted work, is afforded protection. Originality Typically, a work must meet minimal standards of originality in order to qualify for copyright, and the copyright expires after a set period of time (some jurisdictions may allow this to be extended). Different countries impose different tests, although generally the requirements are low; in the United Kingdom there has to be some "skill, labour, and judgment" that has gone into it. In Australia and the United Kingdom it has been held that a single word is insufficient to comprise a copyright work. 
However, single words or a short string of words can sometimes be registered as a trademark instead. Copyright law recognizes the right of an author based on whether the work actually is an original creation, rather than based on whether it is unique; two authors may own copyright on two substantially identical works, if it is determined that the duplication was coincidental, and neither was copied from the other. Registration In all countries where the Berne Convention standards apply, copyright is automatic, and need not be obtained through official registration with any government office. Once an idea has been reduced to tangible form, for example by securing it in a fixed medium (such as a drawing, sheet music, photograph, a videotape, or a computer file), the copyright holder is entitled to enforce their exclusive rights. However, while registration is not needed to exercise copyright, in jurisdictions where the laws provide for registration, it serves as prima facie evidence of a valid copyright and enables the copyright holder to seek statutory damages and attorney's fees. (In the US, registering after an infringement only enables one to receive actual damages and lost profits.) A widely circulated strategy to avoid the cost of copyright registration is referred to as the poor man's copyright. It proposes that the creator send the work to themself in a sealed envelope by registered mail, using the postmark to establish the date. This technique has not been recognized in any published opinions of the United States courts. The United States Copyright Office says the technique is not a substitute for actual registration. The United Kingdom Intellectual Property Office discusses the technique and notes that the technique (as well as commercial registries) does not constitute dispositive proof that the work is original or establish who created the work. Fixing The Berne Convention allows member countries to decide whether creative works must be "fixed" to enjoy copyright. Article 2, Section 2 of the Berne Convention states: "It shall be a matter for legislation in the countries of the Union to prescribe that works in general or any specified categories of works shall not be protected unless they have been fixed in some material form." Some countries do not require that a work be produced in a particular form to obtain copyright protection. For instance, Spain, France, and Australia do not require fixation for copyright protection. The United States and Canada, on the other hand, require that most works must be "fixed in a tangible medium of expression" to obtain copyright protection. US law requires that the fixation be stable and permanent enough to be "perceived, reproduced or communicated for a period of more than transitory duration". Similarly, Canadian courts consider fixation to require that the work be "expressed to some extent at least in some material form, capable of identification and having a more or less permanent endurance". Note this provision of US law: c) Effect of Berne Convention.—No right or interest in a work eligible for protection under this title may be claimed by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto. 
Any rights in a work eligible for protection under this title that derive from this title, other Federal or State statutes, or the common law, shall not be expanded or reduced by virtue of, or in reliance upon, the provisions of the Berne Convention, or the adherence of the United States thereto. Copyright notice Before 1989, United States law required the use of a copyright notice, consisting of the copyright symbol (©, the letter C inside a circle), the abbreviation "Copr.", or the word "Copyright", followed by the year of the first publication of the work and the name of the copyright holder. Several years may be noted if the work has gone through substantial revisions. The proper copyright notice for sound recordings of musical or other audio works is a sound recording copyright symbol (℗, the letter P inside a circle), which indicates a sound recording copyright, with the letter P indicating a "phonorecord". In addition, the phrase All rights reserved, indicating that the copyright holder reserves, or holds for their own use, all the rights provided under copyright law, was once required to assert copyright, but that phrase is now legally obsolete. Almost everything on the Internet has some sort of copyright attached to it. Whether these works are watermarked, signed, or carry any other indication of copyright is a different matter, however. In 1989 the United States enacted the Berne Convention Implementation Act, amending the 1976 Copyright Act to conform to most of the provisions of the Berne Convention. As a result, the use of copyright notices has become optional to claim copyright, because the Berne Convention makes copyright automatic. However, the lack of notice of copyright using these marks may have consequences in terms of reduced damages in an infringement lawsuit – using notices of this form may reduce the likelihood of a defense of "innocent infringement" being successful. Enforcement Copyrights are generally enforced by the holder in a civil law court, but there are also criminal infringement statutes in some jurisdictions. While central registries are kept in some countries which aid in proving claims of ownership, registering does not necessarily prove ownership, nor does the fact of copying (even without permission) necessarily prove that copyright was infringed. Criminal sanctions are generally aimed at serious counterfeiting activity, but are now becoming more commonplace as copyright collectives such as the RIAA are increasingly targeting the file-sharing home Internet user. Thus far, however, most such cases against file sharers have been settled out of court. (See Legal aspects of file sharing) In most jurisdictions the copyright holder must bear the cost of enforcing copyright. This will usually involve engaging legal representation and paying administrative or court costs. In light of this, many copyright disputes are settled by a direct approach to the infringing party in order to settle the dispute out of court. "... by 1978, the scope was expanded to apply to any 'expression' that has been 'fixed' in any medium, this protection granted automatically whether the maker wants it or not, no registration required." Copyright infringement For a work to be considered to infringe upon copyright, its use must have occurred in a nation that has domestic copyright laws or adheres to a bilateral treaty or established international convention such as the Berne Convention or WIPO Copyright Treaty. Improper use of materials outside of legislation is deemed "unauthorized edition", not copyright infringement. 
Statistics regarding the effects of copyright infringement are difficult to determine. Studies have attempted to determine whether there is a monetary loss for industries affected by copyright infringement by predicting what portion of pirated works would have been formally purchased if they had not been freely available. Other reports indicate that copyright infringement does not have an adverse effect on the entertainment industry, and can have a positive effect. In particular, a 2014 university study concluded that free music content, accessed on YouTube, does not necessarily hurt sales, instead has the potential to increase sales. According to the IP Commission Report the annual cost of intellectual property theft to the US economy "continues to exceed $225 billion in counterfeit goods, pirated software, and theft of trade secrets and could be as high as $600 billion." A 2019 study sponsored by the US Chamber of Commerce Global Innovation Policy Center (GIPC), in partnership with NERA Economic Consulting "estimates that global online piracy costs the U.S. economy at least $29.2 billion in lost revenue each year." An August 2021 report by the Digital Citizens Alliance states that "online criminals who offer stolen movies, TV shows, games, and live events through websites and apps are reaping $1.34 billion in annual advertising revenues." This comes as a result of users visiting pirate websites who are then subjected to pirated content, malware, and fraud. Rights granted According to World Intellectual Property Organisation, copyright protects two types of rights. Economic rights allow right owners to derive financial reward from the use of their works by others. Moral rights allow authors and creators to take certain actions to preserve and protect their link with their work. The author or creator may be the owner of the economic rights or those rights may be transferred to one or more copyright owners. Many countries do not allow the transfer of moral rights. Economic rights With any kind of property, its owner may decide how it is to be used, and others can use it lawfully only if they have the owner's permission, often through a license. The owner's use of the property must, however, respect the legally recognised rights and interests of other members of society. So the owner of a copyright-protected work may decide how to use the work, and may prevent others from using it without permission. National laws usually grant copyright owners exclusive rights to allow third parties to use their works, subject to the legally recognised rights and interests of others. Most copyright laws state that authors or other right owners have the right to authorise or prevent certain acts in relation to a work. Right owners can authorise or prohibit: reproduction of the work in various forms, such as printed publications or sound recordings; distribution of copies of the work; public performance of the work; broadcasting or other communication of the work to the public; translation of the work into other languages; and adaptation of the work, such as turning a novel into a screenplay. Moral rights Moral rights are concerned with the non-economic rights of a creator. They protect the creator's connection with a work as well as the integrity of the work. Moral rights are only accorded to individual authors and in many national laws they remain with the authors even after the authors have transferred their economic rights. In some EU countries, such as France, moral rights last indefinitely. 
In the UK, however, moral rights are finite. That is, the right of attribution and the right of integrity last only as long as the work is in copyright. When the copyright term comes to an end, so too do the moral rights in that work. This is just one reason why the moral rights regime within the UK is often regarded as weaker or inferior to the protection of moral rights in continental Europe and elsewhere in the world. The Berne Convention, in Article 6bis, requires its members to grant authors the following rights: the right to claim authorship of a work (sometimes called the right of paternity or the right of attribution); and the right to object to any distortion or modification of a work, or other derogatory action in relation to a work, which would be prejudicial to the author's honour or reputation (sometimes called the right of integrity). These and other similar rights granted in national laws are generally known as the moral rights of authors. The Berne Convention requires these rights to be independent of authors’ economic rights. This means that even where, for example, a film producer or publisher owns the economic rights in a work, in many jurisdictions the individual author continues to have moral rights. Recently, as part of the debates held at the US Copyright Office on the question of including moral rights in the framework of copyright law in the United States, the Copyright Office concluded that many diverse aspects of the current moral rights patchwork – including copyright law's derivative work right, state moral rights statutes, and contract law – are generally working well and should not be changed. Further, the Office concluded that there is no need for the creation of a blanket moral rights statute at this time. However, there are aspects of the US moral rights patchwork that could be improved to the benefit of individual authors and the copyright system as a whole. Under the copyright law of the United States, several exclusive rights are granted to the holder of a copyright, as listed below: protection of the work; to determine and decide how, and under what conditions, the work may be marketed, publicly displayed, reproduced, distributed, etc.; to produce copies or reproductions of the work and to sell those copies (including, typically, electronic copies); to import or export the work; to create derivative works (works that adapt the original work); to perform or display the work publicly; to sell or cede these rights to others; to transmit or display by radio, video or internet. The basic right when a work is protected by copyright is that the holder may determine and decide how and under what conditions the protected work may be used by others. This includes the right to decide to distribute the work for free. This aspect of copyright is often overlooked. The phrase "exclusive right" means that only the copyright holder is free to exercise those rights, and others are prohibited from using the work without the holder's permission. Copyright is sometimes called a "negative right", as it serves to prohibit certain people (e.g., readers, viewers, or listeners, and primarily publishers and would-be publishers) from doing something they would otherwise be able to do, rather than permitting people (e.g., authors) to do something they would otherwise be unable to do. 
In this way it is similar to the unregistered design right in English law and European law. The rights of the copyright holder also permit them not to use or exploit their copyright, for some or all of the term. There is, however, a critique which rejects this assertion as being based on a philosophical interpretation of copyright law that is not universally shared. There is also debate on whether copyright should be considered a property right or a moral right. UK copyright law gives creators both economic rights and moral rights. ‘Copying’ someone else's work without permission may constitute an infringement of their economic rights, that is, the reproduction right or the right of communication to the public, whereas ‘mutilating’ it might infringe the creator's moral rights. In the UK, moral rights include the right to be identified as the author of the work, which is generally identified as the right of attribution, and the right not to have one's work subjected to ‘derogatory treatment’, that is the right of integrity. Indian copyright law is at parity with the international standards as contained in TRIPS. The Indian Copyright Act, 1957, pursuant to the amendments in 1999, 2002 and 2012, fully reflects the Berne Convention and the Universal Copyright Convention, to which India is a party. India is also a party to the Geneva Convention for the Protection of Rights of Producers of Phonograms and is an active member of the World Intellectual Property Organization (WIPO) and United Nations Educational, Scientific and Cultural Organization (UNESCO). The Indian system provides both the economic and moral rights under different provisions of its Indian Copyright Act of 1957. Duration Copyright subsists for a variety of lengths in different jurisdictions. The length of the term can depend on several factors, including the type of work (e.g. musical composition, novel), whether the work has been published, and whether the work was created by an individual or a corporation. In most of the world, the default length of copyright is the life of the author plus either 50 or 70 years. In the United States, the term for most existing works is a fixed number of years after the date of creation or publication. Under most countries' laws (for example, the United States and the United Kingdom), copyrights expire at the end of the calendar year in which they would otherwise expire. The length and requirements for copyright duration are subject to change by legislation, and since the early 20th century there have been a number of adjustments made in various countries, which can make determining the duration of a given copyright somewhat difficult. For example, the United States used to require copyrights to be renewed after 28 years to stay in force, and formerly required a copyright notice upon first publication to gain coverage. In Italy and France, there were post-war extensions that could increase the term by approximately 6 years in Italy and up to about 14 in France. Many countries have extended the length of their copyright terms (sometimes retroactively). International treaties establish minimum terms for copyrights, but individual countries may enforce longer terms than those. In the United States, all books and other works, except for sound recordings, published before 1928 have expired copyrights and are in the public domain. The applicable date for sound recordings in the United States is before 1923. 
In addition, works published before 1964 that did not have their copyrights renewed 28 years after first publication year also are in the public domain. Hirtle points out that the great majority of these works (including 93% of the books) were not renewed after 28 years and are in the public domain. Books originally published outside the US by non-Americans are exempt from this renewal requirement, if they are still under copyright in their home country. But if the intended exploitation of the work includes publication (or distribution of derivative work, such as a film based on a book protected by copyright) outside the US, the terms of copyright around the world must be considered. If the author has been dead more than 70 years, the work is in the public domain in most, but not all, countries. In 1998, the length of a copyright in the United States was increased by 20 years under the Copyright Term Extension Act. This legislation was strongly promoted by corporations which had valuable copyrights which otherwise would have expired, and has been the subject of substantial criticism on this point. Limitations and exceptions In many jurisdictions, copyright law makes exceptions to these restrictions when the work is copied for the purpose of commentary or other related uses. United States copyright law does not cover names, titles, short phrases or listings (such as ingredients, recipes, labels, or formulas). However, there are protections available for those areas copyright does not cover, such as trademarks and patents. Idea–expression dichotomy and the merger doctrine The idea–expression divide differentiates between ideas and expression, and states that copyright protects only the original expression of ideas, and not the ideas themselves. This principle, first clarified in the 1879 case of Baker v. Selden, has since been codified by the Copyright Act of 1976 at 17 U.S.C. § 102(b). The first-sale doctrine and exhaustion of rights Copyright law does not restrict the owner of a copy from reselling legitimately obtained copies of copyrighted works, provided that those copies were originally produced by or with the permission of the copyright holder. It is therefore legal, for example, to resell a copyrighted book or CD. In the United States this is known as the first-sale doctrine, and was established by the courts to clarify the legality of reselling books in second-hand bookstores. Some countries may have parallel importation restrictions that allow the copyright holder to control the aftermarket. This may mean for example that a copy of a book that does not infringe copyright in the country where it was printed does infringe copyright in a country into which it is imported for retailing. The first-sale doctrine is known as exhaustion of rights in other countries and is a principle which also applies, though somewhat differently, to patent and trademark rights. It is important to note that the first-sale doctrine permits the transfer of the particular legitimate copy involved. It does not permit making or distributing additional copies. In Kirtsaeng v. John Wiley & Sons, Inc., in 2013, the United States Supreme Court held in a 6–3 decision that the first-sale doctrine applies to goods manufactured abroad with the copyright owner's permission and then imported into the US without such permission. The case involved Asian editions of textbooks that had been manufactured abroad with the publisher-plaintiff's permission. 
The defendant, without permission from the publisher, imported the textbooks and resold on eBay. The Supreme Court's holding severely limits the ability of copyright holders to prevent such importation. In addition, copyright, in most cases, does not prohibit one from acts such as modifying, defacing, or destroying one's own legitimately obtained copy of a copyrighted work, so long as duplication is not involved. However, in countries that implement moral rights, a copyright holder can in some cases successfully prevent the mutilation or destruction of a work that is publicly visible. Fair use and fair dealing Copyright does not prohibit all copying or replication. In the United States, the fair use doctrine, codified by the Copyright Act of 1976 as 17 U.S.C. Section 107, permits some copying and distribution without permission of the copyright holder or payment to same. The statute does not clearly define fair use, but instead gives four non-exclusive factors to consider in a fair use analysis. Those factors are: the purpose and character of one's use; the nature of the copyrighted work; what amount and proportion of the whole work was taken; the effect of the use upon the potential market for or value of the copyrighted work. In the United Kingdom and many other Commonwealth countries, a similar notion of fair dealing was established by the courts or through legislation. The concept is sometimes not well defined; however in Canada, private copying for personal use has been expressly permitted by statute since 1999. In Alberta (Education) v. Canadian Copyright Licensing Agency (Access Copyright), 2012 SCC 37, the Supreme Court of Canada concluded that limited copying for educational purposes could also be justified under the fair dealing exemption. In Australia, the fair dealing exceptions under the Copyright Act 1968 (Cth) are a limited set of circumstances under which copyrighted material can be legally copied or adapted without the copyright holder's consent. Fair dealing uses are research and study; review and critique; news reportage and the giving of professional advice (i.e. legal advice). Under current Australian law, although it is still a breach of copyright to copy, reproduce or adapt copyright material for personal or private use without permission from the copyright owner, owners of a legitimate copy are permitted to "format shift" that work from one medium to another for personal, private use, or to "time shift" a broadcast work for later, once and only once, viewing or listening. Other technical exemptions from infringement may also apply, such as the temporary reproduction of a work in machine readable form for a computer. In the United States the AHRA (Audio Home Recording Act Codified in Section 10, 1992) prohibits action against consumers making noncommercial recordings of music, in return for royalties on both media and devices plus mandatory copy-control mechanisms on recorders. Later acts amended US Copyright law so that for certain purposes making 10 copies or more is construed to be commercial, but there is no general rule permitting such copying. Indeed, making one complete copy of a work, or in many cases using a portion of it, for commercial purposes will not be considered fair use. The Digital Millennium Copyright Act prohibits the manufacture, importation, or distribution of devices whose intended use, or only significant commercial use, is to bypass an access or copy control put in place by a copyright owner. 
An appellate court has held that fair use is not a defense to engaging in such distribution. EU copyright laws recognise the right of EU member states to implement some national exceptions to copyright. Examples of those exceptions are: photographic reproductions on paper or any similar medium of works (excluding sheet music) provided that the rightholders receive fair compensation; reproductions made by libraries, educational establishments, museums or archives, which are non-commercial; archival reproductions of broadcasts; uses for the benefit of people with a disability; for demonstration or repair of equipment; for non-commercial research or private study; when used in parody. Accessible copies It is legal in several countries including the United Kingdom and the United States to produce alternative versions (for example, in large print or braille) of a copyrighted work to provide improved access to a work for blind and visually impaired people without permission from the copyright holder. Religious Service Exemption In the US there is a Religious Service Exemption (1976 law, section 110[3]), namely that "performance of a non-dramatic literary or musical work or of a dramatico-musical work of a religious nature or display of a work, in the course of services at a place of worship or other religious assembly" shall not constitute infringement of copyright. Useful articles In Canada, items deemed useful articles such as clothing designs are exempted from copyright protection under the Copyright Act if reproduced more than 50 times. Fast fashion brands may reproduce clothing designs from smaller companies without violating copyright protections. Transfer, assignment and licensing A copyright, or aspects of it (e.g. reproduction alone, all but moral rights), may be assigned or transferred from one party to another. For example, a musician who records an album will often sign an agreement with a record company in which the musician agrees to transfer all copyright in the recordings in exchange for royalties and other considerations. The creator (and original copyright holder) benefits, or expects to, from production and marketing capabilities far beyond those of the author. In the digital age of music, music may be copied and distributed at minimal cost through the Internet; however, the record industry attempts to provide promotion and marketing for the artist and their work so it can reach a much larger audience. A copyright holder need not transfer all rights completely, though many publishers will insist. Some of the rights may be transferred, or else the copyright holder may grant another party a non-exclusive license to copy or distribute the work in a particular region or for a specified period of time. A transfer or licence may have to meet particular formal requirements in order to be effective, for example under the Australian Copyright Act 1968 the copyright itself must be expressly transferred in writing. Under the US Copyright Act, a transfer of ownership in copyright must be memorialized in a writing signed by the transferor. For that purpose, ownership in copyright includes exclusive licenses of rights. Thus exclusive licenses, to be effective, must be granted in a written instrument signed by the grantor. No special form of transfer or grant is required. A simple document that identifies the work involved and the rights being granted is sufficient. Non-exclusive grants (often called non-exclusive licenses) need not be in writing under US law. 
They can be oral or even implied by the behavior of the parties. Transfers of copyright ownership, including exclusive licenses, may and should be recorded in the U.S. Copyright Office. (Information on recording transfers is available on the Office's web site.) While recording is not required to make the grant effective, it offers important benefits, much like those obtained by recording a deed in a real estate transaction. Copyright may also be licensed. Some jurisdictions may provide that certain classes of copyrighted works be made available under a prescribed statutory license (e.g. musical works in the United States used for radio broadcast or performance). This is also called a compulsory license, because under this scheme, anyone who wishes to copy a covered work does not need the permission of the copyright holder, but instead merely files the proper notice and pays a set fee established by statute (or by an agency decision under statutory guidance) for every copy made. Failure to follow the proper procedures would place the copier at risk of an infringement suit. Because of the difficulty of following every individual work, copyright collectives or collecting societies and performing rights organizations (such as ASCAP, BMI, and SESAC) have been formed to collect royalties for hundreds (thousands and more) works at once. Though this market solution bypasses the statutory license, the availability of the statutory fee still helps dictate the price per work collective rights organizations charge, driving it down to what avoidance of procedural hassle would justify. Free licenses Copyright licenses known as open or free licenses seek to grant several rights to licensees, either for a fee or not. Free in this context is not as much of a reference to price as it is to freedom. What constitutes free licensing has been characterised in a number of similar definitions, including by order of longevity the Free Software Definition, the Debian Free Software Guidelines, the Open Source Definition and the Definition of Free Cultural Works. Further refinements to these definitions have resulted in categories such as copyleft and permissive. Common examples of free licences are the GNU General Public License, BSD licenses and some Creative Commons licenses. Founded in 2001 by James Boyle, Lawrence Lessig, and Hal Abelson, the Creative Commons (CC) is a non-profit organization which aims to facilitate the legal sharing of creative works. To this end, the organization provides a number of generic copyright license options to the public, gratis. These licenses allow copyright holders to define conditions under which others may use a work and to specify what types of use are acceptable. Terms of use have traditionally been negotiated on an individual basis between copyright holder and potential licensee. Therefore, a general CC license outlining which rights the copyright holder is willing to waive enables the general public to use such works more freely. Six general types of CC licenses are available (although some of them are not properly free per the above definitions and per Creative Commons' own advice). These are based upon copyright-holder stipulations such as whether they are willing to allow modifications to the work, whether they permit the creation of derivative works and whether they are willing to permit commercial use of the work. approximately 130 million individuals had received such licenses. Criticism Some sources are critical of particular aspects of the copyright system. 
This is known as a debate over copynorms. Particularly against the background of uploading content to internet platforms and the digital exchange of original work, there is discussion about the copyright aspects of downloading and streaming, and the copyright aspects of hyperlinking and framing. Concerns are often couched in the language of digital rights, digital freedom, database rights, open data or censorship. Discussions include Free Culture, a 2004 book by Lawrence Lessig. Lessig coined the term permission culture to describe a worst-case system. Good Copy Bad Copy (a documentary) and RiP!: A Remix Manifesto discuss copyright. Some suggest an alternative compensation system. In Europe, consumers are pushing back against the rising costs of music, film and books, and as a result Pirate Parties have been created. Some groups reject copyright altogether, taking an anti-copyright stance. The perceived inability to enforce copyright online leads some to advocate ignoring legal statutes when on the web. Public domain Copyright, like other intellectual property rights, is subject to a statutorily determined term. Once the term of a copyright has expired, the formerly copyrighted work enters the public domain and may be used or exploited by anyone without obtaining permission, and normally without payment. However, in paying public domain regimes the user may still have to pay royalties to the state or to an authors' association. Courts in common law countries, such as the United States and the United Kingdom, have rejected the doctrine of a common law copyright. Public domain works should not be confused with works that are publicly available. Works posted on the internet, for example, are publicly available, but are not generally in the public domain. Copying such works may therefore violate the author's copyright. See also Adelphi Charter Artificial scarcity Authors' rights and related rights, roughly equivalent concepts in civil law countries Conflict of laws Copyfraud Copyleft Copyright abolition Copyright Alliance Copyright alternatives Copyright for Creativity Copyright in architecture in the United States Copyright on the content of patents and in the context of patent prosecution Criticism of copyright Criticism of intellectual property Directive on Copyright in the Digital Single Market (European Union) Copyright infringement Copyright Remedy Clarification Act (CRCA) Digital rights management Digital watermarking Entertainment law Freedom of panorama Information literacies Intellectual property protection of typefaces List of Copyright Acts List of copyright case law Literary property Model release Paracopyright Philosophy of copyright Photography and the law Pirate Party Printing patent, a precursor to copyright Private copying levy Production music Rent-seeking Reproduction fees Samizdat Software copyright Threshold pledge system World Book and Copyright Day References Further reading Ellis, Sara R. Copyrighting Couture: An Examination of Fashion Design Protection and Why the DPPA and IDPPPA are a Step Towards the Solution to Counterfeit Chic, 78 Tenn. L. Rev. 163 (2010), available at Copyrighting Couture: An Examination of Fashion Design Protection and Why the DPPA and IDPPPA are a Step Towards the Solution to Counterfeit Chic. Ghosemajumder, Shuman. Advanced Peer-Based Technology Business Models. MIT Sloan School of Management, 2002. 
Lehman, Bruce: Intellectual Property and the National Information Infrastructure (Report of the Working Group on Intellectual Property Rights, 1995) Lindsey, Marc: Copyright Law on Campus. Washington State University Press, 2003. . Mazzone, Jason. Copyfraud. SSRN McDonagh, Luke. Is Creative use of Musical Works without a licence acceptable under Copyright? International Review of Intellectual Property and Competition Law (IIC) 4 (2012) 401–426, available at SSRN Rife, by Martine Courant. Convention, Copyright, and Digital Writing (Southern Illinois University Press; 2013) 222 pages; Examines legal, pedagogical, and other aspects of online authorship. Shipley, David E. "Thin But Not Anorexic: Copyright Protection for Compilations and Other Fact Works" UGA Legal Studies Research Paper No. 08-001; Journal of Intellectual Property Law, Vol. 15, No. 1, 2007. Silverthorne, Sean. Music Downloads: Pirates- or Customers? . Harvard Business School Working Knowledge, 2004. Sorce Keller, Marcello. "Originality, Authenticity and Copyright", Sonus, VII(2007), no. 2, pp. 77–85. Rose, M. (1993), Authors and Owners: The Invention of Copyright, London: Harvard University Press Loewenstein, J. (2002), The Author's Due: Printing and the Prehistory of Copyright, London: University of Chicago Press. External links A simplified guide. WIPOLex from WIPO; global database of treaties and statutes relating to intellectual property Copyright Berne Convention: Country List List of the 164 members of the Berne Convention for the protection of literary and artistic works Copyright and State Sovereign Immunity, U.S. Copyright Office The Multi-Billion-Dollar Piracy Industry with Tom Galvin of Digital Citizens Alliance, The Illusion of More Podcast Education Copyright Cortex A Bibliography on the Origins of Copyright and Droit d'Auteur MIT OpenCourseWare 6.912 Introduction to Copyright Law Free self-study course with video lectures as offered during the January 2006, Independent Activities Period (IAP) US Copyright Law of the United States Documents, US Government Compendium of Copyright Practices (3rd ed.) United States Copyright Office Copyright from UCB Libraries GovPubs Early Copyright Records From the Rare Book and Special Collections Division at the Library of Congress UK Copyright: Detailed information at the UK Intellectual Property Office Fact sheet P-01: UK copyright law (Issued April 2000, amended 25 November 2020) at the UK Copyright Service Data management Intellectual property law Monopoly (economics) Product management Public records Intangible assets
5282
https://en.wikipedia.org/wiki/Catalan%20language
Catalan language
Catalan (; autonym: , ), known in the Valencian Community and Carche as Valencian (autonym: ), is a Western Romance language. It is the official language of Andorra, and an official language of two autonomous communities in eastern Spain: Catalonia and the Balearic Islands. It is also an official language in Valencia, where it is called Valencian. It has semi-official status in the Italian comune of Alghero, and it is spoken in the Pyrénées-Orientales department of France and in two further areas in eastern Spain: the eastern strip of Aragon and the Carche area in the Region of Murcia. The Catalan-speaking territories are often called the or "Catalan Countries". The language evolved from Vulgar Latin in the Middle Ages around the eastern Pyrenees. Nineteenth-century Spain saw a Catalan literary revival, culminating in the early 1900s. Etymology and pronunciation The word Catalan is derived from the territorial name of Catalonia, itself of disputed etymology. The main theory suggests that (Latin Gathia Launia) derives from the name Gothia or Gauthia ("Land of the Goths"), since the origins of the Catalan counts, lords and people were found in the March of Gothia, whence Gothland > Gothlandia > Gothalania > Catalonia theoretically derived. In English, the term referring to a person first appears in the mid 14th century as Catelaner, followed in the 15th century as Catellain (from French). It is attested a language name since at least 1652. The word Catalan can be pronounced in English as , or . The endonym is pronounced in the Eastern Catalan dialects, and in the Western dialects. In the Valencian Community and Carche, the term is frequently used instead. Thus, the name "Valencian", although often employed for referring to the varieties specific to the Valencian Community and Carche, is also used by Valencians as a name for the language as a whole, synonymous with "Catalan". Both uses of the term have their respective entries in the dictionaries by the Acadèmia Valenciana de la Llengua and the Institut d'Estudis Catalans. See also status of Valencian below. History Middle Ages By the 9th century, Catalan had evolved from Vulgar Latin on both sides of the eastern end of the Pyrenees, as well as the territories of the Roman province of Hispania Tarraconensis to the south. From the 8th century onwards the Catalan counts extended their territory southwards and westwards at the expense of the Muslims, bringing their language with them. This process was given definitive impetus with the separation of the County of Barcelona from the Carolingian Empire in 988. In the 11th century, documents written in macaronic Latin begin to show Catalan elements, with texts written almost completely in Romance appearing by 1080. Old Catalan shared many features with Gallo-Romance, diverging from Old Occitan between the 11th and 14th centuries. During the 11th and 12th centuries the Catalan rulers expanded southward to the Ebro river, and in the 13th century they conquered the Land of Valencia and the Balearic Islands. The city of Alghero in Sardinia was repopulated with Catalan speakers in the 14th century. The language also reached Murcia, which became Spanish-speaking in the 15th century. In the Low Middle Ages, Catalan went through a golden age, reaching a peak of maturity and cultural richness. Examples include the work of Majorcan polymath Ramon Llull (1232–1315), the Four Great Chronicles (13th–14th centuries), and the Valencian school of poetry culminating in Ausiàs March (1397–1459). 
By the 15th century, the city of Valencia had become the sociocultural center of the Crown of Aragon, and Catalan was present all over the Mediterranean world. During this period, the Royal Chancery propagated a highly standardized language. Catalan was widely used as an official language in Sicily until the 15th century, and in Sardinia until the 17th. During this period, the language was what Costa Carreras terms "one of the 'great languages' of medieval Europe". Martorell's outstanding novel of chivalry Tirant lo Blanc (1490) shows a transition from Medieval to Renaissance values, something that can also be seen in Metge's work. The first book produced with movable type in the Iberian Peninsula was printed in Catalan. Start of the modern era Spain With the union of the crowns of Castile and Aragon in 1479, the Spanish kings ruled over different kingdoms, each with its own cultural, linguistic and political particularities, and they had to swear by the laws of each territory before the respective parliaments. But after the War of the Spanish Succession, Spain became an absolute monarchy under Philip V, which led to the assimilation of the Crown of Aragon by the Crown of Castile through the Nueva Planta decrees, as a first step in the creation of the Spanish nation-state; as in other contemporary European states, this meant the imposition of the political and cultural characteristics of the dominant groups. Since the political unification of 1714, Spanish assimilation policies towards national minorities have been a constant. The process of assimilation began with secret instructions to the corregidores of the Catalan territory: they "will take the utmost care to introduce the Castilian language, for which purpose he will give the most temperate and disguised measures so that the effect is achieved, without the care being noticed." From then on, actions in the service of assimilation, discreet or aggressive, continued, reaching down to such details as the Royal Certificate of 1799 forbidding anyone to "represent, sing and dance pieces that were not in Spanish." In any case, the use of Spanish gradually became more prestigious and marked the start of the decline of Catalan. Starting in the 16th century, Catalan literature came under the influence of Spanish, and the nobles and part of the urban and literary classes became bilingual. France With the Treaty of the Pyrenees (1659), Spain ceded the northern part of Catalonia to France, and soon thereafter the local Catalan varieties came under the influence of French, which in 1700 became the sole official language of the region. Shortly after the French Revolution (1789), the French First Republic prohibited official use of, and enacted discriminatory policies against, the regional languages of France, such as Catalan, Alsatian, Breton, Occitan, Flemish, and Basque. France: 19th to 20th century Following the French establishment of the colony of Algeria from 1830 onward, the colony received several waves of Catalan-speaking settlers. People from the Spanish Alicante province settled around Oran, whereas Algiers received immigration from Northern Catalonia and Menorca. Their speech was known as patuet. By 1911, the number of Catalan speakers was around 100,000. After the declaration of independence of Algeria in 1962, almost all the Catalan speakers fled to Northern Catalonia (as Pieds-Noirs) or Alacant. The government of France formally recognizes only French as an official language. 
Nevertheless, on 10 December 2007, the General Council of the Pyrénées-Orientales officially recognized Catalan as one of the languages of the department and has sought to further promote it in public life and education. Spain: 18th to 20th century In 1807, the Statistics Office of the French Ministry of the Interior asked the prefects for an official survey on the limits of the French language. The survey found that in Roussillon, Catalan was almost the only language spoken, and since Napoleon wanted to incorporate Catalonia into France, as happened in 1812, the consul in Barcelona was also asked. He declared that Catalan "is taught in schools, it is printed and spoken, not only among the lower class, but also among people of first quality, also in social gatherings, as in visits and congresses", indicating that it was spoken everywhere "with the exception of the royal courts". He also indicated that Catalan was spoken "in the Kingdom of Valencia, in the islands of Mallorca, Menorca, Ibiza, Sardinia, Corsica and much of Sicily, in the Vall d'Aran and Cerdaña". The defeat of the pro-Habsburg coalition in the War of Spanish Succession (1714) initiated a series of laws which, among other centralizing measures, imposed the use of Spanish in legal documentation all over Spain. Because of this, use of the Catalan language declined into the 18th century. However, the 19th century saw a Catalan literary revival, which has continued up to the present day. This period starts with Aribau's Ode to the Homeland (1833), followed in the second half of the 19th century and the early 20th by the work of Verdaguer (poetry), Oller (realist novel), and Guimerà (drama). In the 19th century, the region of Carche, in the province of Murcia, was repopulated with Valencian speakers. Catalan spelling was standardized in 1913 and the language became official during the Second Spanish Republic (1931–1939). The Second Spanish Republic saw a brief period of tolerance, with most restrictions against Catalan lifted. The Generalitat (the autonomous government of Catalonia, established during the Republic in 1931) made normal use of Catalan in its administration and made efforts to promote it at the social level, including in schools and the University of Barcelona. The Catalan language and culture were still vibrant during the Spanish Civil War (1936–1939), but were crushed at an unprecedented level throughout the subsequent decades due to the Francoist dictatorship (1939–1975), which abolished the official status of Catalan and imposed the use of Spanish in schools and in public administration in all of Spain, while banning the use of Catalan in them. Between 1939 and 1943 newspapers and book printing in Catalan almost disappeared. Francisco Franco's desire for a homogeneous Spanish population resonated with some Catalans in favor of his regime, primarily members of the upper class, who began to reject the use of Catalan. Despite all of these hardships, Catalan continued to be used privately within households, and it was able to survive Franco's dictatorship. At the end of World War II, however, some of the harsh measures began to be lifted and, while Spanish remained the only language promoted, a limited amount of Catalan literature began to be tolerated. Several prominent Catalan authors resisted the suppression through literature. Contests created by private initiative rewarded works in Catalan, among them the Joan Martorell prize (1947), the Víctor Català prize (1953), the Carles Riba award (1950), and the Honor Award of Catalan Letters (1969). 
The first Catalan-language TV show was broadcast in 1964. At the same time, oppression of the Catalan language and identity was carried out in schools, through governmental bodies, and in religious centers. In addition to the loss of prestige for Catalan and its prohibition in schools, migration during the 1950s into Catalonia from other parts of Spain also contributed to the diminished use of the language. These migrants were often unaware of the existence of Catalan, and thus felt no need to learn or use it. Catalonia was the economic powerhouse of Spain, so these migrations continued to occur from all corners of the country. Employment opportunities were reduced for those who were not bilingual. Daily newspapers remained exclusively in Spanish until after Franco's death, when the first one in Catalan since the end of the Civil War, Avui, began to be published in 1976. Present day Since the Spanish transition to democracy (1975–1982), Catalan has been institutionalized as an official language, language of education, and language of mass media; all of which have contributed to its increased prestige. In Catalonia, there is an unparalleled large bilingual European non-state linguistic community. The teaching of Catalan is mandatory in all schools, but it is possible to use Spanish for studying in the public education system of Catalonia in two situations – if the teacher assigned to a class chooses to use Spanish, or during the learning process of one or more recently arrived immigrant students. There is also some intergenerational shift towards Catalan. More recently, several Spanish political forces have tried to increase the use of Spanish in the Catalan educational system. As a result, in May 2022 the Spanish Supreme Court urged the Catalan regional government to enforce a measure by which 25% of all lessons must be taught in Spanish. According to the Statistical Institute of Catalonia, in 2013 the Catalan language is the second most commonly used in Catalonia, after Spanish, as a native or self-defining language: 7% of the population self-identifies with both Catalan and Spanish equally, 36.4% with Catalan and 47.5% only Spanish. In 2003 the same studies concluded no language preference for self-identification within the population above 15 years old: 5% self-identified with both languages, 44.3% with Catalan and 47.5% with Spanish. To promote use of Catalan, the Generalitat de Catalunya (Catalonia's official Autonomous government) spends part of its annual budget on the promotion of the use of Catalan in Catalonia and in other territories, with entities such as (Consortium for Linguistic Normalization) In Andorra, Catalan has always been the sole official language. Since the promulgation of the 1993 constitution, several policies favoring Catalan have been enforced, like Catalan medium education. On the other hand, there are several language shift processes currently taking place. In the Northern Catalonia area of France, Catalan has followed the same trend as the other minority languages of France, with most of its native speakers being 60 or older (as of 2004). Catalan is studied as a foreign language by 30% of the primary education students, and by 15% of the secondary. The cultural association promotes a network of community-run schools engaged in Catalan language immersion programs. In Alicante province, Catalan is being replaced by Spanish and in Alghero by Italian. 
There is also well ingrained diglossia in the Valencian Community, Ibiza, and to a lesser extent, in the rest of the Balearic islands. During the 20th century many Catalans emigrated or went into exile to Venezuela, Mexico, Cuba, Argentina, and other South American countries. They formed a large number of Catalan colonies that today continue to maintain the Catalan language. They also founded many Catalan casals (associations). Classification and relationship with other Romance languages One classification of Catalan is given by Pèire Bèc: Romance languages Italo-Western languages Western Romance languages Gallo-Iberian languages Gallo-Romance languages Occitano-Romance languages Catalan language However, the ascription of Catalan to the Occitano-Romance branch of Gallo-Romance languages is not shared by all linguists and philologists, particularly among Spanish ones, such as Ramón Menéndez Pidal. Catalan bears varying degrees of similarity to the linguistic varieties subsumed under the cover term Occitan language (see also differences between Occitan and Catalan and Gallo-Romance languages). Thus, as it should be expected from closely related languages, Catalan today shares many traits with other Romance languages. Relationship with other Romance languages Some include Catalan in Occitan, as the linguistic distance between this language and some Occitan dialects (such as the Gascon language) is similar to the distance among different Occitan dialects. Catalan was considered a dialect of Occitan until the end of the 19th century and still today remains its closest relative. Catalan shares many traits with the other neighboring Romance languages (Occitan, French, Italian, Sardinian as well as Spanish and Portuguese among others). However, despite being spoken mostly on the Iberian Peninsula, Catalan has marked differences with the Iberian Romance group (Spanish and Portuguese) in terms of pronunciation, grammar, and especially vocabulary; it shows instead its closest affinity with languages native to France and northern Italy, particularly Occitan and to a lesser extent Gallo-Romance (Franco-Provençal, French, Gallo-Italian). According to Ethnologue, the lexical similarity between Catalan and other Romance languages is: 87% with Italian; 85% with Portuguese and Spanish; 76% with Ladin and Romansh; 75% with Sardinian; and 73% with Romanian. During much of its history, and especially during the Francoist dictatorship (1939–1975), the Catalan language was ridiculed as a mere dialect of Spanish. This view, based on political and ideological considerations, has no linguistic validity. Spanish and Catalan have important differences in their sound systems, lexicon, and grammatical features, placing the language in features closer to Occitan (and French). There is evidence that, at least from the 2nd century , the vocabulary and phonology of Roman Tarraconensis was different from the rest of Roman Hispania. Differentiation arose generally because Spanish, Asturian, and Galician-Portuguese share certain peripheral archaisms (Spanish , Asturian and Portuguese vs. Catalan , Occitan "to boil") and innovatory regionalisms (Sp , Ast vs. Cat , Oc "bullock"), while Catalan has a shared history with the Western Romance innovative core, especially Occitan. Like all Romance languages, Catalan has a handful of native words which are unique to it, or rare elsewhere. 
These include: verbs: 'to fasten; transfix' > 'to compose, write up', > 'to combine, conjugate', > 'to wake; awaken', 'to thicken; crowd together' > 'to save, keep', > 'to miss, yearn, pine for', 'to investigate, track' > Old Catalan enagar 'to incite, induce', > OCat ujar 'to exhaust, fatigue', > 'to appease, mollify', > 'to reject, refuse'; nouns: > 'pomace', > 'reedmace', > 'catarrh', > 'snowdrift', > 'ardor, passion', > 'brake', > 'avalanche', > 'edge, border', 'sawfish' > pestriu > 'thresher shark, smooth hound; ray', 'live coal' > 'spark', > tardaó > 'autumn'. The Gothic superstrate produced different outcomes in Spanish and Catalan. For example, Catalan "mud" and "to roast", of Germanic origin, contrast with Spanish and , of Latin origin; whereas Catalan "spinning wheel" and "temple", of Latin origin, contrast with Spanish and , of Germanic origin. The same happens with Arabic loanwords. Thus, Catalan "large earthenware jar" and "tile", of Arabic origin, contrast with Spanish and , of Latin origin; whereas Catalan "oil" and "olive", of Latin origin, contrast with Spanish and . However, the Arabic element in Spanish is generally much more prevalent. Situated between two large linguistic blocks (Iberian Romance and Gallo-Romance), Catalan has many unique lexical choices, such as "to miss somebody", "to calm somebody down", and "reject". Geographic distribution Catalan-speaking territories Traditionally Catalan-speaking territories are sometimes called the (Catalan Countries), a denomination based on cultural affinity and common heritage, that has also had a subsequent political interpretation but no official status. Various interpretations of the term may include some or all of these regions. Number of speakers The number of people known to be fluent in Catalan varies depending on the sources used. A 2004 study did not count the total number of speakers, but estimated a total of 9–9.5 million by matching the percentage of speakers to the population of each area where Catalan is spoken. The web site of the Generalitat de Catalunya estimated that as of 2004 there were 9,118,882 speakers of Catalan. These figures only reflect potential speakers; today it is the native language of only 35.6% of the Catalan population. According to Ethnologue, Catalan had 4.1 million native speakers and 5.1 million second-language speakers in 2021. According to a 2011 study the total number of Catalan speakers is over 9.8 million, with 5.9 million residing in Catalonia. More than half of them speak Catalan as a second language, with native speakers being about 4.4 million of those (more than 2.8 in Catalonia). Very few Catalan monoglots exist; basically, virtually all of the Catalan speakers in Spain are bilingual speakers of Catalan and Spanish, with a sizable population of Spanish-only speakers of immigrant origin (typically born outside Catalonia or whose parents were both born outside Catalonia) existing in the major Catalan urban areas as well. In Roussillon, only a minority of French Catalans speak Catalan nowadays, with French being the majority language for the inhabitants after a continued process of language shift. According to a 2019 survey by the Catalan government, 31.5% of the inhabitants of Catalonia have Catalan as first language at home whereas 52.7% have Spanish, 2.8% both Catalan and Spanish and 10.8% other languages. Spanish is the most spoken language in Barcelona (according to the linguistic census held by the Government of Catalonia in 2013) and it is understood almost universally. 
According to this 2013 census, Catalan is also very commonly spoken in the city of 1,501,262 inhabitants: it is understood by 95% of the population, while 72.3% over the age of 2 can speak it (1,137,816), 79% can read it (1,246,555), and 53% can write it (835,080). The proportion in Barcelona who can speak it, 72.3%, is lower than that of the overall Catalan population, of whom 81.2% over the age of 15 speak the language. Knowledge of Catalan has increased significantly in recent decades thanks to a language immersion educational system. An important social characteristic of the Catalan language is that all the areas where it is spoken are bilingual in practice: together with the French language in Roussillon, with Italian in Alghero, with Spanish and French in Andorra and with Spanish in the rest of the territories.
Tables: Level of knowledge (% of the population 15 years old and older); Social use (% of the population 15 years old and older); Native language. Notes: 1. The number of people who understand Catalan includes those who can speak it. 2. Figures relate to all self-declared capable speakers, not just native speakers.
Phonology
Catalan phonology varies by dialect. Notable features include: Marked contrast of the vowel pairs and , as in other Western Romance languages, other than Spanish. Lack of diphthongization of Latin short , , as in Galician and Portuguese, but unlike French, Spanish, or Italian. Abundance of diphthongs containing , as in Galician and Portuguese. In contrast to other Romance languages, Catalan has many monosyllabic words, and these may end in a wide variety of consonants, including some consonant clusters. Additionally, Catalan has final obstruent devoicing, which gives rise to an abundance of such couplets as ("male friend") vs. ("female friend"). Central Catalan pronunciation is considered to be standard for the language. The descriptions below are mostly representative of this variety. For the differences in pronunciation between the different dialects, see the section on pronunciation of dialects in this article.
Vowels
Catalan has inherited the typical vowel system of Vulgar Latin, with seven stressed phonemes: , a common feature in Western Romance, with the exception of Spanish. Balearic also has instances of stressed . Dialects differ in their degrees of vowel reduction, and in the incidence of the pair . In Central Catalan, unstressed vowels reduce to three: ; ; remains distinct. The other dialects have different vowel reduction processes (see the section on pronunciation of dialects in this article).
Consonants
The consonant system of Catalan is rather conservative. has a velarized allophone in syllable coda position in most dialects. However, is velarized irrespective of position in Eastern dialects like Majorcan and standard Eastern Catalan. occurs in Balearic, Algherese, standard Valencian and some areas in southern Catalonia. It has merged with elsewhere. Voiced obstruents undergo final-obstruent devoicing: . Voiced stops become lenited to approximants in syllable onsets, after continuants: > , > , > . Exceptions include after lateral consonants, and after . In coda position, these sounds are realized as stops, except in some Valencian dialects where they are lenited. There is some confusion in the literature about the precise phonetic characteristics of , , , . Some sources describe them as "postalveolar"; others as "back alveolo-palatal", implying that the characters would be more accurate.
However, in the literature only the characters for palato-alveolar affricates and fricatives are used, even when the same sources use for other languages like Polish and Chinese. The distribution of the two rhotics and closely parallels that of Spanish. Between vowels, the two contrast, but they are otherwise in complementary distribution: in the onset of the first syllable in a word, appears unless preceded by a consonant. Dialects vary with regard to rhotics in the coda, with Western Catalan generally featuring and Central Catalan dialects featuring a weakly trilled unless it precedes a vowel-initial word in the same prosodic unit, in which case appears. In careful speech, , , may be geminated. Geminated may also occur. Some analyze intervocalic as the result of gemination of a single rhotic phoneme. This is similar to the common analysis of Spanish and Portuguese rhotics.
Phonological evolution
Sociolinguistics
Catalan sociolinguistics studies the situation of Catalan in the world and the different varieties that this language presents. It is a subdiscipline of Catalan philology and related studies, and its objective is to analyze the relationship between the Catalan language, its speakers, and their immediate social reality (including contact with other languages).
Preferential subjects of study
Dialects of Catalan
Variations of Catalan by class, gender, profession, age and level of studies
Process of linguistic normalization
Relations between Catalan and Spanish or French
Perception of the language by Catalan speakers and non-speakers
Presence of Catalan in several fields: labeling, public function, media, professional sectors
Dialects
Overview
The dialects of the Catalan language feature relative uniformity, especially when compared to other Romance languages, in terms of vocabulary, semantics, syntax, morphology, and phonology. Mutual intelligibility between dialects is very high, with estimates ranging from 90% to 95%. The only exception is the isolated, idiosyncratic Algherese dialect. Catalan is split into two major dialectal blocks: Eastern and Western. The main difference lies in the treatment of unstressed and , which have merged to in Eastern dialects but remain distinct as and in Western dialects. There are a few other differences in pronunciation, verbal morphology, and vocabulary. Western Catalan comprises the two dialects of Northwestern Catalan and Valencian; the Eastern block comprises four dialects: Central Catalan, Balearic, Rossellonese, and Algherese. Each dialect can be further subdivided into several subdialects. The terms "Catalan" and "Valencian" (respectively used in Catalonia and the Valencian Community) refer to two varieties of the same language. There are two institutions regulating the two standard varieties, the Institute of Catalan Studies in Catalonia and the Valencian Academy of the Language in the Valencian Community. Central Catalan is considered the standard pronunciation of the language and has the largest number of speakers. It is spoken in the densely populated regions of the Barcelona province, the eastern half of the province of Tarragona, and most of the province of Girona. Catalan has an inflectional grammar. Nouns have two genders (masculine, feminine), and two numbers (singular, plural). Pronouns additionally can have a neuter gender, and some are also inflected for case and politeness, and can be combined in very complex ways. Verbs are split into several paradigms and are inflected for person, number, tense, aspect, mood, and gender.
In terms of pronunciation, Catalan has many words ending in a wide variety of consonants and some consonant clusters, in contrast with many other Romance languages. Pronunciation Vowels Catalan has inherited the typical vowel system of Vulgar Latin, with seven stressed phonemes: , a common feature in Western Romance, except Spanish. Balearic has also instances of stressed . Dialects differ in the different degrees of vowel reduction, and the incidence of the pair . In Eastern Catalan (except Majorcan), unstressed vowels reduce to three: ; ; remains distinct. There are a few instances of unreduced , in some words. Algherese has lowered to . In Majorcan, unstressed vowels reduce to four: follow the Eastern Catalan reduction pattern; however reduce to , with remaining distinct, as in Western Catalan. In Western Catalan, unstressed vowels reduce to five: ; ; remain distinct. This reduction pattern, inherited from Proto-Romance, is also found in Italian and Portuguese. Some Western dialects present further reduction or vowel harmony in some cases. Central, Western, and Balearic differ in the lexical incidence of stressed and . Usually, words with in Central Catalan correspond to in Balearic and in Western Catalan. Words with in Balearic almost always have in Central and Western Catalan as well. As a result, Central Catalan has a much higher incidence of . Consonants Morphology Western Catalan: In verbs, the ending for 1st-person present indicative is in verbs of the 1st conjugation and -∅ in verbs of the 2nd and 3rd conjugations in most of the Valencian Community, or in all verb conjugations in the Northern Valencian Community and Western Catalonia.E.g. , , (Valencian); , , (Northwestern Catalan). Eastern Catalan: In verbs, the ending for 1st-person present indicative is , , or -∅ in all conjugations. E.g. (Central), (Balearic), and (Northern), all meaning ('I speak'). Western Catalan: In verbs, the inchoative endings are /, , , /. Eastern Catalan: In verbs, the inchoative endings are , , , . Western Catalan: In nouns and adjectives, maintenance of of medieval plurals in proparoxytone words.E.g. 'men', 'youth'. Eastern Catalan: In nouns and adjectives, loss of of medieval plurals in proparoxytone words.E.g. 'men', 'youth' (Ibicencan, however, follows the model of Western Catalan in this case). Vocabulary Despite its relative lexical unity, the two dialectal blocks of Catalan (Eastern and Western) show some differences in word choices. Any lexical divergence within any of the two groups can be explained as an archaism. Also, usually Central Catalan acts as an innovative element. Standards Standard Catalan, virtually accepted by all speakers, is mostly based on Eastern Catalan, which is the most widely used dialect. Nevertheless, the standards of the Valencian Community and the Balearics admit alternative forms, mostly traditional ones, which are not current in eastern Catalonia. The most notable difference between both standards is some tonic accentuation, for instance: (IEC) – (AVL). Nevertheless, AVL's standard keeps the grave accent , while pronouncing it as rather than , in some words like: ('what'), or . 
Other divergences include the use of (AVL) in some words instead of like in / ('almond'), / ('back'), the use of elided demonstratives ( 'this', 'that') in the same level as reinforced ones () or the use of many verbal forms common in Valencian, and some of these common in the rest of Western Catalan too, like subjunctive mood or inchoative conjugation in at the same level as or the priority use of morpheme in 1st person singular in present indicative ( verbs): instead of ('I buy'). In the Balearic Islands, IEC's standard is used but adapted for the Balearic dialect by the University of the Balearic Islands's philological section. In this way, for instance, IEC says it is correct writing as much as ('we sing'), but the university says that the priority form in the Balearic Islands must be in all fields. Another feature of the Balearic standard is the non-ending in the 1st person singular present indicative: ('I buy'), ('I fear'), ('I sleep'). In Alghero, the IEC has adapted its standard to the Algherese dialect of Sardinia. In this standard one can find, among other features: the definite article instead of , special possessive pronouns and determinants ('mine'), ('his/her'), ('yours'), and so on, the use of in the imperfect tense in all conjugations: , , ; the use of many archaic words, usual words in Algherese: instead of ('less'), instead of ('someone'), instead of ('which'), and so on; and the adaptation of weak pronouns. In 1999, Catalan (Algherese dialect) was among the twelve minority languages officially recognized as Italy's "historical linguistic minorities" by the Italian State under Law No. 482/1999. In 2011, the Aragonese government passed a decree approving the statutes of a new language regulator of Catalan in La Franja (the so-called Catalan-speaking areas of Aragon) as originally provided for by Law 10/2009. The new entity, designated as , shall allow a facultative education in Catalan and a standardization of the Catalan language in La Franja. Status of Valencian Valencian is classified as a Western dialect, along with the northwestern varieties spoken in Western Catalonia (provinces of Lleida and the western half of Tarragona). Central Catalan has 90% to 95% inherent intelligibility for speakers of Valencian. Linguists, including Valencian scholars, deal with Catalan and Valencian as the same language. The official regulating body of the language of the Valencian Community, the Valencian Academy of Language (Acadèmia Valenciana de la Llengua, AVL) declares the linguistic unity between Valencian and Catalan varieties. The AVL, created by the Valencian parliament, is in charge of dictating the official rules governing the use of Valencian, and its standard is based on the Norms of Castelló (Normes de Castelló). Currently, everyone who writes in Valencian uses this standard, except the Royal Academy of Valencian Culture (Acadèmia de Cultura Valenciana, RACV), which uses for Valencian an independent standard. Despite the position of the official organizations, an opinion poll carried out between 2001 and 2004 showed that the majority of the Valencian people consider Valencian different from Catalan. This position is promoted by people who do not use Valencian regularly. Furthermore, the data indicates that younger generations educated in Valencian are much less likely to hold these views. 
A minority of Valencian scholars active in fields other than linguistics defends the position of the Royal Academy of Valencian Culture (Acadèmia de Cultura Valenciana, RACV), which uses for Valencian a standard independent from Catalan. This clash of opinions has sparked much controversy. For example, during the drafting of the European Constitution in 2004, the Spanish government supplied the EU with translations of the text into Basque, Galician, Catalan, and Valencian, but the latter two were identical. Vocabulary Word choices Despite its relative lexical unity, the two dialectal blocks of Catalan (Eastern and Western) show some differences in word choices. Any lexical divergence within any of the two groups can be explained as an archaism. Also, usually Central Catalan acts as an innovative element. Literary Catalan allows the use of words from different dialects, except those of very restricted use. However, from the 19th century onwards, there has been a tendency towards favoring words of Northern dialects to the detriment of others, Latin and Greek loanwords Like other languages, Catalan has a large list of loanwords from Greek and Latin. This process started very early, and one can find such examples in Ramon Llull's work. In the 14th and 15th centuries Catalan had a far greater number of Greco-Latin loanwords than other Romance languages, as is attested for example in Roís de Corella's writings. The incorporation of learned, or "bookish" words from its own ancestor language, Latin, into Catalan is arguably another form of lexical borrowing through the influence of written language and the liturgical language of the Church. Throughout the Middle Ages and into the early modern period, most literate Catalan speakers were also literate in Latin; and thus they easily adopted Latin words into their writing—and eventually speech—in Catalan. Word formation The process of morphological derivation in Catalan follows the same principles as the other Romance languages, where agglutination is common. Many times, several affixes are appended to a preexisting lexeme, and some sound alternations can occur, for example ("electrical") vs. . Prefixes are usually appended to verbs, as in ("foresee"). There is greater regularity in the process of word-compounding, where one can find compounded words formed much like those in English. Writing system Catalan uses the Latin script, with some added symbols and digraphs. The Catalan orthography is systematic and largely phonologically based. Standardization of Catalan was among the topics discussed during the First International Congress of the Catalan Language, held in Barcelona October 1906. Subsequently, the Philological Section of the Institut d'Estudis Catalans (IEC, founded in 1911) published the Normes ortogràfiques in 1913 under the direction of Antoni Maria Alcover and Pompeu Fabra. In 1932, Valencian writers and intellectuals gathered in Castelló de la Plana to make a formal adoption of the so-called Normes de Castelló, a set of guidelines following Pompeu Fabra's Catalan language norms. Grammar The grammar of Catalan is similar to other Romance languages. Features include: Use of definite and indefinite articles. Nouns, adjectives, pronouns, and articles are inflected for gender (masculine and feminine), and number (singular and plural). There is no case inflexion, except in pronouns. Verbs are highly inflected for person, number, tense, aspect, and mood (including a subjunctive). There are no modal auxiliaries. 
Word order is freer than in English.
Gender and number inflection
In gender inflection, the most notable feature (compared to Portuguese, Spanish, or Italian) is the loss of the typical masculine suffix . Thus, the alternation of / has been replaced by ø/. There are only a few exceptions, like / ("scarce"). Many not completely predictable morphological alternations may occur, such as:
Affrication: / ("insane") vs. / ("ugly")
Loss of : / ("flat") vs. / ("second")
Final obstruent devoicing: / ("felt") vs. / ("said")
Catalan has few suppletive couplets, like Italian and Spanish, and unlike French. Thus, Catalan has / ("boy"/"girl") and / ("cock"/"hen"), whereas French has / and /. There is a tendency to abandon traditionally gender-invariable adjectives in favor of marked ones, something prevalent in Occitan and French. Thus, one can find / ("boiling") in contrast with traditional /. As in the other Western Romance languages, the main plural expression is the suffix , which may create morphological alternations similar to the ones found in gender inflection, albeit more rarely. The most important one is the addition of before certain consonant groups, a phonetic phenomenon that does not affect feminine forms: / ("the pulse"/"the pulses") vs. / ("the dust"/"the dusts").
Determiners
The inflection of determiners is complex, especially because of the high number of elisions, but it is similar to that of the neighboring languages. Catalan has more contractions of preposition + article than Spanish, like ("of + the [plural]"), but not as many as Italian (which has , , , etc.). Central Catalan has almost completely abandoned unstressed possessives (, etc.) in favor of constructions of article + stressed forms (, etc.), a feature shared with Italian.
Personal pronouns
The morphology of Catalan personal pronouns is complex, especially in unstressed forms, which are numerous (13 distinct forms, compared to 11 in Spanish or 9 in Italian). Features include the gender-neutral and the great degree of freedom when combining different unstressed pronouns (65 combinations). Catalan pronouns exhibit T–V distinction, like all other Romance languages (and most European languages, but not Modern English). This feature implies the use of a different set of second-person pronouns for formality. This flexibility allows Catalan to use extraposition extensively, much more than French or Spanish. Thus, Catalan can have ("they recommended me to him"), whereas in French one must say , and in Spanish . This allows the placement of almost any nominal term as a sentence topic, without having to use the passive voice so often (as in French or English), or identifying the direct object with a preposition (as in Spanish).
Only the first conjugation is nowadays productive (with about 3500 common verbs), whereas the third (the subtype of , with about 700 common verbs) is semiproductive. The verbs of the second conjugation are fewer than 100, and it is not possible to create new ones, except by compounding. Syntax The grammar of Catalan follows the general pattern of Western Romance languages. The primary word order is subject–verb–object. However, word order is very flexible. Commonly, verb-subject constructions are used to achieve a semantic effect. The sentence "The train has arrived" could be translated as or . Both sentences mean "the train has arrived", but the former puts a focus on the train, while the latter puts a focus on the arrival. This subtle distinction is described as "what you might say while waiting in the station" versus "what you might say on the train." Catalan names In Spain, every person officially has two surnames, one of which is the father's first surname and the other is the mother's first surname. The law contemplates the possibility of joining both surnames with the Catalan conjunction i ("and"). Sample text Selected text from Manuel de Pedrolo's 1970 novel ("A love affair outside the city"). See also Organizations Institut d'Estudis Catalans (Catalan Studies Institute) Acadèmia Valenciana de la Llengua (Valencian Academy of the Language) Òmnium Cultural Plataforma per la Llengua Scholars Marina Abràmova Germà Colón Dominique de Courcelles Martí de Riquer Arthur Terry Lawrence Venuti Other Languages of Catalonia Linguistic features of Spanish as spoken by Catalan speakers Languages of France Languages of Italy Languages of Spain Normes de Castelló Pompeu Fabra Notes References Works cited External links Institutions Consorci per a la Normalització Lingüística Institut d'Estudis Catalans Acadèmia Valenciana de la Llengua About the Catalan language llengua.gencat.cat, by the Government of Catalonia Gramàtica de la Llengua Catalana (Catalan grammar), from the Institute for Catalan Studies Gramàtica Normativa Valenciana (2006, Valencian grammar), from the Acadèmia Valenciana de la Llengua verbs.cat (Catalan verb conjugations with online trainers) Catalan and its dialects LEXDIALGRAM – online portal of 19th-century dialectal lexicographical and grammatical works of Catalan hosted by the University of Barcelona Monolingual dictionaries DIEC2, from the Institut d'Estudis Catalans Gran Diccionari de la Llengua Catalana , from Enciclopèdia Catalana Diccionari Català-Valencià-Balear d'Alcover i Moll , from the Institut d'Estudis Catalans Diccionari Normatiu Valencià (AVL), from the Acadèmia Valenciana de la Llengua diccionarivalencia.com (online Valencian dictionary) Diccionari Invers de la Llengua Catalana (dictionary of Catalan words spelled backwards) Bilingual and multilingual dictionaries Diccionari de la Llengua Catalana Multilingüe (Catalan ↔ English, French, German and Spanish), from Enciclopèdia Catalana DACCO – open source, collaborative dictionary (Catalan–English) Automated translation systems Traductor automated, online translations of text and web pages (Catalan ↔ English, French and Spanish), from gencat.cat by the Government of Catalonia Phrasebooks Catalan phrasebook on Wikivoyage Learning resources Catalan Swadesh list of basic vocabulary words, from Wiktionary's Swadesh-list appendix Catalan-language online encyclopedia Enciclopèdia Catalana Subject–verb–object languages Stress-timed languages
5285
https://en.wikipedia.org/wiki/STS-51-F
STS-51-F
STS-51-F (also known as Spacelab 2) was the 19th flight of NASA's Space Shuttle program and the eighth flight of Space Shuttle Challenger. It launched from Kennedy Space Center, Florida, on July 29, 1985, and landed eight days later on August 6, 1985. While STS-51-F's primary payload was the Spacelab 2 laboratory module, the payload that received the most publicity was the Carbonated Beverage Dispenser Evaluation, an experiment in which both Coca-Cola and Pepsi tried to make their carbonated drinks available to astronauts. A helium-cooled infrared telescope (IRT) was also flown on this mission, and while it did have some problems, it observed 60% of the galactic plane in infrared light. During launch, Challenger experienced multiple sensor failures in its center main engine (SSME number 1), which led to the engine shutting down and forced the shuttle to perform an "Abort to Orbit" (ATO) emergency procedure. It is the only Shuttle mission to have carried out an abort after launching. As a result of the ATO, the mission was carried out at a slightly lower orbital altitude.
Crew
Backup crew
Crew seating arrangements
Crew notes
As with previous Spacelab missions, the crew was divided between two 12-hour shifts. Acton, Bridges and Henize made up the "Red Team" while Bartoe, England and Musgrave comprised the "Blue Team"; commander Fullerton could take either shift when needed. Challenger carried two Extravehicular Mobility Units (EMU) in the event of an emergency spacewalk, which would have been performed by England and Musgrave.
Launch
STS-51-F's first launch attempt on July 12, 1985, was halted with the countdown at T−3 seconds after main engine ignition, when a malfunction of the number two RS-25 coolant valve caused an automatic launch abort. Challenger launched successfully on its second attempt on July 29, 1985, at 17:00 EDT, after a delay of 1 hour 37 minutes due to a problem with the table maintenance block update uplink. At 3 minutes 31 seconds into the ascent, one of the center engine's two high-pressure fuel turbopump turbine discharge temperature sensors failed. Two minutes and twelve seconds later, the second sensor failed, causing the shutdown of the center engine. This was the only in-flight RS-25 failure of the Space Shuttle program. Approximately 8 minutes into the flight, one of the same temperature sensors in the right engine failed, and the remaining right-engine temperature sensor displayed readings near the redline for engine shutdown. Booster Systems Engineer Jenny M. Howard acted quickly to recommend that the crew inhibit any further automatic RS-25 shutdowns based on readings from the remaining sensors, preventing the potential shutdown of a second engine and a possible abort mode that might have resulted in the loss of crew and vehicle (LOCV). The failed RS-25 resulted in an Abort to Orbit (ATO) trajectory, whereby the shuttle achieved a lower-than-planned orbital altitude. The plan had been for a by orbit, but the mission was carried out at by .
Mission summary
STS-51-F's primary payload was the laboratory module Spacelab 2. A special part of the modular Spacelab system, the "igloo", which was located at the head of a three-pallet train, provided on-site support to instruments mounted on pallets. The main mission objective was to verify performance of Spacelab systems, determine the interface capability of the orbiter, and measure the environment created by the spacecraft.
Experiments covered life sciences, plasma physics, astronomy, high-energy astrophysics, solar physics, atmospheric physics and technology research. Despite the mission replanning necessitated by Challenger's abort-to-orbit trajectory, the Spacelab mission was declared a success. The flight marked the first time the European Space Agency (ESA) Instrument Pointing System (IPS) was tested in orbit. This unique pointing instrument was designed with an accuracy of one arcsecond. Initially, some problems were experienced when it was commanded to track the Sun, but a series of software fixes were made and the problem was corrected. In addition, Anthony W. England became the second amateur radio operator to transmit from space during the mission.
Spacelab Infrared Telescope
The Spacelab Infrared Telescope (IRT) was also flown on the mission. The IRT was a aperture helium-cooled infrared telescope, observing light at wavelengths between 1.7 and 118 μm. Heat emissions from the Shuttle were thought to have corrupted the long-wavelength data, but the telescope still returned useful astronomical data. Another problem was that a piece of mylar insulation broke loose and floated into the line of sight of the telescope. The IRT collected infrared data on 60% of the galactic plane. (See also List of largest infrared telescopes.) A later space mission that experienced a stray-light problem from debris was ESA's Gaia astrometry spacecraft, launched in 2013; the source of the stray light was later identified as fibers of the sunshield protruding beyond the edges of the shield.
Other payloads
The Plasma Diagnostics Package (PDP), which had previously flown on STS-3, made its return on the mission as part of a set of plasma physics experiments designed to study the Earth's ionosphere. During the third day of the mission, it was grappled out of the payload bay by the Remote Manipulator System (Canadarm) and released for six hours. During this time, Challenger maneuvered around the PDP as part of a targeted proximity operations exercise. The PDP was successfully grappled by the Canadarm and returned to the payload bay at the beginning of the fourth day of the mission. In a heavily publicized marketing experiment, astronauts aboard STS-51-F drank carbonated beverages from specially designed cans provided by Cola Wars competitors Coca-Cola and Pepsi. According to Acton, after Coke developed its experimental dispenser for an earlier shuttle flight, Pepsi insisted to American president Ronald Reagan that Coke should not be the first cola in space. The experiment was delayed until Pepsi could develop its own system, and the two companies' products were assigned to STS-51-F. Blue Team tested Coke, and Red Team tested Pepsi. As part of the experiment, each team was photographed with the cola logo. Acton said that while the sophisticated Coke system "dispensed soda kind of like what we're used to drinking on Earth", the Pepsi can was a shaving cream can with the Pepsi logo on a paper wrapper, which "dispensed soda filled with bubbles" that was "not very drinkable". Acton said that when he gives speeches in schools, audiences are much more interested in hearing about the cola experiment than in solar physics. Post-flight, the astronauts revealed that they preferred Tang, in part because it could be mixed on-orbit with existing chilled-water supplies, whereas there was no dedicated refrigeration equipment on board to chill the cans, which also fizzed excessively in microgravity.
In an experiment during the mission, thruster rockets were fired at a point over Tasmania and also above Boston to create two "holes" – plasma depletion regions – in the ionosphere. A worldwide group of geophysicists collaborated on the observations made from Spacelab 2.
Landing
Challenger landed at Edwards Air Force Base, California, on August 6, 1985, at 12:45:26 p.m. PDT. Its rollout distance was . The mission had been extended by 17 orbits for additional payload activities due to the Abort to Orbit. The orbiter arrived back at Kennedy Space Center on August 11, 1985.
Mission insignia
The mission insignia was designed by Houston, Texas, artist Skip Bradley. Challenger is depicted ascending toward the heavens in search of new knowledge in the field of solar and stellar astronomy, with its Spacelab 2 payload. The constellations Leo and Orion are shown in the positions they were in relative to the Sun during the flight. The nineteen stars indicate that the mission was the 19th shuttle flight.
Legacy
One of the purposes of the mission was to test how suitable the Shuttle was for conducting infrared observations, and the IRT was operated on this mission. However, the orbiter was found to have some drawbacks for infrared astronomy, and this led to later infrared telescopes flying free of the Shuttle orbiter.
See also
List of human spaceflights
List of Space Shuttle missions
Salyut 7 (a space station of the Soviet Union also in orbit at this time)
Soyuz T-13 (a mission to salvage that space station in the summer of 1985)
References
External links
NASA mission summary Press Kit STS-51F Video Highlights Space Coke can Carbonated Drinks in Space YouTube: STS-51F launch, abort and landing July 12 launch attempt Space Shuttle Missions Summary
Space Shuttle missions Edwards Air Force Base 1985 in spaceflight 1985 in the United States Crewed space observatories Spacecraft launched in 1985 Spacecraft which reentered in 1985
5288
https://en.wikipedia.org/wiki/Classical%20period%20%28music%29
Classical period (music)
The Classical period was an era of classical music between roughly 1750 and 1820. The Classical period falls between the Baroque and the Romantic periods. Classical music has a lighter, clearer texture than Baroque music, but a more varying use of musical form, which is, in simpler terms, the rhythm and organization of any given piece of music. It is mainly homophonic, using a clear melody line over a subordinate chordal accompaniment, but counterpoint was by no means forgotten, especially in liturgical vocal music and, later in the period, secular instrumental music. It also makes use of style galant which emphasized light elegance in place of the Baroque's dignified seriousness and impressive grandeur. Variety and contrast within a piece became more pronounced than before and the orchestra increased in size, range, and power. The harpsichord was replaced as the main keyboard instrument by the piano (or fortepiano). Unlike the harpsichord, which plucks strings with quills, pianos strike the strings with leather-covered hammers when the keys are pressed, which enables the performer to play louder or softer (hence the original name "fortepiano," literally "loud soft") and play with more expression; in contrast, the force with which a performer plays the harpsichord keys does not change the sound. Instrumental music was considered important by Classical period composers. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony (performed by an orchestra) and the solo concerto, which featured a virtuoso solo performer playing a solo work for violin, piano, flute, or another instrument, accompanied by an orchestra. Vocal music, such as songs for a singer and piano (notably the work of Schubert), choral works, and opera (a staged dramatic work for singers and orchestra) were also important during this period. The best-known composers from this period are Joseph Haydn, Wolfgang Amadeus Mozart, Ludwig van Beethoven, and Franz Schubert; other names in this period include: Carl Philipp Emanuel Bach, Johann Christian Bach, Luigi Boccherini, Domenico Cimarosa, Joseph Martin Kraus, Muzio Clementi, Christoph Willibald Gluck, Carl Ditters von Dittersdorf, André Grétry, Pierre-Alexandre Monsigny, Leopold Mozart, Michael Haydn, Giovanni Paisiello, Johann Baptist Wanhal, François-André Danican Philidor, Niccolò Piccinni, Antonio Salieri, Etienne Nicolas Mehul, Georg Christoph Wagenseil, Georg Matthias Monn, Johann Gottlieb Graun, Carl Heinrich Graun, Franz Benda, Georg Anton Benda, Johann Georg Albrechtsberger, Mauro Giuliani, Christian Cannabich and the Chevalier de Saint-Georges. Beethoven is regarded either as a Romantic composer or a Classical period composer who was part of the transition to the Romantic era. Schubert is also a transitional figure, as were Johann Nepomuk Hummel, Luigi Cherubini, Gaspare Spontini, Gioachino Rossini, Carl Maria von Weber, John Field, Jan Ladislav Dussek and Niccolò Paganini. The period is sometimes referred to as the era of Viennese Classicism (), since Gluck, Haydn, Salieri, Mozart, Beethoven, and Schubert all worked in Vienna. Classicism In the middle of the 18th century, Europe began to move toward a new style in architecture, literature, and the arts, generally known as Neoclassicism. This style sought to emulate the ideals of Classical antiquity, especially those of Classical Greece. 
Classical music was characterized by formality and an emphasis on order and hierarchy, and by a "clearer", "cleaner" style that used clearer divisions between parts (notably a clear, single melody accompanied by chords), brighter contrasts and "tone colors" (achieved by the use of dynamic changes and modulations to more keys). In contrast with the richly layered music of the Baroque era, Classical music moved towards simplicity rather than complexity. In addition, the typical size of orchestras began to increase, giving orchestras a more powerful sound. The remarkable development of ideas in "natural philosophy" had already established itself in the public consciousness. In particular, Newton's physics was taken as a paradigm: structures should be well-founded in axioms and be both well-articulated and orderly. This taste for structural clarity began to affect music, which moved away from the layered polyphony of the Baroque period toward a style known as homophony, in which the melody is played over a subordinate harmony. This move meant that chords became a much more prevalent feature of music, even if they interrupted the melodic smoothness of a single part. As a result, the tonal structure of a piece of music became more audible. The new style was also encouraged by changes in the economic order and social structure. As the 18th century progressed, the nobility became the primary patrons of instrumental music, while public taste increasingly preferred lighter comic operas. This led to changes in the way music was performed, the most crucial of which was the move to standard instrumental groups and the reduction in the importance of the continuo—the rhythmic and harmonic groundwork of a piece of music, typically played by a keyboard (harpsichord or organ) and usually accompanied by a varied group of bass instruments, including cello, double bass, bass viol, and theorbo. One way to trace the decline of the continuo and its figured chords is to examine the disappearance of the term obbligato, meaning a mandatory instrumental part in a work of chamber music. In Baroque compositions, additional instruments could be added to the continuo group according to the group or leader's preference; in Classical compositions, all parts were specifically noted, though not always notated, so the term "obbligato" became redundant. By 1800, basso continuo was practically extinct, except for the occasional use of a pipe organ continuo part in a religious Mass in the early 1800s. Economic changes also had the effect of altering the balance of availability and quality of musicians. While in the late Baroque a major composer would have the entire musical resources of a town to draw on, the musical forces available at an aristocratic hunting lodge or small court were smaller and more fixed in their level of ability. This was a spur to having simpler parts for ensemble musicians to play, and in the case of a resident virtuoso group, a spur to writing spectacular, idiomatic parts for certain instruments, as in the case of the Mannheim orchestra, or virtuoso solo parts for particularly skilled violinists or flautists. In addition, the appetite of audiences for a continual supply of new music carried over from the Baroque. This meant that works had to be performable with, at best, one or two rehearsals. Even after 1790, Mozart wrote about "the rehearsal", with the implication that his concerts would have only one rehearsal.
Since there was a greater emphasis on a single melodic line, there was also greater emphasis on notating that line for dynamics and phrasing. This contrasts with the Baroque era, when melodies were typically written with no dynamics, phrasing marks or ornaments, as it was assumed that the performer would improvise these elements on the spot. In the Classical era, it became more common for composers to indicate where they wanted performers to play ornaments such as trills or turns. The simplification of texture made such instrumental detail more important, and also made the use of characteristic rhythms, such as attention-getting opening fanfares, the funeral march rhythm, or the minuet genre, more important in establishing and unifying the tone of a single movement. The Classical period also saw the gradual development of sonata form, a set of structural principles for music that reconciled the Classical preference for melodic material with harmonic development, and which could be applied across musical genres. The sonata itself continued to be the principal form for solo and chamber music, while later in the Classical period the string quartet became a prominent genre. The symphony form for orchestra was created in this period (this is popularly attributed to Joseph Haydn). The concerto grosso (a concerto for more than one musician), a very popular form in the Baroque era, began to be replaced by the solo concerto, featuring only one soloist. Composers began to place more importance on the particular soloist's ability to show off virtuoso skills, with challenging, fast scale and arpeggio runs. Nonetheless, some concerti grossi remained, the most famous of which is Mozart's Sinfonia Concertante for Violin and Viola in E-flat major.
Main characteristics
In the classical period, the theme consists of phrases with contrasting melodic figures and rhythms. These phrases are relatively brief, typically four bars in length, and can occasionally seem sparse or terse. The texture is mainly homophonic, with a clear melody above a subordinate chordal accompaniment, for instance an Alberti bass. This contrasts with the practice in Baroque music, where a piece or movement would typically have only one musical subject, which would then be worked out in a number of voices according to the principles of counterpoint, while maintaining a consistent rhythm or metre throughout. As a result, Classical music tends to have a lighter, clearer texture than the Baroque. The classical style draws on the style galant, a musical style which emphasised light elegance in place of the Baroque's dignified seriousness and impressive grandeur. Structurally, Classical music generally has a clear musical form, with a well-defined contrast between tonic and dominant, introduced by clear cadences. Dynamics are used to highlight the structural characteristics of the piece. In particular, sonata form and its variants were developed during the early classical period and were frequently used. The Classical approach to structure again contrasts with the Baroque, where a composition would normally move between tonic and dominant and back again, but through a continual progress of chord changes and without a sense of "arrival" at the new key. While counterpoint was less emphasised in the classical period, it was by no means forgotten, especially later in the period, and composers still used counterpoint in "serious" works such as symphonies and string quartets, as well as religious pieces, such as Masses.
The classical musical style was supported by technical developments in instruments. The widespread adoption of equal temperament made classical musical structure possible, by ensuring that cadences in all keys sounded similar. The fortepiano and then the pianoforte replaced the harpsichord, enabling more dynamic contrast and more sustained melodies. Over the Classical period, keyboard instruments became richer, more sonorous and more powerful. The orchestra increased in size and range, and became more standardised. The harpsichord or pipe organ basso continuo role in orchestra fell out of use between 1750 and 1775, leaving the string section. Woodwinds became a self-contained section, consisting of clarinets, oboes, flutes and bassoons. While vocal music such as comic opera was popular, great importance was given to instrumental music. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony, concerto (usually for a virtuoso solo instrument accompanied by orchestra), and light pieces such as serenades and divertimentos. Sonata form developed and became the most important form. It was used to build up the first movement of most large-scale works in symphonies and string quartets. Sonata form was also used in other movements and in single, standalone pieces such as overtures. History Baroque/Classical transition c. 1750–1760 In his book The Classical Style, author and pianist Charles Rosen claims that from 1755 to 1775, composers groped for a new style that was more effectively dramatic. In the High Baroque period, dramatic expression was limited to the representation of individual affects (the "doctrine of affections", or what Rosen terms "dramatic sentiment"). For example, in Handel's oratorio Jephtha, the composer renders four emotions separately, one for each character, in the quartet "O, spare your daughter". Eventually this depiction of individual emotions came to be seen as simplistic and unrealistic; composers sought to portray multiple emotions, simultaneously or progressively, within a single character or movement ("dramatic action"). Thus in the finale of act 2 of Mozart's Die Entführung aus dem Serail, the lovers move "from joy through suspicion and outrage to final reconciliation." Musically speaking, this "dramatic action" required more musical variety. Whereas Baroque music was characterized by seamless flow within individual movements and largely uniform textures, composers after the High Baroque sought to interrupt this flow with abrupt changes in texture, dynamic, harmony, or tempo. Among the stylistic developments which followed the High Baroque, the most dramatic came to be called Empfindsamkeit, (roughly "sensitive style"), and its best-known practitioner was Carl Philipp Emanuel Bach. Composers of this style employed the above-discussed interruptions in the most abrupt manner, and the music can sound illogical at times. The Italian composer Domenico Scarlatti took these developments further. His more than five hundred single-movement keyboard sonatas also contain abrupt changes of texture, but these changes are organized into periods, balanced phrases that became a hallmark of the classical style. However, Scarlatti's changes in texture still sound sudden and unprepared. The outstanding achievement of the great classical composers (Haydn, Mozart and Beethoven) was their ability to make these dramatic surprises sound logically motivated, so that "the expressive and the elegant could join hands." Between the death of J. S. 
Bach and the maturity of Haydn and Mozart (roughly 1750–1770), composers experimented with these new ideas, which can be seen in the music of Bach's sons. Johann Christian developed a style which we now call Rococo, comprising simpler textures and harmonies, and which was "charming, undramatic, and a little empty." As mentioned previously, Carl Philipp Emanuel sought to increase drama, and his music was "violent, expressive, brilliant, continuously surprising, and often incoherent." And finally Wilhelm Friedemann, J.S. Bach's eldest son, extended Baroque traditions in an idiomatic, unconventional way. At first the new style took over Baroque forms—the ternary da capo aria, the sinfonia and the concerto—but with simpler parts, more notated ornamentation rather than the improvised ornaments that were common in the Baroque era, and more emphatic division of pieces into sections. However, over time, the new aesthetic caused radical changes in how pieces were put together, and the basic formal layouts changed. Composers from this period sought dramatic effects, striking melodies, and clearer textures. One of the big textural changes was a shift away from the complex, dense polyphonic style of the Baroque, in which multiple interweaving melodic lines were played simultaneously, and towards homophony, a lighter texture which uses a clear single melody line accompanied by chords. Baroque music generally used many harmonic fantasies and polyphonic sections that focused less on the structure of the musical piece, and there was less emphasis on clear musical phrases. In the classical period, the harmonies became simpler. However, the structure of the piece, the phrases and small melodic or rhythmic motives, became much more important than in the Baroque period. Another important break with the past was the radical overhaul of opera by Christoph Willibald Gluck, who cut away a great deal of the layering and improvisational ornaments and focused on the points of modulation and transition. By making these moments where the harmony changes more of a focus, he enabled powerful dramatic shifts in the emotional color of the music. To highlight these transitions, he used changes in instrumentation (orchestration), melody, and mode. Among the most successful composers of his time, Gluck spawned many emulators, including Antonio Salieri. Their emphasis on accessibility brought huge successes in opera, and in other vocal music such as songs, oratorios, and choruses. These were considered the most important kinds of music for performance and hence enjoyed greatest public success. The phase between the Baroque and the rise of the Classical (around 1730) was home to various competing musical styles. The diversity of artistic paths is represented in the sons of Johann Sebastian Bach: Wilhelm Friedemann Bach, who continued the Baroque tradition in a personal way; Johann Christian Bach, who simplified textures of the Baroque and most clearly influenced Mozart; and Carl Philipp Emanuel Bach, who composed passionate and sometimes violently eccentric music of the Empfindsamkeit movement. Musical culture was caught at a crossroads: the masters of the older style had the technique, but the public hungered for the new. This is one of the reasons C. P. E. Bach was held in such high regard: he understood the older forms quite well and knew how to present them in new garb, with an enhanced variety of form.
1750–1775 By the late 1750s there were flourishing centers of the new style in Italy, Vienna, Mannheim, and Paris; dozens of symphonies were composed and there were bands of players associated with musical theatres. Opera or other vocal music accompanied by orchestra was the feature of most musical events, with concertos and symphonies (arising from the overture) serving as instrumental interludes and introductions for operas and church services. Over the course of the Classical period, symphonies and concertos developed and were presented independently of vocal music. The "normal" orchestra ensemble—a body of strings supplemented by winds—and movements of particular rhythmic character were established by the late 1750s in Vienna. However, the length and weight of pieces were still set with some Baroque characteristics: individual movements still focused on one "affect" (musical mood) or had only one sharply contrasting middle section, and their length was not significantly greater than Baroque movements. There was not yet a clearly enunciated theory of how to compose in the new style. It was a moment ripe for a breakthrough. The first great master of the style was the composer Joseph Haydn. In the late 1750s he began composing symphonies, and by 1761 he had composed a triptych (Morning, Noon, and Evening) solidly in the contemporary mode. As a vice-Kapellmeister and later Kapellmeister, his output expanded: he composed over forty symphonies in the 1760s alone. And while his fame grew, as his orchestra was expanded and his compositions were copied and disseminated, his voice was only one among many. While some scholars suggest that Haydn was overshadowed by Mozart and Beethoven, it would be difficult to overstate Haydn's centrality to the new style, and therefore to the future of Western art music as a whole. At the time, before the pre-eminence of Mozart or Beethoven, and with Johann Sebastian Bach known primarily to connoisseurs of keyboard music, Haydn reached a place in music that set him above all other composers except perhaps the Baroque era's George Frideric Handel. Haydn took existing ideas, and radically altered how they functioned—earning him the titles "father of the symphony" and "father of the string quartet". One of the forces that worked as an impetus for his pressing forward was the first stirring of what would later be called Romanticism—the Sturm und Drang, or "storm and stress" phase in the arts, a short period where obvious and dramatic emotionalism was a stylistic preference. Haydn accordingly wanted more dramatic contrast and more emotionally appealing melodies, with sharpened character and individuality in his pieces. This period faded away in music and literature; however, it influenced what came afterward and would eventually be a component of aesthetic taste in later decades. The Farewell Symphony, No. 45 in F-sharp minor, exemplifies Haydn's integration of the differing demands of the new style, with surprising sharp turns and a long slow adagio to end the work. In 1772, Haydn completed his Opus 20 set of six string quartets, in which he deployed the polyphonic techniques he had gathered from the previous Baroque era to provide structural coherence capable of holding together his melodic ideas. For some, this marks the beginning of the "mature" Classical style, in which the period of reaction against late Baroque complexity yielded to a period of integration of Baroque and Classical elements.
1775–1790 Haydn, having worked for over a decade as the music director for a prince, had far more resources and scope for composing than most other composers. His position also gave him the ability to shape the forces that would play his music, as he could select skilled musicians. This opportunity was not wasted, as Haydn, beginning quite early in his career, sought to press forward the technique of building and developing ideas in his music. His next important breakthrough was in the Opus 33 string quartets (1781), in which the melodic and the harmonic roles segue among the instruments: it is often momentarily unclear what is melody and what is harmony. This changes the way the ensemble works its way between dramatic moments of transition and climactic sections: the music flows smoothly and without obvious interruption. He then took this integrated style and began applying it to orchestral and vocal music. Haydn's gift to music was a way of composing, a way of structuring works, which was at the same time in accord with the governing aesthetic of the new style. However, a younger contemporary, Wolfgang Amadeus Mozart, brought his genius to Haydn's ideas and applied them to two of the major genres of the day: opera and the virtuoso concerto. Whereas Haydn spent much of his working life as a court composer, Mozart wanted public success in the concert life of cities, playing for the general public. This meant he needed to write operas and write and perform virtuoso pieces. Haydn was not a virtuoso at the international touring level; nor was he seeking to create operatic works that could play for many nights in front of a large audience. Mozart wanted to achieve both. Moreover, Mozart also had a taste for more chromatic chords (and greater contrasts in harmonic language generally), a greater love for creating a welter of melodies in a single work, and a more Italianate sensibility in music as a whole. He found, in Haydn's music and later in his study of the polyphony of J.S. Bach, the means to discipline and enrich his artistic gifts. Mozart rapidly came to the attention of Haydn, who hailed the new composer, studied his works, and considered the younger man his only true peer in music. In Mozart, Haydn found a greater range of instrumentation, dramatic effect and melodic resource. The learning relationship moved in both directions. Mozart also had a great respect for the older, more experienced composer, and sought to learn from him. Mozart's arrival in Vienna in 1781 brought an acceleration in the development of the Classical style. There, Mozart absorbed the fusion of Italianate brilliance and Germanic cohesiveness that had been brewing for the previous 20 years. His own taste for flashy brilliances, rhythmically complex melodies and figures, long cantilena melodies, and virtuoso flourishes was merged with an appreciation for formal coherence and internal connectedness. It is at this point that war and economic inflation halted a trend to larger orchestras and forced the disbanding or reduction of many theater orchestras. This pressed the Classical style inwards: toward seeking greater ensemble and technical challenges—for example, scattering the melody across woodwinds, or using a melody harmonized in thirds. This process placed a premium on small ensemble music, called chamber music. It also led to a trend for more public performance, giving a further boost to the string quartet and other small ensemble groupings.
It was during this decade that public taste began, increasingly, to recognize that Haydn and Mozart had reached a high standard of composition. By the time Mozart arrived at age 25, in 1781, the dominant styles of Vienna were recognizably connected to the emergence in the 1750s of the early Classical style. By the end of the 1780s, changes in performance practice, the relative standing of instrumental and vocal music, technical demands on musicians, and stylistic unity had become established in the composers who imitated Mozart and Haydn. During this decade Mozart composed his most famous operas, his six late symphonies that helped to redefine the genre, and a string of piano concerti that still stand at the pinnacle of these forms. One composer who was influential in spreading the more serious style that Mozart and Haydn had formed is Muzio Clementi, a gifted virtuoso pianist who tied with Mozart in a musical "duel" before the emperor in which they each improvised on the piano and performed their compositions. Clementi's sonatas for the piano circulated widely, and he became the most successful composer in London during the 1780s. Also in London at this time was Jan Ladislav Dussek, who, like Clementi, encouraged piano makers to extend the range and other features of their instruments, and then fully exploited the newly opened up possibilities. The importance of London in the Classical period is often overlooked, but it served as the home to the Broadwood's factory for piano manufacturing and as the base for composers who, while less notable than the "Vienna School", had a decisive influence on what came later. They were composers of many fine works, notable in their own right. London's taste for virtuosity may well have encouraged the complex passage work and extended statements on tonic and dominant. Around 1790–1820 When Haydn and Mozart began composing, symphonies were played as single movements—before, between, or as interludes within other works—and many of them lasted only ten or twelve minutes; instrumental groups had varying standards of playing, and the continuo was a central part of music-making. In the intervening years, the social world of music had seen dramatic changes. International publication and touring had grown explosively, and concert societies formed. Notation became more specific, more descriptive—and schematics for works had been simplified (yet became more varied in their exact working out). In 1790, just before Mozart's death, with his reputation spreading rapidly, Haydn was poised for a series of successes, notably his late oratorios and London symphonies. Composers in Paris, Rome, and all over Germany turned to Haydn and Mozart for their ideas on form. In the 1790s, a new generation of composers, born around 1770, emerged. While they had grown up with the earlier styles, they heard in the recent works of Haydn and Mozart a vehicle for greater expression. In 1788 Luigi Cherubini settled in Paris and in 1791 composed Lodoiska, an opera that raised him to fame. Its style is clearly reflective of the mature Haydn and Mozart, and its instrumentation gave it a weight that had not yet been felt in the grand opera. His contemporary Étienne Méhul extended instrumental effects with his 1790 opera Euphrosine et Coradin, from which followed a series of successes. The final push towards change came from Gaspare Spontini, who was deeply admired by future romantic composers such as Weber, Berlioz and Wagner. 
The innovative harmonic language of his operas, their refined instrumentation and their "enchained" closed numbers (a structural pattern which was later adopted by Weber in Euryanthe and from him handed down, through Marschner, to Wagner), formed the basis from which French and German romantic opera had its beginnings. The most fateful of the new generation was Ludwig van Beethoven, who launched his numbered works in 1794 with a set of three piano trios, which remain in the repertoire. Somewhat younger than the others, though equally accomplished because of his youthful study under Mozart and his native virtuosity, was Johann Nepomuk Hummel. Hummel studied under Haydn as well; he was a friend to Beethoven and Franz Schubert. He concentrated more on the piano than any other instrument, and his time in London in 1791 and 1792 generated the composition and publication in 1793 of three piano sonatas, opus 2, which idiomatically used Mozart's techniques of avoiding the expected cadence, and Clementi's sometimes modally uncertain virtuoso figuration. Taken together, these composers can be seen as the vanguard of a broad change in style and the center of music. They studied one another's works, copied one another's gestures in music, and on occasion behaved like quarrelsome rivals. The crucial differences with the previous wave can be seen in the downward shift in melodies, increasing durations of movements, the acceptance of Mozart and Haydn as paradigmatic, the greater use of keyboard resources, the shift from "vocal" writing to "pianistic" writing, the growing pull of the minor and of modal ambiguity, and the increasing importance of varying accompanying figures to bring "texture" forward as an element in music. In short, the late Classical was seeking music that was internally more complex. The growth of concert societies and amateur orchestras, marking the importance of music as part of middle-class life, contributed to a booming market for pianos, piano music, and virtuosi to serve as exemplars. Hummel, Beethoven, and Clementi were all renowned for their improvising. The direct influence of the Baroque continued to fade: the figured bass grew less prominent as a means of holding performance together, and the performance practices of the mid-18th century continued to die out. However, at the same time, complete editions of Baroque masters began to become available, and the influence of Baroque style continued to grow, particularly in the ever more expansive use of brass. Another feature of the period was the growing number of performances where the composer was not present. This led to increased detail and specificity in notation; for example, there were fewer "optional" parts that stood separately from the main score. The force of these shifts became apparent with Beethoven's 3rd Symphony, given the name Eroica, which is Italian for "heroic", by the composer. As with Stravinsky's The Rite of Spring, it may not have been the first in all of its innovations, but its aggressive use of every part of the Classical style set it apart from its contemporary works in length, ambition, and harmonic resources as well, making it the first symphony of the Romantic era. First Viennese School The First Viennese School is a name mostly used to refer to three composers of the Classical period in late-18th-century Vienna: Haydn, Mozart, and Beethoven. Franz Schubert is occasionally added to the list. In German-speaking countries, the term Wiener Klassik (lit. Viennese classical era/art) is used.
That term is often more broadly applied to the Classical era in music as a whole, as a means to distinguish it from other periods that are colloquially referred to as classical, namely Baroque and Romantic music. The term "Viennese School" was first used by Austrian musicologist Raphael Georg Kiesewetter in 1834, although he only counted Haydn and Mozart as members of the school. Other writers followed suit, and eventually Beethoven was added to the list. The designation "first" is added today to avoid confusion with the Second Viennese School. Whilst, Schubert apart, these composers certainly knew each other (with Haydn and Mozart even being occasional chamber-music partners), there is no sense in which they were engaged in a collaborative effort in the sense that one would associate with 20th-century schools such as the Second Viennese School, or Les Six. Nor is there any significant sense in which one composer was "schooled" by another (in the way that Berg and Webern were taught by Schoenberg), though it is true that Beethoven for a time received lessons from Haydn. Attempts to extend the First Viennese School to include such later figures as Anton Bruckner, Johannes Brahms, and Gustav Mahler are merely journalistic, and never encountered in academic musicology. Classical influence on later composers Musical eras and their prevalent styles, forms and instruments seldom disappear at once; instead, features are replaced over time, until the old approach is simply felt as "old-fashioned". The Classical style did not "die" suddenly; rather, it was gradually phased out under the weight of changes. To give just one example, while it is generally stated that the Classical era stopped using the harpsichord in orchestras, this did not happen all of a sudden at the start of the Classical era in 1750. Rather, orchestras slowly stopped using the harpsichord to play basso continuo until the practice was discontinued by the end of the 1700s. One crucial change was the shift towards harmonies centering on "flatward" keys: shifts in the subdominant direction. In the Classical style, the major key was far more common than the minor, chromaticism being moderated through the use of "sharpward" modulation (e.g., a piece in C major modulating to G major, D major, or A major, all of which are keys with more sharps). As well, sections in the minor mode were often used for contrast. Beginning with Mozart and Clementi, there began a creeping colonization of the subdominant region (the ii or IV chord, which in the key of C major would be the keys of d minor or F major). With Schubert, subdominant modulations flourished after being introduced in contexts in which earlier composers would have confined themselves to dominant shifts (modulations to the dominant chord, e.g., in the key of C major, modulating to G major). This introduced darker colors to music, strengthened the minor mode, and made structure harder to maintain. Beethoven contributed to this by his increasing use of the fourth as a consonance, and modal ambiguity—for example, the opening of the Symphony No. 9 in D minor. Ludwig van Beethoven, Franz Schubert, Carl Maria von Weber, Johann Nepomuk Hummel, and John Field are among the most prominent in this generation of "Proto-Romantics", along with the young Felix Mendelssohn. Their sense of form was strongly influenced by the Classical style.
While they were not yet "learned" composers (imitating rules which were codified by others), they directly responded to works by Haydn, Mozart, Clementi, and others, as they encountered them. The instrumental forces at their disposal in orchestras were also quite "Classical" in number and variety, permitting similarity with Classical works. However, the forces destined to end the hold of the Classical style gathered strength in the works of many of the above composers, particularly Beethoven. The most commonly cited one is harmonic innovation. Also important is the increasing focus on having a continuous and rhythmically uniform accompanying figuration: Beethoven's Moonlight Sonata was the model for hundreds of later pieces—where the shifting movement of a rhythmic figure provides much of the drama and interest of the work, while a melody drifts above it. Greater knowledge of works, greater instrumental expertise, increasing variety of instruments, the growth of concert societies, and the unstoppable domination of the increasingly more powerful piano (which was given a bolder, louder tone by technological developments such as the use of steel strings, heavy cast-iron frames and sympathetically vibrating strings) all created a huge audience for sophisticated music. All of these trends contributed to the shift to the "Romantic" style. Drawing the line between these two styles is very difficult: some sections of Mozart's later works, taken alone, are indistinguishable in harmony and orchestration from music written 80 years later—and some composers continued to write in normative Classical styles into the early 20th century. Even before Beethoven's death, composers such as Louis Spohr were self-described Romantics, incorporating, for example, more extravagant chromaticism in their works (e.g., using chromatic harmonies in a piece's chord progression). Conversely, works such as Schubert's Symphony No. 5, written during the chronological end of the Classical era and dawn of the Romantic era, exhibit a deliberately anachronistic artistic paradigm, harking back to the compositional style of several decades before. However, Vienna's fall as the most important musical center for orchestral composition during the late 1820s, precipitated by the deaths of Beethoven and Schubert, marked the Classical style's final eclipse—and the end of its continuous organic development of one composer learning in close proximity to others. Franz Liszt and Frédéric Chopin visited Vienna when they were young, but they then moved on to other cities. Composers such as Carl Czerny, while deeply influenced by Beethoven, also searched for new ideas and new forms to contain the larger world of musical expression and performance in which they lived. Renewed interest in the formal balance and restraint of 18th century classical music led in the early 20th century to the development of so-called Neoclassical style, which numbered Stravinsky and Prokofiev among its proponents, at least at certain times in their careers. Classical period instruments Guitar The Baroque guitar, with four or five sets of double strings or "courses" and elaborately decorated soundhole, was a very different instrument from the early classical guitar which more closely resembles the modern instrument with the standard six strings. Judging by the number of instructional manuals published for the instrument – over three hundred texts were published by over two hundred authors between 1760 and 1860 – the classical period marked a golden age for guitar. 
Strings In the Baroque era, there was more variety in the bowed stringed instruments used in ensembles, with instruments such as the viola d'amore and a range of fretted viols being used, ranging from small viols to large bass viols. In the Classical period, the string section of the orchestra was standardized as just four instruments: the violin (in orchestras and chamber music, typically there are first violins and second violins, with the former playing the melody and/or a higher line and the latter playing either a countermelody, a harmony part, a part below the first violin line in pitch, or an accompaniment line); the viola (the alto voice of the orchestral string section and string quartet; it often performs "inner voices", which are accompaniment lines that fill in the harmony of the piece); the cello (the cello plays two roles in Classical era music: at times it is used to play the bassline of the piece, typically doubled by the double basses [when cellos and double basses read the same bassline, the basses play an octave below the cellos, because the bass is a transposing instrument]; at other times it performs melodies and solos in the lower register); and the double bass (the bass typically performs the lowest pitches in the string section in order to provide the bassline for the piece). In the Baroque era, the double bass players were not usually given a separate part; instead, they typically played the same basso continuo bassline as the cellos and other low-pitched instruments (e.g., theorbo, serpent wind instrument, viols), albeit an octave below the cellos, because the double bass is a transposing instrument that sounds one octave lower than it is written. In the Classical era, some composers continued to write only one bass part for their symphony, labeled "bassi"; this bass part was played by cellists and double bassists. During the Classical era, some composers began to give the double basses their own part. Woodwinds It was commonplace for all orchestras to have at least two wind instruments, usually oboes, flutes, clarinets, or sometimes English horns (see Symphony No. 22 (Haydn)). Patrons also usually employed an ensemble consisting entirely of winds, called the harmonie, which would be employed for certain events. The harmonie would sometimes join the larger string orchestra to serve as the wind section. The woodwinds of the period included the piccolo (used in military bands), flute, oboe, English horn, clarinet, basset horn, basset clarinet, clarinette d'amour, bassoon, contrabassoon, and bagpipe (see Leopold Mozart's divertimento "Die Bauernhochzeit", or "Peasant Wedding"). Percussion The percussion included timpani and the "Turkish music" instruments: bass drum, cymbals, triangle, and tambourine. Keyboards Keyboard instruments included the clavichord, the fortepiano (the forerunner to the modern piano), the harpsichord, and the organ. The harpsichord, the standard Baroque era basso continuo keyboard instrument, was used until the 1750s, after which time it was gradually phased out and replaced with the fortepiano and then the piano; by the early 1800s, the harpsichord was no longer used. Brasses The brasses included the natural horn, natural trumpet, sackbut (trombone precursor), serpent, and post horn (see Serenade No. 9 (Mozart)). See also List of Classical-era composers Further reading Downs, Philip G. (1992). Classical Music: The Era of Haydn, Mozart, and Beethoven, 4th vol. of Norton Introduction to Music History. W. W. Norton. (hardcover). Grout, Donald Jay; Palisca, Claude V. (1996). A History of Western Music, Fifth Edition. W. W. Norton. (hardcover). Hanning, Barbara Russano; Grout, Donald Jay (1998, rev. 2006). Concise History of Western Music. W. W. Norton. (hardcover).
Kennedy, Michael (2006). The Oxford Dictionary of Music. Oxford University Press. Lihoreau, Tim; Fry, Stephen (2004). Stephen Fry's Incomplete and Utter History of Classical Music. Boxtree. Rosen, Charles (1972; expanded edition with CD, 1997). The Classical Style. New York: W. W. Norton. Taruskin, Richard (2005; rev. paperback 2009). Oxford History of Western Music. Oxford University Press (US). External links Classical Net – Classical music reference site
5295
https://en.wikipedia.org/wiki/Character%20encoding
Character encoding
Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map". Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. The low cost of digital representation of data in modern computer systems allows more elaborate character codes (such as Unicode) which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form. History The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. Unicode). Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode, a well-defined and extensible encoding system, has supplanted most earlier character encodings, but the path of code development to the present is fairly well known. The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name baudot has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often improved by many equipment manufacturers, sometimes creating compatibility issues. In 1959 the U.S. military defined its Fieldata code, a six-or seven-bit code, introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), which addressed most of the shortcomings of Fieldata, using a simpler code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. 
ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some "control code" issues) ASCII67 was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European ECMA-6 standard. Herman Hollerith invented punch card data encoding in the late 19th century to analyze census data. Initially, each hole position represented a different data element, but later, numeric information was encoded by numbering the lower rows 0 to 9, with a punch in a column representing its row number. Later, alphabetic data was encoded by allowing more than one punch per column. Electromechanical tabulating machines represented data internally by the timing of pulses relative to the motion of the cards through the machine. When IBM went to electronic processing, starting with the IBM 603 Electronic Multiplier, it used a variety of binary encoding schemes that were tied to the punch card code. IBM's Binary Coded Decimal (BCD) was a six-bit encoding scheme used by IBM as early as 1953 in its 702 and 704 computers, and in its later 7000 Series and 1400 series, as well as in associated peripherals. Since the punched card code then in use only allowed digits, upper-case English letters and a few special characters, six bits were sufficient. BCD extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping it easily to punch-card encoding which was already in widespread use. IBM's codes were used primarily with IBM equipment; other computer vendors of the era had their own character codes, often six-bit, but usually had the ability to read tapes produced on IBM equipment. BCD was the precursor of IBM's Extended Binary-Coded Decimal Interchange Code (usually abbreviated as EBCDIC), an eight-bit encoding scheme developed in 1963 for the IBM System/360 that featured a larger character set, including lower case letters. In trying to develop universally interchangeable character encodings, researchers in the 1980s faced the dilemma that, on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). In 1985, the average personal computer user's hard disk drive could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much more if purchased separately at retail), so it was very important at the time to make every bit count. The compromise solution that was eventually found was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points higher than the length of the code unit, such as above 256 for eight-bit units, the solution was to implement variable-length encodings where an escape sequence would signal that subsequent bits should be parsed as a higher code point.
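To make that two-step mapping concrete, the following is a minimal, illustrative sketch in Python (not a production codec): it takes a single code point and produces a variable-length sequence of eight-bit code units using the standard UTF-8 bit patterns, where the lead byte signals how many continuation bytes follow. The function name is ours, and error handling (surrogate code points, values above U+10FFFF) is deliberately omitted.

def utf8_encode_code_point(cp: int) -> bytes:
    """Map one code point (an abstract number) to a variable-length
    sequence of 8-bit code units, following the UTF-8 patterns."""
    if cp < 0x80:                      # 1 unit: 0xxxxxxx
        return bytes([cp])
    if cp < 0x800:                     # 2 units: 110xxxxx 10xxxxxx
        return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
    if cp < 0x10000:                   # 3 units: 1110xxxx 10xxxxxx 10xxxxxx
        return bytes([0xE0 | (cp >> 12),
                      0x80 | ((cp >> 6) & 0x3F),
                      0x80 | (cp & 0x3F)])
    # 4 units: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx (up to U+10FFFF)
    return bytes([0xF0 | (cp >> 18),
                  0x80 | ((cp >> 12) & 0x3F),
                  0x80 | ((cp >> 6) & 0x3F),
                  0x80 | (cp & 0x3F)])

for ch in "A€𐐀":                       # 1-, 3- and 4-byte cases
    cp = ord(ch)                       # character -> code point
    units = utf8_encode_code_point(cp) # code point -> code units
    assert units == ch.encode("utf-8") # agrees with Python's built-in codec
    print(f"U+{cp:04X} -> {units.hex(' ')}")

Run on "A", "€" and "𐐀", the sketch yields one, three and four code units respectively, which is the variable-length behaviour the paragraph above describes.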
Terminology Informally, the terms "character encoding", "character map", "character set" and "code page" are often used interchangeably. Historically, the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units — usually with a single character per code unit. However, due to the emergence of more sophisticated character encodings, the distinction between these terms has become important. A character is a minimal unit of text that has semantic value. A character set is a collection of elements used to represent text. For example, the Latin alphabet and Greek alphabet are both character sets. A coded character set is a character set mapped to set of unique numbers. For historical reasons, this is also often referred to as a code page. A character repertoire is the set of characters that can be represented by a particular coded character set. The repertoire may be closed, meaning that no additions are allowed without creating a new standard (as is the case with ASCII and most of the ISO-8859 series); or it may be open, allowing additions (as is the case with Unicode and to a limited extent Windows code pages). A code point is a value or position of a character in a coded character set. A code space is the range of numerical values spanned by a coded character set. A code unit is the minimum bit combination that can represent a character in a character encoding (in computer science terms, it is the word size of the character encoding). For example, common code units include 7-bit, 8-bit, 16-bit, and 32-bit. In some encodings, some characters are encoded using multiple code units; such an encoding is referred to as a variable-width encoding. Code pages "Code page" is a historical name for a coded character set. Originally, a code page referred to a specific page number in the IBM standard character set manual, which would define a particular character encoding. Other vendors, including Microsoft, SAP, and Oracle Corporation, also published their own sets of code pages; the most well-known code page suites are "Windows" (based on Windows-1252) and "IBM"/"DOS" (based on code page 437). Despite no longer referring to specific page numbers in a standard, many character encodings are still referred to by their code page number; likewise, the term "code page" is often still used to refer to character encodings in general. The term "code page" is not used in Unix or Linux, where "charmap" is preferred, usually in the larger context of locales. IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a "charset", "character set", "code page", or "CHARMAP". Code units The code unit size is equivalent to the bit measurement for the particular encoding: A code unit in US-ASCII consists of 7 bits; A code unit in UTF-8, EBCDIC and GB 18030 consists of 8 bits; A code unit in UTF-16 consists of 16 bits; A code unit in UTF-32 consists of 32 bits. Code points A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding: UTF-8: code points map to a sequence of one, two, three or four code units. UTF-16: code units are twice as long as 8-bit code units. Therefore, any code point with a scalar value less than U+10000 is encoded with a single code unit. Code points with a value U+10000 or higher require two code units each. 
These pairs of code units have a unique term in UTF-16: "Unicode surrogate pairs". UTF-32: the 32-bit code unit is large enough that every code point is represented as a single code unit. GB 18030: multiple code units per code point are common, because of the small code units. Code points are mapped to one, two, or four code units. Characters Exactly what constitutes a character varies between character encodings. For example, for letters with diacritics, there are two distinct approaches that can be taken to encode them: they can be encoded either as a single unified character (known as a precomposed character), or as separate characters that combine into a single glyph. The former simplifies the text handling system, but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems. Exactly how to handle glyph variants is a choice that must be made when constructing a particular character encoding. Some writing systems, such as Arabic and Hebrew, need to accommodate things like graphemes that are joined in different ways in different contexts, but represent the same semantic character. Unicode encoding model Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a unified standard for character encoding. Rather than mapping characters directly to bytes, Unicode separately defines a coded character set that maps characters to unique natural numbers (code points), how those code points are mapped to a series of fixed-size natural numbers (code units), and finally how those units are encoded as a stream of octets (bytes). The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways. To describe this model precisely, Unicode uses its own set of terminology to describe its process: An abstract character repertoire (ACR) is the full set of abstract characters that a system supports. Unicode has an open repertoire, meaning that new characters will be added to the repertoire over time. A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter "A" in the Latin alphabet might be represented by the code point 65, the character "B" by 66, and so on. Multiple coded character sets may share the same character repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map them to different code points. A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 to 1.4 million) could be represented by using multiple 16-bit units. This correspondence is defined by a CEF. A character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. 
Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE, and UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using a byte order mark or escape sequences; compressing schemes try to minimize the number of bytes used per code unit (such as SCSU and BOCU). Although UTF-32BE and UTF-32LE are simpler CESes, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-length ASCII and maps Unicode code points to variable-length sequences of octets, or UTF-16BE, which is backward compatible with fixed-length UCS-2BE and maps Unicode code points to variable-length sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion. Finally, there may be a higher-level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as the same character. An example is the XML attribute xml:lang. The Unicode model uses the term "character map" for other systems which directly assign a sequence of characters to a sequence of bytes, covering all of the CCS, CEF and CES layers. Unicode code points In Unicode, a character can be referred to as 'U+' followed by its codepoint value in hexadecimal. The range of valid code points (the codespace) for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided in 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains most commonly-used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters. The following table shows examples of code point values: Example Consider a string of the letters "ab̲c𐐀"—that is, a string containing a Unicode combining character () as well a supplementary character (). This string has several Unicode representations which are logically equivalent, yet while each is suited to a diverse set of circumstances or range of requirements: Four composed characters: , , , Five graphemes: , , , , Five Unicode code points: , , , , Five UTF-32 code units (32-bit integer values): , , , , Six UTF-16 code units (16-bit integers) , , , , , Nine UTF-8 code units (8-bit values, or bytes) , , , , , , , , Note in particular that 𐐀 is represented with either one 32-bit value (UTF-32), two 16-bit values (UTF-16), or four 8-bit values (UTF-8). Although each of those forms uses the same total number of bits (32) to represent the glyph, it is not obvious how the actual numeric byte values are related. Transcoding As a result of having many character encoding methods in use (and the need for backward compatibility with archived data), many computer programs have been developed to translate data between character encoding schemes, a process known as transcoding. Some of these are cited below. Cross-platform: Web browsers – most modern web browsers feature automatic character encoding detection. On Firefox 3, for example, see the View/Character Encoding submenu. iconv – a program and standardized API to convert encodings luit – a program that converts encoding of input and output to programs running interactively International Components for Unicode – A set of C and Java libraries to perform charset conversion. uconv can be used from ICU4C. 
Windows: Encoding.Convert – .NET API MultiByteToWideChar/WideCharToMultiByte – to convert from ANSI to Unicode & Unicode to ANSI See also Percent-encoding Alt code Character encodings in HTML :Category:Character encoding – articles related to character encoding in general :Category:Character sets – articles detailing specific character encodings Hexadecimal representations Mojibake – character set mismap Mojikyō – a system ("glyph set") that includes over 100,000 Chinese character drawings, modern and ancient, popular and obscure Presentation layer TRON, part of the TRON project, is an encoding system that does not use Han Unification; instead, it uses "control codes" to switch between 16-bit "planes" of characters. Universal Character Set characters Charset sniffing – used in some applications when character encoding metadata is not available Common character encodings ISO 646 ASCII EBCDIC ISO 8859: ISO 8859-1 Western Europe ISO 8859-2 Western and Central Europe ISO 8859-3 Western Europe and South European (Turkish, Maltese plus Esperanto) ISO 8859-4 Western Europe and Baltic countries (Lithuania, Estonia, Latvia and Lapp) ISO 8859-5 Cyrillic alphabet ISO 8859-6 Arabic ISO 8859-7 Greek ISO 8859-8 Hebrew ISO 8859-9 Western Europe with amended Turkish character set ISO 8859-10 Western Europe with rationalised character set for Nordic languages, including complete Icelandic set ISO 8859-11 Thai ISO 8859-13 Baltic languages plus Polish ISO 8859-14 Celtic languages (Irish Gaelic, Scottish, Welsh) ISO 8859-15 Added the Euro sign and other rationalisations to ISO 8859-1 ISO 8859-16 Central, Eastern and Southern European languages (Albanian, Bosnian, Croatian, Hungarian, Polish, Romanian, Serbian and Slovenian, but also French, German, Italian and Irish Gaelic) CP437, CP720, CP737, CP850, CP852, CP855, CP857, CP858, CP860, CP861, CP862, CP863, CP865, CP866, CP869, CP872 MS-Windows character sets: Windows-1250 for Central European languages that use Latin script, (Polish, Czech, Slovak, Hungarian, Slovene, Serbian, Croatian, Bosnian, Romanian and Albanian) Windows-1251 for Cyrillic alphabets Windows-1252 for Western languages Windows-1253 for Greek Windows-1254 for Turkish Windows-1255 for Hebrew Windows-1256 for Arabic Windows-1257 for Baltic languages Windows-1258 for Vietnamese Mac OS Roman KOI8-R, KOI8-U, KOI7 MIK ISCII TSCII VISCII JIS X 0208 is a widely deployed standard for Japanese character encoding that has several encoding forms. Shift JIS (Microsoft Code page 932 is a dialect of Shift_JIS) EUC-JP ISO-2022-JP JIS X 0213 is an extended version of JIS X 0208. Shift_JIS-2004 EUC-JIS-2004 ISO-2022-JP-2004 Chinese Guobiao GB 2312 GBK (Microsoft Code page 936) GB 18030 Taiwan Big5 (a more famous variant is Microsoft Code page 950) Hong Kong HKSCS Korean KS X 1001 is a Korean double-byte character encoding standard EUC-KR ISO-2022-KR Unicode (and subsets thereof, such as the 16-bit 'Basic Multilingual Plane') UTF-8 UTF-16 UTF-32 ANSEL or ISO/IEC 6937 References Further reading External links Character sets registered by Internet Assigned Numbers Authority (IANA) Characters and encodings, by Jukka Korpela Unicode Technical Report #17: Character Encoding Model Decimal, Hexadecimal Character Codes in HTML Unicode – Encoding converter The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) by Joel Spolsky (Oct 10, 2003) Encoding
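Returning to the Transcoding section above, here is a rough sketch of the decode-then-re-encode round trip, using Python's built-in codecs rather than any of the tools listed (the sample string and variable names are ours). Command-line converters such as iconv follow the same pattern: bytes in the source encoding are decoded to abstract code points, which are then re-encoded in the target encoding.

legacy_bytes = "Käse".encode("cp437")   # pretend these bytes came from an old DOS file
text = legacy_bytes.decode("cp437")     # code units (bytes) -> code points (str)
utf8_bytes = text.encode("utf-8")       # code points -> UTF-8 code units

print(legacy_bytes)   # b'K\x84se'      (in code page 437, 0x84 is "ä")
print(utf8_bytes)     # b'K\xc3\xa4se'  (in UTF-8, "ä" becomes the pair 0xC3 0xA4)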
5298
https://en.wikipedia.org/wiki/Control%20character
Control character
In computing and telecommunication, a control character or non-printing character (NPC) is a code point in a character set that does not represent a written character or symbol. They are used as in-band signaling to cause effects other than the addition of a symbol to the text. All other characters are mainly graphic characters, also known as printing characters (or printable characters), except perhaps for "space" characters. In the ASCII standard there are 33 control characters, such as code 7 (BEL), which rings a terminal bell. History Procedural signs in Morse code are a form of control character. A form of control characters was introduced in the 1870 Baudot code: NUL and DEL. The 1901 Murray code added the carriage return (CR) and line feed (LF), and other versions of the Baudot code included other control characters. The bell character (BEL), which rang a bell to alert operators, was also an early teletype control character. Some control characters have also been called "format effectors". In ASCII There were quite a few control characters defined (33 in ASCII, and the ECMA-48 standard adds 32 more). This was because early terminals had very primitive mechanical or electrical controls that made any kind of state-remembering API quite expensive to implement, thus a different code for each and every function looked like a requirement. It quickly became possible and inexpensive to interpret sequences of codes to perform a function, and device makers found a way to send hundreds of device instructions. Specifically, they used ASCII code 27 (escape), followed by a series of characters called a "control sequence" or "escape sequence". The mechanism was invented by Bob Bemer, the father of ASCII. For example, the sequence of code 27, followed by the printable characters "[2;10H", would cause a Digital Equipment Corporation VT100 terminal to move its cursor to the 10th cell of the 2nd line of the screen. Several standards exist for these sequences, notably ANSI X3.64. But the number of non-standard variations in use is large, especially among printers, where technology has advanced far faster than any standards body can possibly keep up with. All entries in the ASCII table below code 32 (technically the C0 control code set) are of this kind, including CR and LF used to separate lines of text. The code 127 (DEL) is also a control character. Extended ASCII sets defined by ISO 8859 added the codes 128 through 159 as control characters. This was primarily done so that if the high bit was stripped, it would not change a printing character to a C0 control code. This second set is called the C1 set. These 65 control codes were carried over to Unicode. Unicode added more characters that could be considered controls, but it makes a distinction between these "Formatting characters" (such as the zero-width non-joiner) and the 65 control characters. The Extended Binary Coded Decimal Interchange Code (EBCDIC) character set contains 65 control codes, including all of the ASCII control codes plus additional codes which are mostly used to control IBM peripherals. The control characters in ASCII still in common use include: 0x00 (null, NUL, \0, ^@), originally intended to be an ignored character, but now used by many programming languages including C to mark the end of a string. 0x07 (bell, BEL, \a, ^G), which may cause the device to emit a warning such as a bell or beep sound or the screen flashing. 0x08 (backspace, BS, \b, ^H), may overprint the previous character.
0x09 (horizontal tab, HT, \t, ^I), moves the printing position right to the next tab stop. 0x0A (line feed, LF, \n, ^J), moves the print head down one line, or to the left edge and down. Used as the end of line marker in most UNIX systems and variants. 0x0B (vertical tab, VT, \v, ^K), vertical tabulation. 0x0C (form feed, FF, \f, ^L), to cause a printer to eject paper to the top of the next page, or a video terminal to clear the screen. 0x0D (carriage return, CR, \r, ^M), moves the printing position to the start of the line, allowing overprinting. Used as the end of line marker in Classic Mac OS, OS-9, FLEX (and variants). A CR+LF pair is used by CP/M-80 and its derivatives including DOS and Windows, and by Application Layer protocols such as FTP, SMTP, and HTTP. 0x1A (Control-Z, SUB, ^Z). Acts as an end-of-file marker for Windows text-mode file I/O. 0x1B (escape, ESC, \e (GCC only), ^[). Introduces an escape sequence. Control characters may be described as doing something when the user inputs them, such as code 3 (End-of-Text character, ETX, ^C) to interrupt the running process, or code 4 (End-of-Transmission character, EOT, ^D), used to end text input on Unix or to exit a Unix shell. These uses usually have little to do with their use when they are in text being output. In Unicode In Unicode, "Control-characters" are U+0000–U+001F (C0 controls), U+007F (delete), and U+0080–U+009F (C1 controls). Their General Category is "Cc". Formatting codes are distinct, in General Category "Cf". The Cc control characters have no Name in Unicode, but are given labels such as "<control-001A>" instead. Display There are a number of techniques to display non-printing characters, which may be illustrated with the bell character in ASCII encoding: Code point: decimal 7, hexadecimal 0x07 An abbreviation, often three capital letters: BEL A special character condensing the abbreviation: Unicode U+2407 (␇), "symbol for bell" An ISO 2047 graphical representation: Unicode U+237E (⍾), "graphic for bell" Caret notation in ASCII, where code point 00xxxxx is represented as a caret followed by the capital letter at code point 10xxxxx: ^G An escape sequence, as in C/C++ character string codes: \a, \x07, \007, etc. How control characters map to keyboards ASCII-based keyboards have a key labelled "Control", "Ctrl", or (rarely) "Cntl" which is used much like a shift key, being pressed in combination with another letter or symbol key. In one implementation, the control key generates the code 64 places below the code for the (generally) uppercase letter it is pressed in combination with (i.e., subtract 0x40 from the ASCII code value of the (generally) uppercase letter). The other implementation is to take the ASCII code produced by the key and bitwise AND it with 0x1F, forcing bits 5 to 7 to zero. For example, pressing "control" and the letter "g" (which is 0110 0111 in binary), produces the code 7 (BELL, 7 in base ten, or 0000 0111 in binary). The NULL character (code 0) is represented by Ctrl-@, "@" being the code immediately before "A" in the ASCII character set. For convenience, some terminals accept Ctrl-Space as an alias for Ctrl-@. In either case, this produces one of the 32 ASCII control codes between 0 and 31. Neither approach works to produce the DEL character because of its special location in the table and its value (code 127); Ctrl-? is sometimes used for this character. When the control key is held down, letter keys produce the same control characters regardless of the state of the shift or caps lock keys.
In other words, it does not matter whether the key would have produced an upper-case or a lower-case letter. The interpretation of the control key with the space, graphics character, and digit keys (ASCII codes 32 to 63) vary between systems. Some will produce the same character code as if the control key were not held down. Other systems translate these keys into control characters when the control key is held down. The interpretation of the control key with non-ASCII ("foreign") keys also varies between systems. Control characters are often rendered into a printable form known as caret notation by printing a caret (^) and then the ASCII character that has a value of the control character plus 64. Control characters generated using letter keys are thus displayed with the upper-case form of the letter. For example, ^G represents code 7, which is generated by pressing the G key when the control key is held down. Keyboards also typically have a few single keys which produce control character codes. For example, the key labelled "Backspace" typically produces code 8, "Tab" code 9, "Enter" or "Return" code 13 (though some keyboards might produce code 10 for "Enter"). Many keyboards include keys that do not correspond to any ASCII printable or control character, for example cursor control arrows and word processing functions. The associated keypresses are communicated to computer programs by one of four methods: appropriating otherwise unused control characters; using some encoding other than ASCII; using multi-character control sequences; or using an additional mechanism outside of generating characters. "Dumb" computer terminals typically use control sequences. Keyboards attached to stand-alone personal computers made in the 1980s typically use one (or both) of the first two methods. Modern computer keyboards generate scancodes that identify the specific physical keys that are pressed; computer software then determines how to handle the keys that are pressed, including any of the four methods described above. The design purpose The control characters were designed to fall into a few groups: printing and display control, data structuring, transmission control, and miscellaneous. Printing and display control Printing control characters were first used to control the physical mechanism of printers, the earliest output device. An early example of this idea was the use of Figures (FIGS) and Letters (LTRS) in Baudot code to shift between two code pages. A later, but still early, example was the out-of-band ASA carriage control characters. Later, control characters were integrated into the stream of data to be printed. The carriage return character (CR), when sent to such a device, causes it to put the character at the edge of the paper at which writing begins (it may, or may not, also move the printing position to the next line). The line feed character (LF/NL) causes the device to put the printing position on the next line. It may (or may not), depending on the device and its configuration, also move the printing position to the start of the next line (which would be the leftmost position for left-to-right scripts, such as the alphabets used for Western languages, and the rightmost position for right-to-left scripts such as the Hebrew and Arabic alphabets). The vertical and horizontal tab characters (VT and HT/TAB) cause the output device to move the printing position to the next tab stop in the direction of reading. 
The form feed character (FF/NP) starts a new sheet of paper, and may or may not move to the start of the first line. The backspace character (BS) moves the printing position one character space backwards. On printers, including hard-copy terminals, this is most often used so the printer can overprint characters to make other, not normally available, characters. On video terminals and other electronic output devices, there are often software (or hardware) configuration choices that allow a destructive backspace (e.g., a BS, SP, BS sequence), which erases, or a non-destructive one, which does not. The shift in and shift out characters (SI and SO) selected alternate character sets, fonts, underlining, or other printing modes. Escape sequences were often used to do the same thing. With the advent of computer terminals that did not physically print on paper and so offered more flexibility regarding screen placement, erasure, and so forth, printing control codes were adapted. Form feeds, for example, usually cleared the screen, there being no new paper page to move to. More complex escape sequences were developed to take advantage of the flexibility of the new terminals, and indeed of newer printers. The concept of a control character had always been somewhat limiting, and was extremely so when used with new, much more flexible, hardware. Control sequences (sometimes implemented as escape sequences) could match the new flexibility and power and became the standard method. However, there was, and remains, a large variety of standard sequences to choose from. Data structuring The separators (File, Group, Record, and Unit: FS, GS, RS and US) were made to structure data, usually on a tape, in order to simulate punched cards. End of medium (EM) warns that the tape (or other recording medium) is ending. While many systems use CR/LF and TAB for structuring data, it is possible to encounter the separator control characters in data that needs to be structured. The separator control characters are not overloaded; there is no general use of them except to separate data into structured groupings. Their numeric values are contiguous with the space character, which can be considered a member of the group, as a word separator. For example, the RS separator is used by the JSON Text Sequences format to encode a sequence of JSON elements. Each sequence item starts with an RS character and ends with a line feed. This makes it possible to serialize open-ended JSON sequences. It is one of the JSON streaming protocols. Transmission control The transmission control characters were intended to structure a data stream, and to manage re-transmission or graceful failure, as needed, in the face of transmission errors. The start of heading (SOH) character was to mark a non-data section of a data stream—the part of a stream containing addresses and other housekeeping data. The start of text character (STX) marked the end of the header, and the start of the textual part of a stream. The end of text character (ETX) marked the end of the data of a message. A widely used convention is to make the two characters preceding ETX a checksum or CRC for error-detection purposes. The end of transmission block character (ETB) was used to indicate the end of a block of data, where data was divided into such blocks for transmission purposes. The escape character (ESC) was intended to "quote" the next character: if it was another control character, it would be printed instead of performing its control function. It is almost never used for this purpose today.
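To make the quoting idea concrete, here is a minimal Python sketch of DLE-style byte stuffing inside an STX/ETX frame (a generic illustration of the technique, not any particular standard):

```python
STX, ETX, DLE = 0x02, 0x03, 0x10

def frame(payload: bytes) -> bytes:
    """Wrap payload in STX/ETX, prefixing any embedded STX, ETX, or DLE with DLE
    so a receiver never mistakes payload bytes for frame boundaries."""
    out = bytearray([STX])
    for b in payload:
        if b in (STX, ETX, DLE):
            out.append(DLE)          # quote the control byte
        out.append(b)
    out.append(ETX)
    return bytes(out)

print(frame(b"abc\x03def").hex(" "))  # 02 61 62 63 10 03 64 65 66 03
```

A receiver reverses the process by treating whatever byte follows a DLE as literal data.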
Various printable characters are used as visible "escape characters", depending on context. The substitute character (SUB) was intended to request a translation of the next character from a printable character to another value, usually by setting bit 5 to zero. This is handy because some media (such as sheets of paper produced by typewriters) can transmit only printable characters. However, on MS-DOS systems with files opened in text mode, "end of text" or "end of file" is marked by this Ctrl-Z character, instead of the Ctrl-C or Ctrl-D, which are common on other operating systems. The cancel character (CAN) signaled that the previous element should be discarded. The negative acknowledge character (NAK) is a flag usually used to indicate that there was a problem with reception and, often, that the current element should be sent again. The acknowledge character (ACK) is normally used as a flag to indicate that no problem was detected with the current element. When a transmission medium is half duplex (that is, it can transmit in only one direction at a time), there is usually a master station that can transmit at any time, and one or more slave stations that transmit when they have permission. The enquiry character (ENQ) is generally used by a master station to ask a slave station to send its next message. A slave station indicates that it has completed its transmission by sending the end of transmission character (EOT). The device control codes (DC1 to DC4) were originally generic, to be implemented as necessary by each device. However, a universal need in data transmission is to request the sender to stop transmitting when a receiver is temporarily unable to accept any more data. Digital Equipment Corporation invented a convention which used 19 (the device control 3 character (DC3), also known as control-S, or XOFF) to "S"top transmission, and 17 (the device control 1 character (DC1), a.k.a. control-Q, or XON) to start transmission. It has become so widely used that most don't realize it is not part of official ASCII. This technique, however implemented, avoids additional wires in the data cable devoted only to transmission management, which saves money. However, a sensible protocol for the use of such transmission flow control signals must be used to avoid potential deadlock conditions. The data link escape character (DLE) was intended to be a signal to the other end of a data link that the following character is a control character such as STX or ETX. For example, a packet may be structured in the following way: (DLE) <STX> <PAYLOAD> (DLE) <ETX>. Miscellaneous codes Code 7 (BEL) is intended to cause an audible signal in the receiving terminal. Many of the ASCII control characters were designed for devices of the time that are not often seen today. For example, code 22, "synchronous idle" (SYN), was originally sent by synchronous modems (which have to send data constantly) when there was no actual data to send. (Modern systems typically use a start bit to announce the beginning of a transmitted word—this is a feature of asynchronous communication. Synchronous communication links were more often seen with mainframes, where they were typically run over corporate leased lines to connect a mainframe to another mainframe or perhaps a minicomputer.) Code 0 (ASCII code name NUL) is a special case. In paper tape, it is represented by the absence of holes. It is convenient to treat this as a fill character with no meaning otherwise.
Since the position of a NUL character has no holes punched, it can be replaced with any other character at a later time, so it was typically used to reserve space, either for correcting errors or for inserting information that would be available at a later time or in another place. In computing it is often used for padding in fixed length records and more commonly, to mark the end of a string. Code 127 (DEL, a.k.a. "rubout") is likewise a special case. Its 7-bit code is all-bits-on in binary, which essentially erased a character cell on a paper tape when overpunched. Paper tape was a common storage medium when ASCII was developed, with a computing history dating back to WWII code breaking equipment at Biuro Szyfrów. Paper tape became obsolete in the 1970s, so this clever aspect of ASCII rarely saw any use after that. Some systems (such as the original Apples) converted it to a backspace. But because its code is in the range occupied by other printable characters, and because it had no official assigned glyph, many computer equipment vendors used it as an additional printable character (often an all-black "box" character useful for erasing text by overprinting with ink). Non-erasable programmable ROMs are typically implemented as arrays of fusible elements, each representing a bit, which can only be switched one way, usually from one to zero. In such PROMs, the DEL and NUL characters can be used in the same way that they were used on punched tape: one to reserve meaningless fill bytes that can be written later, and the other to convert written bytes to meaningless fill bytes. For PROMs that switch one to zero, the roles of NUL and DEL are reversed; also, DEL will only work with 7-bit characters, which are rarely used today; for 8-bit content, the character code 255, commonly defined as a nonbreaking space character, can be used instead of DEL. Many file systems do not allow control characters in filenames, as they may have reserved functions. See also , HJKL as arrow keys, used on ADM-3A terminal C0 and C1 control codes Escape sequence In-band signaling Whitespace character Notes and references External links ISO IR 1 C0 Set of ISO 646 (PDF)
5299
https://en.wikipedia.org/wiki/Carbon
Carbon
Carbon is a chemical element with the symbol C and atomic number 6. It is nonmetallic and tetravalent: its atoms make four electrons available to form covalent chemical bonds. It belongs to group 14 of the periodic table. Carbon makes up about 0.025 percent of Earth's crust. Three isotopes occur naturally: ¹²C and ¹³C are stable, while ¹⁴C is a radionuclide, decaying with a half-life of about 5,730 years. Carbon is one of the few elements known since antiquity. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium, and oxygen. Carbon's abundance, its unique diversity of organic compounds, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enable this element to serve as a common element of all known life. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen. The atoms of carbon can bond together in diverse ways, resulting in various allotropes of carbon. Well-known allotropes include graphite, diamond, amorphous carbon, and fullerenes. The physical properties of carbon vary widely with the allotropic form. For example, graphite is opaque and black, while diamond is highly transparent. Graphite is soft enough to form a streak on paper (hence its name, from the Greek verb "γράφειν" which means "to write"), while diamond is the hardest naturally occurring material known. Graphite is a good electrical conductor while diamond has a low electrical conductivity. Under normal conditions, diamond, carbon nanotubes, and graphene have the highest thermal conductivities of all known materials. All carbon allotropes are solids under normal conditions, with graphite being the most thermodynamically stable form at standard temperature and pressure. They are chemically resistant and require high temperature to react even with oxygen. The most common oxidation state of carbon in inorganic compounds is +4, while +2 is found in carbon monoxide and transition metal carbonyl complexes. The largest sources of inorganic carbon are limestones, dolomites and carbon dioxide, but significant quantities occur in organic deposits of coal, peat, oil, and methane clathrates. Carbon forms a vast number of compounds, with about two hundred million having been described and indexed; and yet that number is but a fraction of the number of theoretically possible compounds under standard conditions. Characteristics The allotropes of carbon include graphite, one of the softest known substances, and diamond, the hardest naturally occurring substance. It bonds readily with other small atoms, including other carbon atoms, and is capable of forming multiple stable covalent bonds with suitable multivalent atoms. Carbon is a component element in the large majority of all chemical compounds, with about two hundred million examples having been described in the published chemical literature. Carbon also has the highest sublimation point of all elements. At atmospheric pressure it has no melting point, as its triple point lies at about 10.8 MPa and 4,600 K, so it sublimes at about 3,900 K. Graphite is much more reactive than diamond at standard conditions, despite being more thermodynamically stable, as its delocalised pi system is much more vulnerable to attack. For example, graphite can be oxidised by hot concentrated nitric acid at standard conditions to mellitic acid, C6(CO2H)6, which preserves the hexagonal units of graphite while breaking up the larger structure.
Carbon sublimes in a carbon arc, which has a temperature of about 5800 K (5,530 °C or 9,980 °F). Thus, irrespective of its allotropic form, carbon remains solid at higher temperatures than the highest-melting-point metals such as tungsten or rhenium. Although thermodynamically prone to oxidation, carbon resists oxidation more effectively than elements such as iron and copper, which are weaker reducing agents at room temperature. Carbon is the sixth element, with a ground-state electron configuration of 1s²2s²2p², of which the four outer electrons are valence electrons. Its first four ionisation energies, 1086.5, 2352.6, 4620.5 and 6222.7 kJ/mol, are much higher than those of the heavier group-14 elements. The electronegativity of carbon is 2.5, significantly higher than the heavier group-14 elements (1.8–1.9), but close to most of the nearby nonmetals, as well as some of the second- and third-row transition metals. Carbon's covalent radii are normally taken as 77.2 pm (C−C), 66.7 pm (C=C) and 60.3 pm (C≡C), although these may vary depending on coordination number and what the carbon is bonded to. In general, covalent radius decreases with lower coordination number and higher bond order. Carbon-based compounds form the basis of all known life on Earth, and the carbon-nitrogen-oxygen cycle provides a small portion of the energy produced by the Sun, and most of the energy in larger stars (e.g. Sirius). Although it forms an extraordinary variety of compounds, most forms of carbon are comparatively unreactive under normal conditions. At standard temperature and pressure, it resists all but the strongest oxidizers. It does not react with sulfuric acid, hydrochloric acid, chlorine or any alkalis. At elevated temperatures, carbon reacts with oxygen to form carbon oxides and will rob oxygen from metal oxides to leave the elemental metal. This exothermic reaction is used in the iron and steel industry to smelt iron and to control the carbon content of steel: Fe3O4 + 4 C + 2 O2 → 3 Fe + 4 CO2. Carbon reacts with sulfur to form carbon disulfide, and it reacts with steam in the coal-gas reaction used in coal gasification: C + H2O → CO + H2. Carbon combines with some metals at high temperatures to form metallic carbides, such as the iron carbide cementite in steel and tungsten carbide, widely used as an abrasive and for making hard tips for cutting tools. The system of carbon allotropes spans a range of extremes. Allotropes Atomic carbon is a very short-lived species and, therefore, carbon is stabilized in various multi-atomic structures with diverse molecular configurations called allotropes. The three relatively well-known allotropes of carbon are amorphous carbon, graphite, and diamond. Once considered exotic, fullerenes are nowadays commonly synthesized and used in research; they include buckyballs, carbon nanotubes, carbon nanobuds and nanofibers. Several other exotic allotropes have also been discovered, such as lonsdaleite, glassy carbon, carbon nanofoam and linear acetylenic carbon (carbyne). Graphene is a two-dimensional sheet of carbon with the atoms arranged in a hexagonal lattice. As of 2009, graphene appears to be the strongest material ever tested. The process of separating it from graphite will require some further technological development before it is economical for industrial processes. If successful, graphene could be used in the construction of a space elevator. It could also be used to safely store hydrogen for use in a hydrogen-based engine in cars.
The amorphous form is an assortment of carbon atoms in a non-crystalline, irregular, glassy state, not held in a crystalline macrostructure. It is present as a powder, and is the main constituent of substances such as charcoal, lampblack (soot), and activated carbon. At normal pressures, carbon takes the form of graphite, in which each atom is bonded trigonally to three others in a plane composed of fused hexagonal rings, just like those in aromatic hydrocarbons. The resulting network is 2-dimensional, and the flat sheets so formed are stacked and loosely bonded through weak van der Waals forces. This gives graphite its softness and its cleaving properties (the sheets slip easily past one another). Because of the delocalization of one of the outer electrons of each atom to form a π-cloud, graphite conducts electricity, but only in the plane of each covalently bonded sheet. This results in a lower bulk electrical conductivity for carbon than for most metals. The delocalization also accounts for the energetic stability of graphite over diamond at room temperature. At very high pressures, carbon forms the more compact allotrope, diamond, having nearly twice the density of graphite. Here, each atom is bonded tetrahedrally to four others, forming a 3-dimensional network of puckered six-membered rings of atoms. Diamond has the same cubic structure as silicon and germanium, and because of the strength of the carbon-carbon bonds, it is the hardest naturally occurring substance measured by resistance to scratching. Contrary to the popular belief that "diamonds are forever", they are thermodynamically unstable (ΔfG°(diamond, 298 K) = 2.9 kJ/mol) under normal conditions (298 K, 10⁵ Pa) and should theoretically transform into graphite. But due to a high activation energy barrier, the transition into graphite is so slow at normal temperature that it is unnoticeable. However, at very high temperatures diamond will turn into graphite, and diamonds can burn up in a house fire. The bottom left corner of the phase diagram for carbon has not been scrutinized experimentally. Although a computational study employing density functional theory methods reached the conclusion that, in the limit of low temperature and zero pressure, diamond becomes more stable than graphite by approximately 1.1 kJ/mol, more recent and definitive experimental and computational studies show that graphite is more stable than diamond without applied pressure, by 2.7 kJ/mol at T = 0 K and 3.2 kJ/mol at T = 298.15 K. Under some conditions, carbon crystallizes as lonsdaleite, a hexagonal crystal lattice with all atoms covalently bonded and properties similar to those of diamond. Fullerenes are a synthetic crystalline formation with a graphite-like structure, but in place of purely flat hexagonal cells, some of the cells from which fullerenes are formed may be pentagons, nonplanar hexagons, or even heptagons of carbon atoms. The sheets are thus warped into spheres, ellipses, or cylinders. The properties of fullerenes (split into buckyballs, buckytubes, and nanobuds) have not yet been fully analyzed and represent an intense area of research in nanomaterials. The names fullerene and buckyball are given after Richard Buckminster Fuller, popularizer of geodesic domes, which resemble the structure of fullerenes. The buckyballs are fairly large molecules formed completely of carbon bonded trigonally, forming spheroids (the best-known and simplest is the soccerball-shaped C60 buckminsterfullerene).
Carbon nanotubes (buckytubes) are structurally similar to buckyballs, except that each atom is bonded trigonally in a curved sheet that forms a hollow cylinder. Nanobuds were first reported in 2007 and are hybrid buckytube/buckyball materials (buckyballs are covalently bonded to the outer wall of a nanotube) that combine the properties of both in a single structure. Of the other discovered allotropes, carbon nanofoam is a ferromagnetic allotrope discovered in 1997. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web, in which the atoms are bonded trigonally in six- and seven-membered rings. It is among the lightest known solids, with a density of about 2 kg/m³. Similarly, glassy carbon contains a high proportion of closed porosity, but contrary to normal graphite, the graphitic layers are not stacked like pages in a book, but have a more random arrangement. Linear acetylenic carbon has the chemical structure −(C≡C)n−. Carbon in this modification is linear with sp orbital hybridization, and is a polymer with alternating single and triple bonds. This carbyne is of considerable interest to nanotechnology as its Young's modulus is 40 times that of the hardest known material – diamond. In 2015, a team at North Carolina State University announced the development of another allotrope they have dubbed Q-carbon, created by a high-energy, short-duration laser pulse on amorphous carbon dust. Q-carbon is reported to exhibit ferromagnetism, fluorescence, and a hardness superior to that of diamond. In the vapor phase, some of the carbon is in the form of dicarbon (C2), a highly reactive diatomic form of carbon. When excited, this gas glows green. Occurrence Carbon is the fourth most abundant chemical element in the observable universe by mass after hydrogen, helium, and oxygen. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Some meteorites contain microscopic diamonds that were formed when the Solar System was still a protoplanetary disk. Microscopic diamonds may also be formed by the intense pressure and high temperature at the sites of meteorite impacts. In 2014 NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. More than 20% of the carbon in the universe may be associated with PAHs, complex compounds of carbon and hydrogen without oxygen. These compounds figure in the PAH world hypothesis where they are hypothesized to have a role in abiogenesis and formation of life. PAHs seem to have been formed "a couple of billion years" after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. It has been estimated that the solid earth as a whole contains 730 ppm of carbon, with 2000 ppm in the core and 120 ppm in the combined mantle and crust. Since the mass of the Earth is about 5.97 × 10²⁴ kg, this would imply 4,360 million gigatonnes of carbon (a rough check of this figure follows below). This is much more than the amount of carbon in the oceans or atmosphere (below). In combination with oxygen in carbon dioxide, carbon is found in the Earth's atmosphere (approximately 900 gigatonnes of carbon — each ppm corresponds to 2.13 Gt) and dissolved in all water bodies (approximately 36,000 gigatonnes of carbon). Carbon in the biosphere has been estimated at 550 gigatonnes but with a large uncertainty, due mostly to a huge uncertainty in the amount of terrestrial deep subsurface bacteria. Hydrocarbons (such as coal, petroleum, and natural gas) contain carbon as well.
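As a rough check of that figure, a couple of lines of Python reproduce the arithmetic (illustrative only; the Earth-mass value is the commonly cited estimate, not a figure from this article's sources):

```python
earth_mass_kg = 5.97e24        # commonly cited mass of the Earth
carbon_fraction = 730e-6       # 730 ppm of carbon by mass, as stated above
kg_per_gigatonne = 1e12        # 1 Gt = 10^9 tonnes = 10^12 kg

carbon_gt = earth_mass_kg * carbon_fraction / kg_per_gigatonne
print(f"{carbon_gt:,.0f} Gt")  # about 4,358,100,000 Gt, i.e. roughly 4,360 million gigatonnes
```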
Coal "reserves" (not "resources") amount to around 900 gigatonnes with perhaps 18,000 Gt of resources. Oil reserves are around 150 gigatonnes. Proven reserves of natural gas contain about 105 gigatonnes of carbon, but studies estimate that "unconventional" deposits such as shale gas represent about another 540 gigatonnes of carbon. Carbon is also found in methane hydrates in polar regions and under the seas. Various estimates put this carbon at between roughly 500 and 3,000 Gt. According to one source, in the period from 1751 to 2008 about 347 gigatonnes of carbon were released as carbon dioxide to the atmosphere from burning of fossil fuels. Another source puts the amount added to the atmosphere for the period since 1750 at 879 Gt, and the total going to the atmosphere, sea, and land (such as peat bogs) at almost 2,000 Gt. Carbon is a constituent (about 12% by mass) of the very large masses of carbonate rock (limestone, dolomite, marble, and others). Coal is very rich in carbon (anthracite contains 92–98%) and is the largest commercial source of mineral carbon, accounting for 4,000 gigatonnes or 80% of fossil fuels. As for individual carbon allotropes, graphite is found in large quantities in the United States (mostly in New York and Texas), Russia, Mexico, Greenland, and India. Natural diamonds occur in the rock kimberlite, found in ancient volcanic "necks", or "pipes". Most diamond deposits are in Africa, notably in South Africa, Namibia, Botswana, the Republic of the Congo, and Sierra Leone. Diamond deposits have also been found in Arkansas, Canada, the Russian Arctic, Brazil, and in Northern and Western Australia. Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. Diamonds are found naturally, but about 30% of all industrial diamonds used in the U.S. are now manufactured. Carbon-14 is formed in upper layers of the troposphere and the stratosphere at altitudes of 9–15 km by a reaction that is precipitated by cosmic rays. Thermal neutrons are produced that collide with the nuclei of nitrogen-14, forming carbon-14 and a proton. As such, only about one part per trillion of atmospheric carbon dioxide contains carbon-14. Carbon-rich asteroids are relatively preponderant in the outer parts of the asteroid belt in the Solar System. These asteroids have not yet been directly sampled by scientists. The asteroids could be used in hypothetical space-based carbon mining, which may become possible in the future but is currently technologically infeasible. Isotopes Isotopes of carbon are atomic nuclei that contain six protons plus a number of neutrons (varying from 2 to 16). Carbon has two stable, naturally occurring isotopes. The isotope carbon-12 (¹²C) forms 98.93% of the carbon on Earth, while carbon-13 (¹³C) forms the remaining 1.07%. The concentration of ¹²C is further increased in biological materials because biochemical reactions discriminate against ¹³C. In 1961, the International Union of Pure and Applied Chemistry (IUPAC) adopted the isotope carbon-12 as the basis for atomic weights. Identification of carbon in nuclear magnetic resonance (NMR) experiments is done with the isotope ¹³C. Carbon-14 (¹⁴C) is a naturally occurring radioisotope, created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen with cosmic rays. It is found in trace amounts on Earth of 1 part per trillion (0.0000000001%) or more, mostly confined to the atmosphere and superficial deposits, particularly of peat and other organic materials. This isotope decays by 0.158 MeV β− emission.
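This steady decay is the basis of radiocarbon dating, discussed below. A minimal sketch of the underlying exponential-decay arithmetic, using the roughly 5,730-year half-life quoted earlier (illustrative only; real dating also relies on calibration curves rather than the raw formula):

```python
import math

HALF_LIFE_YEARS = 5730  # approximate half-life of carbon-14

def fraction_remaining(age_years: float) -> float:
    """Fraction of the original carbon-14 still present after age_years."""
    return 0.5 ** (age_years / HALF_LIFE_YEARS)

def age_from_fraction(fraction: float) -> float:
    """Invert the decay law to estimate age from the measured carbon-14 fraction."""
    return -HALF_LIFE_YEARS * math.log2(fraction)

print(fraction_remaining(11460))       # two half-lives -> 0.25
print(round(age_from_fraction(0.25)))  # -> 11460 (years)
```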
Because of its relatively short half-life of 5,730 years, ¹⁴C is virtually absent in ancient rocks. The amount of ¹⁴C in the atmosphere and in living organisms is almost constant, but decreases predictably in their bodies after death. This principle is used in radiocarbon dating, invented in 1949, which has been used extensively to determine the age of carbonaceous materials with ages up to about 40,000 years. There are 15 known isotopes of carbon; the shortest-lived of these is ⁸C, which decays through proton emission and alpha decay and has a half-life of 1.98739 × 10⁻²¹ s. Some exotic neutron-rich isotopes of carbon exhibit a nuclear halo, which means their radius is appreciably larger than would be expected if the nucleus were a sphere of constant density. Formation in stars Formation of the carbon atomic nucleus occurs within a giant or supergiant star through the triple-alpha process. This requires a nearly simultaneous collision of three alpha particles (helium nuclei), as the products of further nuclear fusion reactions of helium with hydrogen or another helium nucleus produce lithium-5 and beryllium-8 respectively, both of which are highly unstable and decay almost instantly back into smaller nuclei. The triple-alpha process happens in conditions of temperatures over 100 megakelvins and helium concentrations that the rapid expansion and cooling of the early universe prohibited, and therefore no significant carbon was created during the Big Bang. According to current physical cosmology theory, carbon is formed in the interiors of stars on the horizontal branch. When massive stars die as supernovae, the carbon is scattered into space as dust. This dust becomes component material for the formation of the next-generation star systems with accreted planets. The Solar System is one such star system with an abundance of carbon, enabling the existence of life as we know it. It is the opinion of most scholars that all the carbon in the Solar System and the Milky Way comes from dying stars. The CNO cycle is an additional hydrogen fusion mechanism that powers stars, wherein carbon operates as a catalyst. Rotational transitions of various isotopic forms of carbon monoxide (for example, ¹²CO, ¹³CO, and C¹⁸O) are detectable in the submillimeter wavelength range, and are used in the study of newly forming stars in molecular clouds. Carbon cycle Under terrestrial conditions, conversion of one element to another is very rare. Therefore, the amount of carbon on Earth is effectively constant. Thus, processes that use carbon must obtain it from somewhere and dispose of it somewhere else. The paths of carbon in the environment form the carbon cycle. For example, photosynthetic plants draw carbon dioxide from the atmosphere (or seawater) and build it into biomass, as in the Calvin cycle, a process of carbon fixation. Some of this biomass is eaten by animals, while some carbon is exhaled by animals as carbon dioxide. The carbon cycle is considerably more complicated than this short loop; for example, some carbon dioxide is dissolved in the oceans; if bacteria do not consume it, dead plant or animal matter may become petroleum or coal, which releases carbon when burned. Compounds Organic compounds Carbon can form very long chains of interconnecting carbon–carbon bonds, a property that is called catenation. Carbon–carbon bonds are strong and stable. Through catenation, carbon forms a countless number of compounds. A tally of unique compounds shows that more contain carbon than do not.
A similar claim can be made for hydrogen because most organic compounds contain hydrogen chemically bonded to carbon or another common element like oxygen or nitrogen. The simplest form of an organic molecule is the hydrocarbon—a large family of organic molecules that are composed of hydrogen atoms bonded to a chain of carbon atoms. A hydrocarbon backbone can be substituted by other atoms, known as heteroatoms. Common heteroatoms that appear in organic compounds include oxygen, nitrogen, sulfur, phosphorus, and the nonradioactive halogens, as well as the metals lithium and magnesium. Organic compounds containing bonds to metal are known as organometallic compounds (see below). Certain groupings of atoms, often including heteroatoms, recur in large numbers of organic compounds. These collections, known as functional groups, confer common reactivity patterns and allow for the systematic study and categorization of organic compounds. Chain length, shape and functional groups all affect the properties of organic molecules. In most stable compounds of carbon (and nearly all stable organic compounds), carbon obeys the octet rule and is tetravalent, meaning that a carbon atom forms a total of four covalent bonds (which may include double and triple bonds). Exceptions include a small number of stabilized carbocations (three bonds, positive charge), radicals (three bonds, neutral), carbanions (three bonds, negative charge) and carbenes (two bonds, neutral), although these species are much more likely to be encountered as unstable, reactive intermediates. Carbon occurs in all known organic life and is the basis of organic chemistry. When united with hydrogen, it forms various hydrocarbons that are important to industry as refrigerants, lubricants, solvents, as chemical feedstock for the manufacture of plastics and petrochemicals, and as fossil fuels. When combined with oxygen and hydrogen, carbon can form many groups of important biological compounds including sugars, lignans, chitins, alcohols, fats, aromatic esters, carotenoids and terpenes. With nitrogen it forms alkaloids, and with the addition of sulfur it also forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-transfer molecule in all living cells. Norman Horowitz, head of the Mariner and Viking missions to Mars (1965-1976), considered that the unique characteristics of carbon made it unlikely that any other element could replace carbon, even on another planet, to generate the biochemistry necessary for life. Inorganic compounds Commonly, carbon-containing compounds which are associated with minerals or which do not contain bonds to other carbon atoms, halogens, or hydrogen are treated separately from classical organic compounds; the definition is not rigid, and the classification of some compounds can vary from author to author (see reference articles above). Among these are the simple oxides of carbon. The most prominent oxide is carbon dioxide (CO2). This was once the principal constituent of the paleoatmosphere, but is a minor component of the Earth's atmosphere today. Dissolved in water, it forms carbonic acid (H2CO3), but, like most compounds with multiple single-bonded oxygens on a single carbon, it is unstable. Through this intermediate, though, resonance-stabilized carbonate ions are produced. Some important minerals are carbonates, notably calcite.
Carbon disulfide (CS2) is similar. Nevertheless, due to its physical properties and its association with organic synthesis, carbon disulfide is sometimes classified as an organic solvent. The other common oxide is carbon monoxide (CO). It is formed by incomplete combustion, and is a colorless, odorless gas. The molecules each contain a triple bond and are fairly polar, resulting in a tendency to bind permanently to hemoglobin molecules, displacing oxygen, which has a lower binding affinity. Cyanide (CN−) has a similar structure, but behaves much like a halide ion (pseudohalogen). For example, it can form the nitride cyanogen molecule ((CN)2), similar to diatomic halides. Likewise, the heavier analog of cyanide, cyaphide (CP−), is also considered inorganic, though most simple derivatives are highly unstable. Other uncommon oxides are carbon suboxide (C3O2), the unstable dicarbon monoxide (C2O), carbon trioxide (CO3), cyclopentanepentone (C5O5), cyclohexanehexone (C6O6), and mellitic anhydride (C12O9). However, mellitic anhydride is the triple acyl anhydride of mellitic acid; moreover, it contains a benzene ring. Thus, many chemists consider it to be organic. With reactive metals, such as tungsten, carbon forms either carbides (C⁴⁻) or acetylides (C2²⁻) to form alloys with high melting points. These anions are also associated with methane and acetylene, both very weak acids. With an electronegativity of 2.5, carbon prefers to form covalent bonds. A few carbides are covalent lattices, like carborundum (SiC), which resembles diamond. Nevertheless, even the most polar and salt-like of carbides are not completely ionic compounds. Organometallic compounds Organometallic compounds by definition contain at least one carbon-metal covalent bond. A wide range of such compounds exist; major classes include simple alkyl-metal compounds (for example, tetraethyllead), η²-alkene compounds (for example, Zeise's salt), and η³-allyl compounds (for example, allylpalladium chloride dimer); metallocenes containing cyclopentadienyl ligands (for example, ferrocene); and transition metal carbene complexes. Many metal carbonyls and metal cyanides exist (for example, tetracarbonylnickel and potassium ferricyanide); some workers consider metal carbonyl and cyanide complexes without other carbon ligands to be purely inorganic, and not organometallic. However, most organometallic chemists consider metal complexes with any carbon ligand, even 'inorganic carbon' (e.g., carbonyls, cyanides, and certain types of carbides and acetylides) to be organometallic in nature. Metal complexes containing organic ligands without a carbon-metal covalent bond (e.g., metal carboxylates) are termed metalorganic compounds. While carbon is understood to strongly prefer formation of four covalent bonds, other exotic bonding schemes are also known. Carboranes are highly stable icosahedral derivatives of the [B12H12]2− unit, with one BH replaced with a CH+. Thus, the carbon is bonded to five boron atoms and one hydrogen atom. The cation [(Ph3PAu)6C]2+ contains an octahedral carbon bound to six phosphine-gold fragments. This phenomenon has been attributed to the aurophilicity of the gold ligands, which provide additional stabilization of an otherwise labile species. In nature, the iron-molybdenum cofactor (FeMoco) responsible for microbial nitrogen fixation likewise has an octahedral carbon center (formally a carbide, C(-IV)) bonded to six iron atoms.
In 2016, it was confirmed that, in line with earlier theoretical predictions, the hexamethylbenzene dication contains a carbon atom with six bonds. More specifically, the dication could be described structurally by the formulation [MeC(η⁵-C5Me5)]2+, making it an "organic metallocene" in which a MeC3+ fragment is bonded to an η⁵-C5Me5− fragment through all five of the carbons of the ring. It is important to note that, in the cases above, each of the bonds to carbon contains fewer than two formal electron pairs. Thus, the formal electron count of these species does not exceed an octet. This makes them hypercoordinate but not hypervalent. Even in cases of alleged 10-C-5 species (that is, a carbon with five ligands and a formal electron count of ten), as reported by Akiba and co-workers, electronic structure calculations conclude that the electron population around carbon is still less than eight, as is true for other compounds featuring four-electron three-center bonding. History and etymology The English name carbon comes from the Latin carbo for coal and charcoal, whence also comes the French charbon, meaning charcoal. In German, Dutch and Danish, the names for carbon are Kohlenstoff, koolstof, and kulstof respectively, all literally meaning coal-substance. Carbon was discovered in prehistory and was known in the forms of soot and charcoal to the earliest human civilizations. Diamonds were known probably as early as 2500 BCE in China, while carbon in the form of charcoal was made around Roman times by the same chemistry as it is today, by heating wood in a pyramid covered with clay to exclude air. In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was transformed into steel through the absorption of some substance, now known to be carbon. In 1772, Antoine Lavoisier showed that diamonds are a form of carbon: he burned samples of charcoal and diamond and found that neither produced any water and that both released the same amount of carbon dioxide per gram. In 1779, Carl Wilhelm Scheele showed that graphite, which had been thought of as a form of lead, was instead identical with charcoal but with a small admixture of iron, and that it gave "aerial acid" (his name for carbon dioxide) when oxidized with nitric acid. In 1786, the French scientists Claude Louis Berthollet, Gaspard Monge and C. A. Vandermonde confirmed that graphite was mostly carbon by oxidizing it in oxygen in much the same way Lavoisier had done with diamond. Some iron again was left, which the French scientists thought was necessary to the graphite structure. In their publication they proposed the name carbone (Latin carbonum) for the element in graphite which was given off as a gas upon burning graphite. Antoine Lavoisier then listed carbon as an element in his 1789 textbook. A new allotrope of carbon, fullerene, discovered in 1985, includes nanostructured forms such as buckyballs and nanotubes. Their discoverers – Robert Curl, Harold Kroto, and Richard Smalley – received the Nobel Prize in Chemistry in 1996. The resulting renewed interest in new forms led to the discovery of further exotic allotropes, including glassy carbon, and the realization that "amorphous carbon" is not strictly amorphous. Production Graphite Commercially viable natural deposits of graphite occur in many parts of the world, but the most important sources economically are in China, India, Brazil, and North Korea.
Graphite deposits are of metamorphic origin, found in association with quartz, mica, and feldspars in schists, gneisses, and metamorphosed sandstones and limestone as lenses or veins, sometimes of a metre or more in thickness. Deposits of graphite in Borrowdale, Cumberland, England were at first of sufficient size and purity that, until the 19th century, pencils were made by sawing blocks of natural graphite into strips before encasing the strips in wood. Today, smaller deposits of graphite are obtained by crushing the parent rock and floating the lighter graphite out on water. There are three types of natural graphite—amorphous, flake or crystalline flake, and vein or lump. Amorphous graphite is the lowest quality and most abundant. Contrary to science, in industry "amorphous" refers to very small crystal size rather than complete lack of crystal structure. Amorphous is used for lower value graphite products and is the lowest priced graphite. Large amorphous graphite deposits are found in China, Europe, Mexico and the United States. Flake graphite is less common and of higher quality than amorphous; it occurs as separate plates that crystallized in metamorphic rock. Flake graphite can be four times the price of amorphous. Good quality flakes can be processed into expandable graphite for many uses, such as flame retardants. The foremost deposits are found in Austria, Brazil, Canada, China, Germany and Madagascar. Vein or lump graphite is the rarest, most valuable, and highest quality type of natural graphite. It occurs in veins along intrusive contacts in solid lumps, and it is only commercially mined in Sri Lanka. According to the USGS, world production of natural graphite was 1.1 million tonnes in 2010, to which China contributed 800,000 t, India 130,000 t, Brazil 76,000 t, North Korea 30,000 t and Canada 25,000 t. No natural graphite was reported mined in the United States, but 118,000 t of synthetic graphite with an estimated value of $998 million was produced in 2009. Diamond The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world (see figure). Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, during which care has to be taken in order to prevent larger diamonds from being destroyed in this process and subsequently the particles are sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore. Historically diamonds were known to be found only in alluvial deposits in southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725. Diamond production of primary deposits (kimberlites and lamproites) only started in the 1870s after the discovery of the diamond fields in South Africa. Production has increased over time and an accumulated total of over 4.5 billion carats have been mined since that date. 
Most commercially viable diamond deposits were in Russia, Botswana, Australia and the Democratic Republic of Congo. By 2005, Russia produced almost one-fifth of the global diamond output (mostly in Yakutia territory; for example, Mir pipe and Udachnaya pipe) but the Argyle mine in Australia became the single largest source, producing 14 million carats in 2018. New finds, the Canadian mines at Diavik and Ekati, are expected to become even more valuable owing to their production of gem quality stones. In the United States, diamonds have been found in Arkansas, Colorado, and Montana. In 2004, a startling discovery of a microscopic diamond in the United States led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana. Applications Carbon is essential to all known living systems, and without it life as we know it could not exist (see alternative biochemistry). The major economic use of carbon other than food and wood is in the form of hydrocarbons, most notably the fossil fuel methane gas and crude oil (petroleum). Crude oil is distilled in refineries by the petrochemical industry to produce gasoline, kerosene, and other products. Cellulose is a natural, carbon-containing polymer produced by plants in the form of wood, cotton, linen, and hemp. Cellulose is used primarily for maintaining structure in plants. Commercially valuable carbon polymers of animal origin include wool, cashmere, and silk. Plastics are made from synthetic carbon polymers, often with oxygen and nitrogen atoms included at regular intervals in the main polymer chain. The raw materials for many of these synthetic substances come from crude oil. The uses of carbon and its compounds are extremely varied. It can form alloys with iron, of which the most common is carbon steel. Graphite is combined with clays to form the 'lead' used in pencils used for writing and drawing. It is also used as a lubricant and a pigment, as a molding material in glass manufacture, in electrodes for dry batteries and in electroplating and electroforming, in brushes for electric motors, and as a neutron moderator in nuclear reactors. Charcoal is used as a drawing material in artwork, barbecue grilling, iron smelting, and in many other applications. Wood, coal and oil are used as fuel for production of energy and heating. Gem quality diamond is used in jewelry, and industrial diamonds are used in drilling, cutting and polishing tools for machining metals and stone. Plastics are made from fossil hydrocarbons, and carbon fiber, made by pyrolysis of synthetic polyester fibers is used to reinforce plastics to form advanced, lightweight composite materials. Carbon fiber is made by pyrolysis of extruded and stretched filaments of polyacrylonitrile (PAN) and other organic substances. The crystallographic structure and mechanical properties of the fiber depend on the type of starting material, and on the subsequent processing. Carbon fibers made from PAN have structure resembling narrow filaments of graphite, but thermal processing may re-order the structure into a continuous rolled sheet. The result is fibers with higher specific tensile strength than steel. Carbon black is used as the black pigment in printing ink, artist's oil paint, and water colours, carbon paper, automotive finishes, India ink and laser printer toner. Carbon black is also used as a filler in rubber products such as tyres and in plastic compounds. 
Activated charcoal is used as an absorbent and adsorbent in filter material in applications as diverse as gas masks, water purification, and kitchen extractor hoods, and in medicine to absorb toxins, poisons, or gases from the digestive system. Carbon is used in chemical reduction at high temperatures. Coke is used to reduce iron ore into iron (smelting). Case hardening of steel is achieved by heating finished steel components in carbon powder. Carbides of silicon, tungsten, boron, and titanium are among the hardest known materials, and are used as abrasives in cutting and grinding tools. Carbon compounds make up most of the materials used in clothing, such as natural and synthetic textiles and leather, and almost all of the interior surfaces in the built environment other than glass, stone, drywall and metal. Diamonds The diamond industry falls into two categories: one dealing with gem-grade diamonds and the other, with industrial-grade diamonds. While a large trade in both types of diamonds exists, the two markets function dramatically differently. Unlike precious metals such as gold or platinum, gem diamonds do not trade as a commodity: there is a substantial mark-up in the sale of diamonds, and there is not a very active market for resale of diamonds. Industrial diamonds are valued mostly for their hardness and heat conductivity, with the gemological qualities of clarity and color being mostly irrelevant. About 80% of mined diamonds (equal to about 100 million carats or 20 tonnes annually) are unsuitable for use as gemstones and relegated for industrial use (known as bort). Synthetic diamonds, invented in the 1950s, found almost immediate industrial applications; 3 billion carats (600 tonnes) of synthetic diamond is produced annually. The dominant industrial use of diamond is in cutting, drilling, grinding, and polishing. Most of these applications do not require large diamonds; in fact, most diamonds of gem-quality except for their small size can be used industrially. Diamonds are embedded in drill tips or saw blades, or ground into a powder for use in grinding and polishing applications. Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances in the production of synthetic diamonds, new applications are becoming feasible. Garnering much excitement is the possible use of diamond as a semiconductor suitable for microchips, and because of its exceptional heat conductance property, as a heat sink in electronics. Precautions Pure carbon has extremely low toxicity to humans and can be handled safely in the form of graphite or charcoal. It is resistant to dissolution or chemical attack, even in the acidic contents of the digestive tract. Consequently, once it enters into the body's tissues it is likely to remain there indefinitely. Carbon black was probably one of the first pigments to be used for tattooing, and Ötzi the Iceman was found to have carbon tattoos that survived during his life and for 5200 years after his death. Inhalation of coal dust or soot (carbon black) in large quantities can be dangerous, irritating lung tissues and causing the congestive lung disease, coalworker's pneumoconiosis. Diamond dust used as an abrasive can be harmful if ingested or inhaled. Microparticles of carbon are produced in diesel engine exhaust fumes, and may accumulate in the lungs. 
In these examples, the harm may result from contaminants (e.g., organic chemicals, heavy metals) rather than from the carbon itself. Carbon generally has low toxicity to life on Earth; but carbon nanoparticles are deadly to Drosophila. Carbon may burn vigorously and brightly in the presence of air at high temperatures. Large accumulations of coal, which have remained inert for hundreds of millions of years in the absence of oxygen, may spontaneously combust when exposed to air in coal mine waste tips, ship cargo holds and coal bunkers, and storage dumps. In nuclear applications where graphite is used as a neutron moderator, accumulation of Wigner energy followed by a sudden, spontaneous release may occur. Annealing to at least 250 °C can release the energy safely, although in the Windscale fire the procedure went wrong, causing other reactor materials to combust. The great variety of carbon compounds include such lethal poisons as tetrodotoxin, the lectin ricin from seeds of the castor oil plant Ricinus communis, cyanide (CN), and carbon monoxide; and such essentials to life as glucose and protein. See also Carbon chauvinism Carbon detonation Carbon footprint Carbon star Carbon planet Gas carbon Low-carbon economy Timeline of carbon nanotubes References Bibliography External links Carbon at The Periodic Table of Videos (University of Nottingham) Carbon on Britannica Extensive Carbon page at asu.edu (archived 18 June 2010) Electrochemical uses of carbon (archived 9 November 2001) Carbon—Super Stuff. Animation with sound and interactive 3D-models. (archived 9 November 2012) Allotropes of carbon Chemical elements with hexagonal planar structure Chemical elements Native element minerals Polyatomic nonmetals Reactive nonmetals Reducing agents
5300
https://en.wikipedia.org/wiki/Computer%20data%20storage
Computer data storage
Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: the control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Functionality Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines. Data organization and representation A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, and video encodings like MPEG-4). By adding bits to each encoded unit, redundancy allows the computer to detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur with low probability, due to random bit-value flipping or "physical bit fatigue" (the loss of a physical bit's ability to maintain a distinguishable value of 0 or 1), or due to errors in inter- or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection.
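A minimal sketch of the redundancy idea, using a single even-parity bit per byte (the simplest scheme: it detects any single-bit error but cannot correct it, unlike the more elaborate codes real memory uses):

```python
def add_parity(byte: int) -> int:
    """Append an even-parity bit so the 9-bit unit always has an even number of 1s."""
    parity = bin(byte).count("1") % 2
    return (byte << 1) | parity

def check_parity(unit: int) -> bool:
    """Return True if the 9-bit unit still has even parity (no single-bit error detected)."""
    return bin(unit).count("1") % 2 == 0

unit = add_parity(0b01100111)
print(check_parity(unit))          # True: the stored unit is consistent
print(check_parity(unit ^ 0b100))  # False: a flipped bit is detected
```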
A bit or a group of malfunctioning physical bits (the specific defective bit is not always known; group definition depends on the specific storage device) is typically automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then handled by retrying the operation. Data compression methods allow, in many cases (such as a database), a string of bits to be represented by a shorter bit string ("compress") and the original string to be reconstructed ("decompress") when needed. This utilizes substantially less storage (tens of percent) for many types of data at the cost of more computation (compress and decompress when needed). Analysis of the trade-off between storage cost saving and costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data (e.g. credit card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Hierarchy of storage Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency is from the CPU. This traditional division of storage into primary, secondary, tertiary, and off-line storage is also guided by cost per bit. In contemporary usage, memory is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down). Historically, memory has, depending on technology, been called central memory, core memory, core storage, drum, main memory, real storage, or internal memory. Meanwhile, slower persistent storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage. Primary storage Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive. This led to modern random-access memory (RAM). It is small-sized and light, but quite expensive at the same time. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, it serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. Spare memory can be utilized as a RAM drive for temporary high-speed data storage.
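Returning to the compression trade-off mentioned above, a minimal Python sketch using the standard-library zlib module (illustrative only; real systems choose algorithms and compression levels to suit the workload):

```python
import zlib

data = b"AAAA" * 1024                    # 4 KiB of highly repetitive data
compressed = zlib.compress(data, 6)      # level 6 is a common speed/size compromise
restored = zlib.decompress(compressed)

assert restored == data                  # lossless: the original is reconstructed exactly
print(len(data), "->", len(compressed))  # 4096 -> a few dozen bytes for this input
```

Whether the saved space justifies the extra CPU time spent compressing and decompressing is exactly the trade-off described above.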
As shown in the diagram, traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM: Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are the fastest of all forms of computer data storage. Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It was introduced solely to improve the performance of computers. Most actively used information in the main memory is just duplicated in the cache memory, which is faster, but of much lesser capacity. On the other hand, main memory is much slower, but has a much greater storage capacity than processor registers. Multi-level hierarchical cache setup is also commonly used—primary cache being smallest, fastest and located inside the processor; secondary cache being somewhat larger and slower. Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU firstly sends a number through an address bus, a number called memory address, that indicates the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks. As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access). Many types of "ROM" are not literally read only, as updates to them are possible; however it is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, and rather, use large capacities of secondary storage, which is non-volatile as well, and not as costly. Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage. Secondary storage Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive. In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. 
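The interplay of memory addresses and the processor cache described above can be sketched as a toy direct-mapped cache; the line count and block size here are arbitrary illustrative choices, and real caches add associativity, write policies, and multiple levels.

```python
# Toy direct-mapped cache: low-order address bits select a cache line,
# the remaining bits form a tag identifying which memory block is resident.

NUM_LINES = 16        # arbitrary illustrative cache size
BLOCK_SIZE = 64       # bytes per cache line

cache_tags = [None] * NUM_LINES
hits = misses = 0

def read_byte(address: int) -> str:
    """Simulate a one-byte read, returning 'hit' or 'miss'."""
    global hits, misses
    block = address // BLOCK_SIZE      # block number in main memory
    index = block % NUM_LINES          # cache line this block maps to
    tag = block // NUM_LINES           # distinguishes blocks sharing that line
    if cache_tags[index] == tag:
        hits += 1
        return "hit"
    cache_tags[index] = tag            # slow path: fetch the block from main memory
    misses += 1
    return "miss"

for addr in range(4096):               # sequential reads show spatial locality
    read_byte(addr)
print(f"hits={hits}, misses={misses}") # one miss per 64-byte block, the rest hits
```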
The access time per byte for HDDs or SSDs is typically measured in milliseconds (thousandths of a second), while the access time per byte for primary storage is measured in nanoseconds (billionths of a second). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks. Once the disk read/write head on HDDs reaches the proper placement and the data, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based on sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel to increase the bandwidth between primary and secondary memory. Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information. Most computer operating systems use the concept of virtual memory, allowing the utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded. Tertiary storage Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes. When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library. Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is: Online storage is immediately available for I/O. Nearline storage is not immediately available, but can be made online quickly without human intervention. Offline storage is not immediately available, and requires some human intervention to become online. For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. 
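The paging behaviour mentioned above, in which the least-used pages are moved to a swap file, can be sketched with a least-recently-used (LRU) policy; the frame count and page reference string below are invented, and real virtual-memory systems use considerably more sophisticated bookkeeping.

```python
from collections import OrderedDict

NUM_FRAMES = 3                  # invented, tiny amount of "RAM" for illustration
frames = OrderedDict()          # page number -> contents, most recently used last
page_faults = 0

def touch(page: int) -> None:
    """Access a page, evicting the least-recently-used one if RAM is full."""
    global page_faults
    if page in frames:
        frames.move_to_end(page)                 # mark as most recently used
        return
    page_faults += 1                             # page fault: load from secondary storage
    if len(frames) >= NUM_FRAMES:
        evicted, _ = frames.popitem(last=False)  # move the LRU page to the swap file
        print(f"swap out page {evicted}")
    frames[page] = f"contents of page {page}"

for p in [1, 2, 3, 1, 4, 2, 5, 1]:               # an arbitrary reference string
    touch(p)
print(f"page faults: {page_faults}")
```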
Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage. Off-line storage Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction. Off-line storage is used to transfer information since the detached medium can easily be physically transported. Additionally, it is useful for cases of disaster, where, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage. In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are the most popular, and to a much lesser extent removable hard disk drives; older examples include floppy disks and Zip disks. In enterprise uses, magnetic tape cartridges are predominant; older examples include open-reel magnetic tape and punched cards. Characteristics of storage Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance. Volatility Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since the primary storage is required to be very fast, it predominantly uses volatile memory. Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost. An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes. Mutability Read/write storage or mutable storage Allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage also for secondary storage. 
Slow write, fast read storage Read/write storage which allows information to be overwritten multiple times, but with the write operation being much slower than the read operation. Examples include CD-RW and SSD. Write once storage Write once read many (WORM) allows the information to be written only once at some point after manufacture. Examples include semiconductor programmable read-only memory and CD-R. Read only storage Retains the information stored at the time of manufacture. Examples include mask ROM ICs and CD-ROM. Accessibility Random access Any location in storage can be accessed at any moment in approximately the same amount of time. Such characteristic is well suited for primary and secondary storage. Most semiconductor memories, flash memories and hard disk drives provide random access, though both semiconductor and flash memories have minimal latency when compared to hard disk drives, as no mechanical parts need to be moved. Sequential access The accessing of pieces of information will be in a serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. Such characteristic is typical of off-line storage. Addressability Location-addressable Each individually accessible unit of information in storage is selected with its numerical memory address. In modern computers, location-addressable storage usually limits to primary storage, accessed internally by computer programs, since location-addressability is very efficient, but burdensome for humans. File addressable Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. The underlying device is still location-addressable, but the operating system of a computer provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems. Content-addressable Each individually accessible unit of information is selected based on the basis of (part of) the contents stored there. Content-addressable storage can be implemented using software (computer program) or hardware (computer device), with hardware being faster but more expensive option. Hardware content addressable memory is often used in a computer's CPU cache. Capacity Raw capacity The total amount of stored information that a storage device or medium can hold. It is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes). Memory storage density The compactness of stored information. It is the storage capacity of a medium divided with a unit of length, area or volume (e.g. 1.2 megabytes per square inch). Performance Latency The time it takes to access a particular location in storage. The relevant unit of measurement is typically nanosecond for primary storage, millisecond for secondary storage, and second for tertiary storage. It may make sense to separate read latency and write latency (especially for non-volatile memory) and in case of sequential access storage, minimum, maximum and average latency. Throughput The rate at which information can be read from or written to the storage. In computer data storage, throughput is usually expressed in terms of megabytes per second (MB/s), though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Also accessing media sequentially, as opposed to randomly, typically yields maximum throughput. 
Granularity The size of the largest "chunk" of data that can be efficiently accessed as a single unit, e.g. without introducing additional latency. Reliability The probability of spontaneous bit value change under various conditions, or overall failure rate. Utilities such as hdparm and sar can be used to measure I/O performance in Linux. Energy use Storage devices that reduce fan usage or automatically shut down during inactivity, and low-power hard drives, can reduce energy consumption by 90 percent. 2.5-inch hard disk drives often consume less power than larger ones. Low-capacity solid-state drives have no moving parts and consume less power than hard disks. Also, memory may use more power than hard disks. Large caches, which are used to avoid hitting the memory wall, may also consume a large amount of power. Security Full disk encryption, volume and virtual disk encryption, and/or file/folder encryption are readily available for most storage devices. Hardware memory encryption is available in Intel Architecture, supporting Total Memory Encryption (TME) and page-granular memory encryption with multiple keys (MKTME), and in the SPARC M7 generation since October 2015. Vulnerability and reliability Distinct types of data storage have different points of failure and various methods of predictive failure analysis. Vulnerabilities that can instantly lead to total loss are head crashing on mechanical hard drives and failure of electronic components on flash storage. Error detection Impending failure on hard disk drives can be estimated using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed. Flash storage may experience downspiking transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct. The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning. Storage media The most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), are proposed for development. Semiconductor Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both volatile and non-volatile forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs. In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them.
As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD. Magnetic Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head only covers a part of the surface so that the head or medium or both must be moved relative to another in order to access data. In modern computers, magnetic storage will take these forms: Magnetic disk; Floppy disk, used for off-line storage; Hard disk drive, used for secondary storage. Magnetic tape, used for tertiary and off-line storage; Carousel memory (magnetic rolls). In early computers, magnetic storage was also used as: Primary storage in a form of magnetic memory, or core memory, core rope memory, thin-film memory and/or twistor memory; Tertiary (e.g. NCR CRAM) or off line storage in the form of magnetic cards; Magnetic tape was then often used for secondary storage. Magnetic storage does not have a definite limit of rewriting cycles like flash storage and re-writeable optical media, as altering magnetic fields causes no physical wear. Rather, their life span is limited by mechanical parts. Optical Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The following forms are in common use : CD, CD-ROM, DVD, BD-ROM: Read only storage, used for mass distribution of digital information (music, video, computer programs); CD-R, DVD-R, DVD+R, BD-R: Write once storage, used for tertiary and off-line storage; CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE: Slow write, fast read storage, used for tertiary and off-line storage; Ultra Density Optical or UDO is similar in capacity to BD-R or BD-RE and is slow write, fast read storage used for tertiary and off-line storage. Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage. 3D optical data storage has also been proposed. Light induced magnetization melting in magnetic photoconductors has also been proposed for high-speed low-energy consumption magneto-optical storage. Paper Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached. 
Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even magnetic data storage. Other storage media or substrates Vacuum-tube memory A Williams tube used a cathode-ray tube, and a Selectron tube used a large vacuum tube to store information. These primary storage devices were short-lived in the market, since the Williams tube was unreliable, and the Selectron tube was expensive. Electro-acoustic memory Delay-line memory used sound waves in a substance such as mercury to store information. Delay-line memory was dynamic volatile, cycle sequential read/write storage, and was used for primary storage. Optical tape is a medium for optical storage, generally consisting of a long and narrow strip of plastic, onto which patterns can be written and from which the patterns can be read back. It shares some technologies with cinema film stock and optical discs, but is compatible with neither. The motivation behind developing this technology was the possibility of far greater storage capacities than either magnetic tape or optical discs. Phase-change memory uses different mechanical phases of phase-change material to store information in an X–Y addressable matrix and reads the information by observing the varying electrical resistance of the material. Phase-change memory would be non-volatile, random-access read/write storage, and might be used for primary, secondary and off-line storage. Most rewritable and many write-once optical disks already use phase-change material to store information. Holographic data storage stores information optically inside crystals or photopolymers. Holographic storage can utilize the whole volume of the storage medium, unlike optical disc storage, which is limited to a small number of surface layers. Holographic storage would be non-volatile, sequential-access, and either write-once or read/write storage. It might be used for secondary and off-line storage. See Holographic Versatile Disc (HVD). Molecular memory stores information in polymer that can store electric charge. Molecular memory might be especially suited for primary storage. The theoretical storage capacity of molecular memory is 10 terabits per square inch (16 Gbit/mm2). Magnetic photoconductors store magnetic information, which can be modified by low-light illumination. DNA stores information in DNA nucleotides. It was first done in 2012, when researchers achieved a ratio of 1.28 petabytes per gram of DNA. In March 2017 scientists reported that a new algorithm called a DNA fountain achieved 85% of the theoretical limit, at 215 petabytes per gram of DNA. Related technologies Redundancy While a group of bits malfunction may be resolved by error detection and correction mechanisms (see above), storage device malfunction requires different solutions. The following solutions are commonly used and valid for most storage devices: Device mirroring (replication) – A common solution to the problem is constantly maintaining an identical copy of device content on another device (typically of the same type). The downside is that this doubles the storage, and both devices (copies) need to be updated simultaneously with some overhead and possibly some delays. The upside is the possible concurrent reading of the same data group by two independent processes, which increases performance. 
When one of the replicated devices is detected to be defective, the other copy is still operational and is used to generate a new copy on another device (usually drawn from a pool of stand-by devices kept operational for this purpose). Redundant array of independent disks (RAID) – This method generalizes the device mirroring above by allowing one device in a group of devices to fail and be replaced with the content restored (device mirroring is RAID with n=2). RAID groups of n=5 or n=6 are common. n>2 saves storage, when compared with n=2, at the cost of more processing during both regular operation (with often reduced performance) and defective device replacement. Device mirroring and typical RAID are designed to handle a single device failure in the RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, then data can be lost. The probability of a single failure is typically small. Thus the probability of two failures in the same RAID group in time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even such a small probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, to handle recovery from disasters (see disaster recovery above). Network connectivity Secondary or tertiary storage may connect to a computer over a computer network. This concept does not pertain to primary storage, which is shared between multiple processors to a much lesser degree. Direct-attached storage (DAS) is traditional mass storage that does not use any network; it remains the most popular approach. The retronym was coined retrospectively, together with NAS and SAN. Network-attached storage (NAS) is mass storage attached to a computer which another computer can access at the file level over a local area network, a private wide area network, or, in the case of online file storage, over the Internet. NAS is commonly associated with the NFS and CIFS/SMB protocols. A storage area network (SAN) is a specialized network that provides other computers with storage capacity. The crucial difference between NAS and SAN is that NAS presents and manages file systems to client computers, while SAN provides access at the block-addressing (raw) level, leaving it to the attached systems to manage data or file systems within the provided capacity. SAN is commonly associated with Fibre Channel networks. Robotic storage Large quantities of individual magnetic tapes and optical or magneto-optical discs may be stored in robotic tertiary storage devices. In the tape storage field they are known as tape libraries; in the optical storage field, by analogy, as optical jukeboxes or optical disc libraries. The smallest forms of either technology, containing just one drive device, are referred to as autoloaders or autochangers. Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media into built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are its expansion options: adding slots, modules, drives, or robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
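The "approximately the probability squared" estimate in the RAID discussion above can be put into rough numbers. The annual failure rate and rebuild window below are hypothetical round figures, not measurements of any particular device.

```python
# Back-of-the-envelope sketch of why mirroring (RAID with n=2) helps.
# All figures are hypothetical and for illustration only.

annual_failure_rate = 0.02     # assumed 2% chance a single drive fails within a year
rebuild_hours = 10.0           # assumed time to rebuild onto a stand-by drive
hours_per_year = 24 * 365

p_single = annual_failure_rate
# Chance that the surviving copy also fails during the rebuild window:
p_second_during_rebuild = annual_failure_rate * (rebuild_hours / hours_per_year)
p_data_loss = p_single * p_second_during_rebuild

print(f"unmirrored drive:  ~{p_single:.2%} chance of data loss per year")
print(f"mirrored pair:     ~{p_data_loss:.6%} chance of data loss per year")
```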
Robotic storage is used for backups, and for high-capacity archives in imaging, medical, and video industries. Hierarchical storage management is a most known archiving strategy of automatically migrating long-unused files from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk. See also Primary storage topics Aperture (computer memory) Dynamic random-access memory (DRAM) Memory latency Mass storage Memory cell (disambiguation) Memory management Memory leak Virtual memory Memory protection Page address register Stable storage Static random-access memory (SRAM) Secondary, tertiary and off-line storage topics Cloud storage Hybrid cloud storage Data deduplication Data proliferation Data storage tag used for capturing research data Disk utility File system List of file formats Global filesystem Flash memory Geoplexing Information repository Noise-predictive maximum-likelihood detection Object(-based) storage Removable media Solid-state drive Spindle Virtual tape library Wait state Write buffer Write protection Data storage conferences Storage Networking World Storage World Conference Notes References Further reading Memory & storage, Computer history museum Computer architecture
5302
https://en.wikipedia.org/wiki/Conditional
Conditional
Conditional (if then) may refer to: Causal conditional, if X then Y, where X is a cause of Y Conditional probability, the probability of an event A given that another event B has occurred Conditional proof, in logic: a proof that asserts a conditional, and proves that the antecedent leads to the consequent Strict conditional, in philosophy, logic, and mathematics Material conditional, in propositional calculus, or logical calculus in mathematics Relevance conditional, in relevance logic Conditional (computer programming), a statement or expression in computer programming languages A conditional expression in computer programming languages such as ?: Conditions in a contract Grammar and linguistics Conditional mood (or conditional tense), a verb form in many languages Conditional sentence, a sentence type used to refer to hypothetical situations and their consequences Indicative conditional, a conditional sentence expressing "if A then B" in a natural language Counterfactual conditional, a conditional sentence indicating what would be the case if its antecedent were true Other "Conditional" (Laura Mvula song) Conditional jockey, an apprentice jockey in British or Irish National Hunt racing Conditional short-circuit current Conditional Value-at-Risk See also Condition (disambiguation) Conditional statement (disambiguation)
5304
https://en.wikipedia.org/wiki/Cone%20%28disambiguation%29
Cone (disambiguation)
A cone is a basic geometrical shape. Cone may also refer to: Mathematics Cone (category theory) Cone (formal languages) Cone (graph theory), a graph in which one vertex is adjacent to all others Cone (linear algebra), a subset of vector space Mapping cone (homological algebra) Cone (topology) Conic bundle, a concept in algebraic geometry Conical surface, generated by a moving line with one fixed point Projective cone, the union of all lines that intersect a projective subspace and an arbitrary subset of some other disjoint subspace Computing Cone tracing, a derivative of the ray-tracing algorithm that replaces rays, which have no thickness, with cones Second-order cone programming, a library of routines that implements a predictor corrector variant of the semidefinite programming algorithm Astronomy Cone Nebula (also known as NGC 2264), an H II region in the constellation of Monoceros Ionization cone, cones of material extending out from spiral galaxies Engineering and physical science Antenna blind cone, the volume of space that cannot be scanned by an antenna Carbon nanocones, conical structures which are made predominantly from carbon and which have at least one dimension of the order one micrometer or smaller Cone algorithm identifies surface particles quickly and accurately for three-dimensional clusters composed of discrete particles Cone beam reconstruction, a method of X-ray scanning in microtomography Cone calorimeter, a modern device used to study the fire behavior of small samples of various materials in condensed phase Cone clutch serves the same purpose as a disk or plate clutch Cone of depression occurs in an aquifer when groundwater is pumped from a well Cone penetration test (CPT), an in situ testing method used to determine the geotechnical engineering properties of soils Cone Penetrometer apparatus, an alternative method to the Casagrande Device in measuring the Liquid Limit of a soil sample Conical intersection of two potential energy surfaces of the same spatial and spin symmetries Conical measure, a type of graduated laboratory glassware with a conical cup and a notch on the top to facilitate pouring of liquids Conical mill (or conical screen mill), a machine used to reduce the size of material in a uniform manner Conical pendulum, a weight (or bob) fixed on the end of a string (or rod) suspended from a pivot Conical scanning, a system used in early radar units to improve their accuracy Helical cone beam computed tomography, a type of three-dimensional computed tomography Hertzian cone, the cone of force that propagates through a brittle, amorphous or cryptocrystalline solid material from a point of impact Nose cone, used to refer to the forwardmost section of a rocket, guided missile or aircraft Pyrometric cone, pyrometric devices that are used to gauge time and temperature during the firing of ceramic materials Roller cone bit, a drill bit used for drilling through rock, for example when drilling for oil and gas Skid cone, a hollow steel or plastic cone placed over the sawn end of a log Speaker cone, the cone inside a loudspeaker that moves to generate sound Spinning cone columns are used in a form of steam distillation to gently extract volatile chemicals from liquid foodstuffs Biology and medicine Cone cell, in anatomy, a type of light-sensitive cell found along with rods in the retina of the eye Cone dystrophy, an inherited ocular disorder characterized by the loss of cone cells Cone snail, a carnivorous mollusc of the family Conidae Cone-billed tanager 
(Conothraupis mesoleuca), a species of bird in the family Thraupidae Conifer cone, a seed-bearing organ on conifer plants Growth cone, a dynamic, actin-supported extension of a developing axon seeking its synaptic target Witch-hazel cone gall aphid (Hormaphis hamamelidis), a minuscule insect, a member of the aphid superfamily Coning, a brain herniation in which the cerebellar tonsils move downwards through the foramen magnum Geography Cinder cone, a steep conical hill of volcanic fragments around and downwind from a volcanic vent Cone (hill), a hill in the shape of a cone which may or may not be volcanic in origin Dirt cone, a feature of a glacier or snow patch, in which dirt forms a coating insulating the ice below Parasitic cone (or satellite cone), a geographical feature found around a volcano Shatter cone, rare geological feature in the bedrock beneath meteorite impact craters or underground nuclear explosions Volcanic cone, among the simplest volcanic formations in the world Lambert conformal conic projection (LCC), a conic map projection, which is often used for aeronautical charts Places Cone (Phrygia), a town and bishopric of ancient Phrygia Cone, Michigan, an unincorporated community in Michigan Cone, Texas, an unincorporated community in Crosby County, Texas, United States Cone Islet, a small granite island in south-eastern Australia Conical Range, a small mountain range in southwestern British Columbia, Canada, between Seymour Inlet and Belize Inlet People Bonnie Ethel Cone (1907–2003), American educator and founder of the University of North Carolina at Charlotte Carin Cone (born 1940), American swimmer, Olympic medalist, world record holder, and gold medal winner from the Pan American Games Chadrick Cone (born 1983), American football wide receiver for the Georgia Force in the Arena Football League Cindy Parlow Cone (born 1978), American soccer player and coach Cone sisters, Claribel Cone (1864–1929), and Etta Cone (1870–1949), collectors and socialites David Cone (born 1963), former Major League Baseball pitcher Edward T. Cone (1917–2004), American music theorist and composer Fairfax M. Cone (1903–1977), director of the American Association of Advertising Agencies Fred Cone (baseball) (1848–1909), pioneer professional baseball player Fred Cone (American football) (born 1926), former professional American football running back Fred P. Cone (1871–1948), twenty-seventh governor of Florida (Frederick Preston) Jason McCaslin (born 1980), nicknamed Cone, bassist for the Canadian band Sum 41 James Hal Cone (born 1938), advocate of Black liberation theology John Cone (born 1974), American professional wrestling referee John J. Cone, the fourth Supreme Knight of the Knights of Columbus from 1898 to 1899 Mac Cone (born 1952), Canadian show jumper Martin Cone (1882–1963), 6th president of St. Ambrose College from 1930 to 1937 Marvin Cone (1891–1965), American painter Reuben Cone (1788–1851), pioneer and landowner in Atlanta, Georgia Robert W. 
Cone (1957-2016), major general in the United States Army, and Special Assistant to the Commanding General of TRADOC Sara Cone Bryant (1873–?), author of various children's book in the early 20th century Spencer Cone Jones (1836–1915), President of the Maryland State Senate, Mayor of Rockville, Maryland Spencer Houghton Cone (1785–1855), American Baptist minister and president of the American and Foreign Bible Society Tim Cone (born 1957), American basketball coach Other uses Conical Asian hat, a simple style of straw hat originating in East and Southeast Asia Ice cream cone, an edible container in which ice cream is served, shaped like an inverted cone open at its top Snow cone, a dessert usually made of crushed or shaved ice, flavored with sweet, usually fruit-flavored, brightly colored syrup Traffic cone, a brightly colored cone-shaped plastic object commonly used as a temporary traffic barrier or warning sign USS Cone (DD-866), a Gearing-class destroyer of the United States Navy Elizabethan collar or e-collar, a device to keep an animal from licking or biting itself To locate an aircraft using a searchlight Cone Mills Corporation, a textile manufacturer See also Kone (disambiguation) Colne (disambiguation) (pronounced cone) Kegel (disambiguation) (German/Dutch translation of cone)
5306
https://en.wikipedia.org/wiki/Chemical%20equilibrium
Chemical equilibrium
In a chemical reaction, chemical equilibrium is the state in which both the reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system. This state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but they are equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium. Historical introduction The concept of chemical equilibrium was developed in 1803, after Berthollet found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions must be equal. In the following chemical equation, arrows point both ways to indicate equilibrium. A and B are reactant chemical species, S and T are product species, and α, β, σ, and τ are the stoichiometric coefficients of the respective reactants and products: α A + β B σ S + τ T The equilibrium concentration position of a reaction is said to lie "far to the right" if, at equilibrium, nearly all the reactants are consumed. Conversely the equilibrium position is said to be "far to the left" if hardly any product is formed from the reactants. Guldberg and Waage (1865), building on Berthollet's ideas, proposed the law of mass action: where A, B, S and T are active masses and k+ and k− are rate constants. Since at equilibrium forward and backward rates are equal: and the ratio of the rate constants is also a constant, now known as an equilibrium constant. By convention, the products form the numerator. However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general because rate equations do not, in general, follow the stoichiometry of the reaction as Guldberg and Waage had proposed (see, for example, nucleophilic aliphatic substitution by SN1 or reaction of hydrogen and bromine to form hydrogen bromide). Equality of forward and backward reaction rates, however, is a necessary condition for chemical equilibrium, though it is not sufficient to explain why equilibrium occurs. Despite the limitations of this derivation, the equilibrium constant for a reaction is indeed a constant, independent of the activities of the various species involved, though it does depend on temperature as observed by the van 't Hoff equation. Adding a catalyst will affect both the forward reaction and the reverse reaction in the same way and will not have an effect on the equilibrium constant. The catalyst will speed up both reactions thereby increasing the speed at which equilibrium is reached. Although the macroscopic equilibrium concentrations are constant in time, reactions do occur at the molecular level. For example, in the case of acetic acid dissolved in water and forming acetate and hydronium ions, a proton may hop from one molecule of acetic acid onto a water molecule and then onto an acetate anion to form another molecule of acetic acid and leaving the number of acetic acid molecules unchanged. This is an example of dynamic equilibrium. Equilibria, like the rest of thermodynamics, are statistical phenomena, averages of microscopic behavior. Le Châtelier's principle (1884) predicts the behavior of an equilibrium system when changes to its reaction conditions occur. 
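Because the rate expressions and the resulting constant did not survive in the text above, they are restated here in conventional textbook notation (a reconstruction, not a quotation of the original equations), with {X} denoting the active mass of species X:

```latex
% Law of mass action (Guldberg and Waage) for  alpha A + beta B <=> sigma S + tau T
\text{forward rate} = k_{+}\{A\}^{\alpha}\{B\}^{\beta},
\qquad
\text{backward rate} = k_{-}\{S\}^{\sigma}\{T\}^{\tau},
\qquad
K = \frac{k_{+}}{k_{-}}
  = \frac{\{S\}^{\sigma}\{T\}^{\tau}}{\{A\}^{\alpha}\{B\}^{\beta}}
```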
If a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium moves to partially reverse the change. For example, adding more S (to the chemical reaction above) from the outside will cause an excess of products, and the system will try to counteract this by increasing the reverse reaction and pushing the equilibrium point backward (though the equilibrium constant will stay the same). If mineral acid is added to the acetic acid mixture, increasing the concentration of hydronium ion, the amount of dissociation must decrease as the reaction is driven to the left in accordance with this principle. This can also be deduced from the equilibrium constant expression for the reaction: If {H3O+} increases {CH3CO2H} must increase and must decrease. The H2O is left out, as it is the solvent and its concentration remains high and nearly constant. A quantitative version is given by the reaction quotient. J. W. Gibbs suggested in 1873 that equilibrium is attained when the Gibbs free energy of the system is at its minimum value (assuming the reaction is carried out at a constant temperature and pressure). What this means is that the derivative of the Gibbs energy with respect to reaction coordinate (a measure of the extent of reaction that has occurred, ranging from zero for all reactants to a maximum for all products) vanishes (because dG = 0), signaling a stationary point. This derivative is called the reaction Gibbs energy (or energy change) and corresponds to the difference between the chemical potentials of reactants and products at the composition of the reaction mixture. This criterion is both necessary and sufficient. If a mixture is not at equilibrium, the liberation of the excess Gibbs energy (or Helmholtz energy at constant volume reactions) is the "driving force" for the composition of the mixture to change until equilibrium is reached. The equilibrium constant can be related to the standard Gibbs free energy change for the reaction by the equation where R is the universal gas constant and T the temperature. When the reactants are dissolved in a medium of high ionic strength the quotient of activity coefficients may be taken to be constant. In that case the concentration quotient, Kc, where [A] is the concentration of A, etc., is independent of the analytical concentration of the reactants. For this reason, equilibrium constants for solutions are usually determined in media of high ionic strength. Kc varies with ionic strength, temperature and pressure (or volume). Likewise Kp for gases depends on partial pressure. These constants are easier to measure and encountered in high-school chemistry courses. Thermodynamics At constant temperature and pressure, one must consider the Gibbs free energy, G, while at constant temperature and volume, one must consider the Helmholtz free energy, A, for the reaction; and at constant internal energy and volume, one must consider the entropy, S, for the reaction. The constant volume case is important in geochemistry and atmospheric chemistry where pressure variations are significant. Note that, if reactants and products were in standard state (completely pure), then there would be no reversibility and no equilibrium. Indeed, they would necessarily occupy disjoint volumes of space. 
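The relation quoted above between the equilibrium constant and the standard Gibbs energy change, ΔG° = −RT ln K, is easy to evaluate numerically; the ΔG° value in the sketch below is invented for illustration rather than taken from thermodynamic tables.

```python
import math

R = 8.314                    # gas constant, J mol^-1 K^-1
T = 298.15                   # temperature, K
dG_standard = -20_000.0      # hypothetical standard Gibbs energy change, J mol^-1

K = math.exp(-dG_standard / (R * T))      # from dG_standard = -R*T*ln(K)
print(f"K = {K:.3g}")                     # a negative dG_standard gives K > 1

dG_back = -R * T * math.log(K)            # and the inverse relation
print(f"recovered dG_standard = {dG_back:.1f} J/mol")
```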
The mixing of the products and reactants contributes a large entropy increase (known as entropy of mixing) to states containing an equal mixture of products and reactants and gives rise to a distinctive minimum in the Gibbs energy as a function of the extent of reaction. The standard Gibbs energy change, together with the Gibbs energy of mixing, determines the equilibrium state. In this article only the constant pressure case is considered. The relation between the Gibbs free energy and the equilibrium constant can be found by considering chemical potentials. At constant temperature and pressure, in the absence of an applied voltage, the Gibbs free energy, G, for the reaction depends only on the extent of reaction, ξ (the Greek letter xi), and can only decrease according to the second law of thermodynamics. This means that the derivative of G with respect to ξ must be negative if the reaction happens; at equilibrium this derivative is equal to zero. In order to meet the thermodynamic condition for equilibrium, the Gibbs energy must be stationary, meaning that the derivative of G with respect to the extent of reaction, ξ, must be zero. It can be shown that in this case the sum of chemical potentials times the stoichiometric coefficients of the products is equal to the sum of those corresponding to the reactants. Therefore, the sum of the Gibbs energies of the reactants must be equal to the sum of the Gibbs energies of the products, where μ is in this case a partial molar Gibbs energy, a chemical potential. The chemical potential of a reagent A is a function of the activity, {A}, of that reagent: μA = μA° + RT ln{A}, where μA° is the standard chemical potential. The definition of the Gibbs energy combines with the fundamental thermodynamic relation to produce an expression for dG in terms of the chemical potentials and the amounts dNi. Inserting dNi = νi dξ into this expression introduces the stoichiometric coefficients (νi) and a differential that denotes the reaction occurring to an infinitesimal extent (dξ). At constant pressure and temperature the above equations can be written in terms of the Gibbs free energy change for the reaction. By substituting the chemical potentials, this is related to the standard Gibbs energy change for the reaction, which can be calculated using thermodynamic tables. The reaction quotient is defined as the corresponding quotient of activities. At equilibrium the reaction quotient equals the equilibrium constant, so obtaining the value of the standard Gibbs energy change allows the calculation of the equilibrium constant. Addition of reactants or products For a reactional system at equilibrium: Qr = Keq; ξ = ξeq. If the activities of constituents are modified, the value of the reaction quotient changes and becomes different from the equilibrium constant: Qr ≠ Keq. If the activity of a reagent i increases, the reaction quotient decreases and the reaction will shift to the right (i.e. in the forward direction, and thus more products will form). If the activity of a product j increases, the reaction quotient increases and the reaction will shift to the left (i.e. in the reverse direction, and thus less product will form). Note that activities and equilibrium constants are dimensionless numbers. Treatment of activity The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, Kc, and an activity coefficient quotient, Γ, where [A] is the concentration of reagent A, etc. It is possible in principle to obtain values of the activity coefficients, γ.
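The equations that were lost from the derivation above have the following standard form (again a reconstruction in conventional notation rather than a quotation), with Qr the reaction quotient of activities:

```latex
% Reaction Gibbs energy, reaction quotient, and the equilibrium condition.
\Delta_\mathrm{r}G = \Delta_\mathrm{r}G^{\circ} + RT\ln Q_\mathrm{r},
\qquad
Q_\mathrm{r} = \frac{\{S\}^{\sigma}\{T\}^{\tau}}{\{A\}^{\alpha}\{B\}^{\beta}},
\qquad
\Delta_\mathrm{r}G = 0 \;\text{and}\; Q_\mathrm{r} = K
\;\Longrightarrow\;
\Delta_\mathrm{r}G^{\circ} = -RT\ln K
```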
For solutions, equations such as the Debye–Hückel equation or extensions such as Davies equation Specific ion interaction theory or Pitzer equations may be used.Software (below) However this is not always possible. It is common practice to assume that Γ is a constant, and to use the concentration quotient in place of the thermodynamic equilibrium constant. It is also general practice to use the term equilibrium constant instead of the more accurate concentration quotient. This practice will be followed here. For reactions in the gas phase partial pressure is used in place of concentration and fugacity coefficient in place of activity coefficient. In the real world, for example, when making ammonia in industry, fugacity coefficients must be taken into account. Fugacity, f, is the product of partial pressure and fugacity coefficient. The chemical potential of a species in the real gas phase is given by so the general expression defining an equilibrium constant is valid for both solution and gas phases. Concentration quotients In aqueous solution, equilibrium constants are usually determined in the presence of an "inert" electrolyte such as sodium nitrate, NaNO3, or potassium perchlorate, KClO4. The ionic strength of a solution is given by where ci and zi stand for the concentration and ionic charge of ion type i, and the sum is taken over all the N types of charged species in solution. When the concentration of dissolved salt is much higher than the analytical concentrations of the reagents, the ions originating from the dissolved salt determine the ionic strength, and the ionic strength is effectively constant. Since activity coefficients depend on ionic strength, the activity coefficients of the species are effectively independent of concentration. Thus, the assumption that Γ is constant is justified. The concentration quotient is a simple multiple of the equilibrium constant. However, Kc will vary with ionic strength. If it is measured at a series of different ionic strengths, the value can be extrapolated to zero ionic strength. The concentration quotient obtained in this manner is known, paradoxically, as a thermodynamic equilibrium constant. Before using a published value of an equilibrium constant in conditions of ionic strength different from the conditions used in its determination, the value should be adjustedSoftware (below). Metastable mixtures A mixture may appear to have no tendency to change, though it is not at equilibrium. For example, a mixture of SO2 and O2 is metastable as there is a kinetic barrier to formation of the product, SO3. 2 SO2 + O2 2 SO3 The barrier can be overcome when a catalyst is also present in the mixture as in the contact process, but the catalyst does not affect the equilibrium concentrations. Likewise, the formation of bicarbonate from carbon dioxide and water is very slow under normal conditions but almost instantaneous in the presence of the catalytic enzyme carbonic anhydrase. Pure substances When pure substances (liquids or solids) are involved in equilibria their activities do not appear in the equilibrium constant because their numerical values are considered one. Applying the general formula for an equilibrium constant to the specific case of a dilute solution of acetic acid in water one obtains CH3CO2H + H2O CH3CO2− + H3O+ For all but very concentrated solutions, the water can be considered a "pure" liquid, and therefore it has an activity of one. The equilibrium constant expression is therefore usually written as . 
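Two expressions that were lost in this section are restated below in conventional notation (a reconstruction): the ionic strength of a solution and the dissociation constant of acetic acid, with the activity of water taken as one.

```latex
% Ionic strength and the usual form of the acetic acid equilibrium constant.
I = \tfrac{1}{2}\sum_{i=1}^{N} c_i z_i^{2},
\qquad
K_\mathrm{a} = \frac{\{\mathrm{CH_3CO_2^{-}}\}\,\{\mathrm{H_3O^{+}}\}}{\{\mathrm{CH_3CO_2H}\}}
```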
A particular case is the self-ionization of water 2 H2O H3O+ + OH− Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as It is perfectly legitimate to write [H+] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. Kw varies with variation in ionic strength and/or temperature. The concentrations of H+ and OH− are not independent quantities. Most commonly [OH−] is replaced by Kw[H+]−1 in equilibrium constant expressions which would otherwise include hydroxide ion. Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction: 2 CO CO2 + C for which the equation (without solid carbon) is written as: Multiple equilibria Consider the case of a dibasic acid H2A. When dissolved in water, the mixture will contain H2A, HA− and A2−. This equilibrium can be split into two steps in each of which one proton is liberated.K1 and K2 are examples of stepwise equilibrium constants. The overall equilibrium constant, βD, is product of the stepwise constants. {H2A} <=> {A^{2-}} + {2H+}: Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants.β1 and β2 are examples of association constants. Clearly and ; and For multiple equilibrium systems, also see: theory of Response reactions. Effect of temperature The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation Thus, for exothermic reactions (ΔH is negative), K decreases with an increase in temperature, but, for endothermic reactions, (ΔH is positive) K increases with an increase temperature. An alternative formulation is At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of K with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way. Effect of electric and magnetic fields The effect of electric field on equilibrium has been studied by Manfred Eigen among others. Types of equilibrium Equilibrium can be broadly classified as heterogeneous and homogeneous equilibrium. Homogeneous equilibrium consists of reactants and products belonging in the same phase whereas heterogeneous equilibrium comes into play for reactants and products in different phases. 
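The temperature dependence described by the van 't Hoff equation above can be used to carry an equilibrium constant from one temperature to another, assuming ΔH° is roughly constant over the range; all numbers in the sketch below are invented for illustration.

```python
import math

R = 8.314    # gas constant, J mol^-1 K^-1

def vant_hoff(K1: float, T1: float, T2: float, dH_standard: float) -> float:
    """Estimate K at T2 from K at T1 via the integrated van 't Hoff equation,
    ln(K2/K1) = -(dH_standard/R) * (1/T2 - 1/T1), assuming constant dH_standard."""
    return K1 * math.exp(-(dH_standard / R) * (1.0 / T2 - 1.0 / T1))

K_298 = 1.0e3    # hypothetical equilibrium constant at 298.15 K

# Exothermic reaction (dH < 0): K decreases on heating.
print(f"K(350 K) = {vant_hoff(K_298, 298.15, 350.0, dH_standard=-50_000.0):.3g}")
# Endothermic reaction (dH > 0): K increases on heating.
print(f"K(350 K) = {vant_hoff(K_298, 298.15, 350.0, dH_standard=+50_000.0):.3g}")
```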
In the gas phase: rocket engines The industrial synthesis such as ammonia in the Haber–Bosch process (depicted right) takes place through a succession of equilibrium steps including adsorption processes Atmospheric chemistry Seawater and other natural waters: chemical oceanography Distribution between two phases log D distribution coefficient: important for pharmaceuticals where lipophilicity is a significant property of a drug Liquid–liquid extraction, Ion exchange, Chromatography Solubility product Uptake and release of oxygen by hemoglobin in blood Acid–base equilibria: acid dissociation constant, hydrolysis, buffer solutions, indicators, acid–base homeostasis Metal–ligand complexation: sequestering agents, chelation therapy, MRI contrast reagents, Schlenk equilibrium Adduct formation: host–guest chemistry, supramolecular chemistry, molecular recognition, dinitrogen tetroxide In certain oscillating reactions, the approach to equilibrium is not asymptotically but in the form of a damped oscillation . The related Nernst equation in electrochemistry gives the difference in electrode potential as a function of redox concentrations. When molecules on each side of the equilibrium are able to further react irreversibly in secondary reactions, the final product ratio is determined according to the Curtin–Hammett principle. In these applications, terms such as stability constant, formation constant, binding constant, affinity constant, association constant and dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined. Composition of a mixture When the only equilibrium is that of the formation of a 1:1 adduct as the composition of a mixture, there are many ways that the composition of a mixture can be calculated. For example, see ICE table for a traditional method of calculating the pH of a solution of a weak acid. There are three approaches to the general calculation of the composition of a mixture at equilibrium. The most basic approach is to manipulate the various equilibrium constants until the desired concentrations are expressed in terms of measured equilibrium constants (equivalent to measuring chemical potentials) and initial conditions. Minimize the Gibbs energy of the system. Satisfy the equation of mass balance. The equations of mass balance are simply statements that demonstrate that the total concentration of each reactant must be constant by the law of conservation of mass. Mass-balance equations In general, the calculations are rather complicated or complex. For instance, in the case of a dibasic acid, H2A dissolved in water the two reactants can be specified as the conjugate base, A2−, and the proton, H+. The following equations of mass-balance could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A: with TA the total concentration of species A. Note that it is customary to omit the ionic charges when writing and using these equations. When the equilibrium constants are known and the total concentrations are specified there are two equations in two unknown "free concentrations" [A] and [H]. This follows from the fact that [HA] = β1[A][H], [H2A] = β2[A][H]2 and [OH] = Kw[H]−1 so the concentrations of the "complexes" are calculated from the free concentrations and the equilibrium constants. 
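The two mass-balance equations in the two unknown free concentrations [A] and [H] described above can be solved numerically. In the sketch below the association constants and total concentrations are invented values, ionic charges are omitted as in the text, and activity coefficients are ignored; a real speciation program would treat these points with more care.

```python
import math
from scipy.optimize import brentq   # robust one-dimensional root finder

beta1 = 1.0e9     # [HA]  = beta1 * [A] * [H]      (hypothetical association constant)
beta2 = 1.0e13    # [H2A] = beta2 * [A] * [H]**2   (hypothetical association constant)
Kw = 1.0e-14      # [OH]  = Kw / [H]
TA = 0.010        # total concentration of A, mol/L (invented)
TH = 0.015        # total concentration of dissociable H, mol/L (invented)

def free_A(H: float) -> float:
    """Mass balance in A, rearranged for the free concentration [A]."""
    return TA / (1.0 + beta1 * H + beta2 * H**2)

def H_balance(H: float) -> float:
    """Mass balance in H; its root is the free concentration [H]."""
    A = free_A(H)
    return H + beta1 * A * H + 2.0 * beta2 * A * H**2 - Kw / H - TH

H = brentq(H_balance, 1.0e-13, 1.0)   # bracket corresponds roughly to pH 13 down to 0
A = free_A(H)
print(f"p[H] = {-math.log10(H):.2f},  free [A] = {A:.2e} M")
```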
General expressions applicable to all systems with two reagents, A and B, take the same form: one mass-balance equation for each reagent, expressing its total concentration as the free concentration plus a sum over the complexes, each term being the product of a cumulative formation constant and the appropriate free concentrations. It is easy to see how this can be extended to three or more reagents. Polybasic acids The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A. The diagram alongside shows an example, the hydrolysis of the aluminium Lewis acid Al3+(aq): it gives the species concentrations for a 5 × 10−6 M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium. Solution and precipitation The diagram above illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al3+ are aluminium hydroxides (Al(OH)2+ and related hydroxo species), but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises, more and more Al(OH)3 comes out of solution. This is an example of Le Châtelier's principle in action: increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high, the soluble aluminate ion, Al(OH)4−, is formed. Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni2+ and dimethylglyoxime (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2. Minimization of Gibbs energy At equilibrium, at a specified temperature and pressure, and with no external forces, the Gibbs free energy G is at a minimum: G = Σj μjNj, where μj is the chemical potential of molecular species j, and Nj is the amount of molecular species j. It may be expressed in terms of thermodynamic activity as μj = μj° + RT ln Aj, where μj° is the chemical potential in the standard state, R is the gas constant, T is the absolute temperature, and Aj is the activity. For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints Σj aijNj = bi, where aij is the number of atoms of element i in molecule j and bi is the total number of atoms of element i, which is a constant, since the system is closed. If there are a total of k types of atoms in the system, then there will be k such equations. If ions are involved, an additional row is added to the aij matrix specifying the respective charge on each molecule, which will sum to zero. This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers (although other methods may be used). Define the Lagrangian ℒ = G + Σi λi(Σj aijNj − bi), where the λi are the Lagrange multipliers, one for each element. This allows each of the Nj and λi to be treated independently, and it can be shown using the tools of multivariate calculus that the equilibrium condition is given by μj + Σi λiaij = 0, together with the constraint equations Σj aijNj = bi. (For proof see Lagrange multipliers.)
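The constrained minimization described above can be sketched numerically. The Python code below minimizes the Gibbs energy of a small ideal-gas mixture subject to the element-balance constraints, using the SLSQP solver from scipy rather than an explicit Lagrange-multiplier solution; the species set, the standard chemical potentials and the starting amounts are assumed, illustrative values only.

import numpy as np
from scipy.optimize import minimize

R, T = 8.314, 1000.0                        # J/(mol·K), K
species = ["CO", "H2O", "CO2", "H2"]
# Assumed standard chemical potentials at T (J/mol); illustrative values only.
mu0 = np.array([-200e3, -300e3, -396e3, -100e3])
# Element-balance matrix a_ij: rows are elements (C, O, H), columns are species.
A = np.array([[1, 0, 1, 0],                 # C
              [1, 1, 2, 0],                 # O
              [0, 2, 0, 2]])                # H
n0 = np.array([1.0, 1.0, 0.0, 0.0])         # starting amounts (mol)
b = A @ n0                                  # total atoms of each element

def gibbs(n):
    n = np.clip(n, 1e-12, None)             # keep the logarithms defined
    ntot = n.sum()
    # Ideal-gas mixture at standard pressure: G = sum n_j (mu0_j + RT ln x_j)
    return float(np.sum(n * (mu0 + R * T * np.log(n / ntot))))

constraints = [{"type": "eq", "fun": lambda n: A @ n - b}]
bounds = [(1e-12, None)] * len(species)
result = minimize(gibbs, n0 + 1e-3, method="SLSQP",
                  bounds=bounds, constraints=constraints)

for name, n in zip(species, result.x):
    print(f"{name:4s} {n:.4f} mol")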
This is a set of (m + k) equations in (m + k) unknowns (the Nj and the λi) and may, therefore, be solved for the equilibrium concentrations Nj as long as the chemical activities are known as functions of the concentrations at the given temperature and pressure. (In the ideal case, activities are proportional to concentrations.) (See Thermodynamic databases for pure substances.) Note that the second equation is just the initial constraints for minimization. This method of calculating equilibrium chemical concentrations is useful for systems with a large number of different molecules. The use of k atomic element conservation equations for the mass constraint is straightforward, and replaces the use of the stoichiometric coefficient equations. The results are consistent with those specified by chemical equations. For example, if equilibrium is specified by a single chemical equation:, where νj is the stoichiometric coefficient for the j th molecule (negative for reactants, positive for products) and Rj is the symbol for the j th molecule, a properly balanced equation will obey: Multiplying the first equilibrium condition by νj and using the above equation yields: As above, defining ΔG where Kc'' is the equilibrium constant, and ΔG will be zero at equilibrium. Analogous procedures exist for the minimization of other thermodynamic potentials. See also Acidosis Alkalosis Arterial blood gas Benesi–Hildebrand method Determination of equilibrium constants Equilibrium constant Henderson–Hasselbalch equation Michaelis–Menten kinetics pCO2 pH pKa Redox equilibria Steady state (chemistry) Thermodynamic databases for pure substances Non-random two-liquid model (NRTL model) - Phase equilibrium calculations UNIQUAC model - Phase equilibrium calculations References Further reading Mainly concerned with gas-phase equilibria. External links Analytical chemistry Physical chemistry
5308
https://en.wikipedia.org/wiki/Combination
Combination
In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by or , is equal to the binomial coefficient which can be written using factorials as whenever , and which is zero when . This formula can be derived from the fact that each k-combination of a set S of n members has permutations so or . The set of all k-combinations of a set S is often denoted by . A combination is a combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears. Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960. Number of k-combinations The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by , or by a variation such as , , , or even (the last form is standard in French, Romanian, Russian, Chinese and Polish texts). The same number however occurs in many other mathematical contexts, where it is denoted by (often read as "n choose k"); notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define for all natural numbers k at once by the relation from which it is clear that and further, for k > n. To see that these coefficients count k-combinations from S, one can first consider a collection of n distinct variables Xs labeled by the elements s of S, and expand the product over all elements of S: it has 2n distinct terms corresponding to all the subsets of S, each subset giving the product of the corresponding variables Xs. Now setting all of the Xs equal to the unlabeled variable X, so that the product becomes , the term for each k-combination from S becomes Xk, so that the coefficient of that power in the result equals the number of such k-combinations. Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to , one can use (in addition to the basic cases already given) the recursion relation for 0 < k < n, which follows from =; this leads to the construction of Pascal's triangle. 
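These counting results are easy to verify computationally. The short Python sketch below evaluates the binomial coefficients mentioned above with the standard library's math.comb, with the factorial formula, and with the Pascal's-triangle recursion.

from math import comb, factorial

# Three fruits, chosen two at a time: 3 choose 2 = 3 combinations.
print(comb(3, 2))                        # 3
# Poker hands: 52 choose 5.
print(comb(52, 5))                       # 2598960
# The factorial formula n!/(k!(n-k)!) agrees with comb().
n, k = 52, 5
print(factorial(n) // (factorial(k) * factorial(n - k)))   # 2598960

def pascal(n, k):
    """Binomial coefficient via the recursion C(n, k) = C(n-1, k-1) + C(n-1, k)."""
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return pascal(n - 1, k - 1) + pascal(n - 1, k)

print(pascal(5, 2))                      # 10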
For determining an individual binomial coefficient, it is more practical to use the formula The numerator gives the number of k-permutations of n, i.e., of sequences of k distinct elements of S, while the denominator gives the number of such k-permutations that give the same k-combination when the order is ignored. When k exceeds n/2, the above formula contains factors common to the numerator and the denominator, and canceling them out gives the relation for 0 ≤ k ≤ n. This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of k-combinations by taking the complement of such a combination, which is an -combination. Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember: where n! denotes the factorial of n. It is obtained from the previous formula by multiplying denominator and numerator by !, so it is certainly computationally less efficient than that formula. The last formula can be understood directly, by considering the n! permutations of all the elements of S. Each such permutation gives a k-combination by selecting its first k elements. There are many duplicate selections: any combined permutation of the first k elements among each other, and of the final (n − k) elements among each other produces the same combination; this explains the division in the formula. From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions: Together with the basic cases , these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of k-combinations of sets of growing sizes, and of combinations with a complement of fixed size . Example of counting combinations As a specific example, one can compute the number of five-card hands possible from a standard fifty-two card deck as: Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required: Another alternative computation, equivalent to the first, is based on writing which gives When evaluated in the following order, , this can be computed using only integer arithmetic. The reason is that when each division occurs, the intermediate result that is produced is itself a binomial coefficient, so no remainders ever occur. Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation: Enumerating k-combinations One can enumerate all k-combinations of a given set S of n elements in some fixed order, which establishes a bijection from an interval of integers with the set of those k-combinations. Assuming S is itself ordered, for instance S = { 1, 2, ..., n }, there are two natural possibilities for ordering its k-combinations: by comparing their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to S will not change the initial part of the enumeration, but just add the new k-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with k-combinations of ever larger sets. 
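The claim that the multiplicative formula can be evaluated using only integer arithmetic, with every intermediate result itself a binomial coefficient, can be checked directly; a minimal Python sketch follows.

def choose(n, k):
    """C(n, k) via the multiplicative formula, evaluated left to right.
    After step i the running value equals C(n - k + i, i), so every
    division is exact and only integer arithmetic is needed."""
    k = min(k, n - k)            # use the symmetry C(n, k) = C(n, n - k)
    result = 1
    for i in range(1, k + 1):
        result = result * (n - k + i) // i
    return result

print(choose(52, 5))             # 2598960
# Symmetry check: C(52, 47) equals C(52, 5).
print(choose(52, 47) == choose(52, 5))   # True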
If moreover the intervals of the integers are taken to start at 0, then the k-combination at a given place i in the enumeration can be computed easily from i, and the bijection so obtained is known as the combinatorial number system. It is also known as "rank"/"ranking" and "unranking" in computational mathematics. There are many ways to enumerate k combinations. One way is to visit all the binary numbers less than 2n. Choose those numbers having k nonzero bits, although this is very inefficient even for small n (e.g. n = 20 would require visiting about one million numbers while the maximum number of allowed k combinations is about 186 thousand for k = 10). The positions of these 1 bits in such a number is a specific k-combination of the set { 1, ..., n }. Another simple, faster way is to track k index numbers of the elements selected, starting with {0 .. k−1} (zero-based) or {1 .. k} (one-based) as the first allowed k-combination and then repeatedly moving to the next allowed k-combination by incrementing the last index number if it is lower than n-1 (zero-based) or n (one-based) or the last index number x that is less than the index number following it minus one if such an index exists and resetting the index numbers after x to {x+1, x+2, ...}. Number of combinations with repetition A k-combination with repetitions, or k-multicombination, or multisubset of size k from a set S of size n is given by a set of k not necessarily distinct elements of S, where order is not taken into account: two sequences define the same multiset if one can be obtained from the other by permuting the terms. In other words, it is a sample of k elements from a set of n elements allowing for duplicates (i.e., with replacement) but disregarding different orderings (e.g. {2,1,2} = {1,2,2}). Associate an index to each element of S and think of the elements of S as types of objects, then we can let denote the number of elements of type i in a multisubset. The number of multisubsets of size k is then the number of nonnegative integer (so allowing zero) solutions of the Diophantine equation: If S has n elements, the number of such k-multisubsets is denoted by a notation that is analogous to the binomial coefficient which counts k-subsets. This expression, n multichoose k, can also be given in terms of binomial coefficients: This relationship can be easily proved using a representation known as stars and bars. A solution of the above Diophantine equation can be represented by stars, a separator (a bar), then more stars, another separator, and so on. The total number of stars in this representation is k and the number of bars is n - 1 (since a separation into n parts needs n-1 separators). Thus, a string of k + n - 1 (or n + k - 1) symbols (stars and bars) corresponds to a solution if there are k stars in the string. Any solution can be represented by choosing k out of positions to place stars and filling the remaining positions with bars. For example, the solution of the equation (n = 4 and k = 10) can be represented by The number of such strings is the number of ways to place 10 stars in 13 positions, which is the number of 10-multisubsets of a set with 4 elements. As with binomial coefficients, there are several relationships between these multichoose expressions. For example, for , This identity follows from interchanging the stars and bars in the above representation. 
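The index-incrementing enumeration described above can be written out explicitly. The following Python sketch generates the k-combinations of {0, ..., n − 1} in that order, checks the count against the binomial coefficient, compares the result with itertools.combinations, and also verifies the stars-and-bars count for the 10-multisubsets of a 4-element set.

from itertools import combinations
from math import comb

def k_combinations(n, k):
    """Lexicographic enumeration of k-combinations of {0, ..., n-1},
    following the zero-based successor rule described above."""
    c = list(range(k))
    while True:
        yield tuple(c)
        # Find the rightmost index that can still be incremented.
        i = k - 1
        while i >= 0 and c[i] == n - k + i:
            i -= 1
        if i < 0:
            return
        c[i] += 1
        for j in range(i + 1, k):        # reset the indices that follow
            c[j] = c[j - 1] + 1

combos = list(k_combinations(5, 3))
print(combos[:3])                        # (0, 1, 2), (0, 1, 3), (0, 1, 4)
print(len(combos) == comb(5, 3))         # True: 10 combinations
print(combos == list(combinations(range(5), 3)))   # matches itertools

# Stars and bars: 10-multisubsets of a 4-element set = C(4 + 10 - 1, 10).
print(comb(4 + 10 - 1, 10))              # 286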
Example of counting multisubsets For example, if you have four types of donuts (n = 4) on a menu to choose from and you want three donuts (k = 3), the number of ways to choose the donuts with repetition can be calculated as This result can be verified by listing all the 3-multisubsets of the set S = {1,2,3,4}. This is displayed in the following table. The second column lists the donuts you actually chose, the third column shows the nonnegative integer solutions of the equation and the last column gives the stars and bars representation of the solutions. Number of k-combinations for all k The number of k-combinations for all k is the number of subsets of a set of n elements. There are several ways to see that this number is 2n. In terms of combinations, , which is the sum of the nth row (counting from 0) of the binomial coefficients in Pascal's triangle. These combinations (subsets) are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to 2n − 1, where each digit position is an item from the set of n. Given 3 cards numbered 1 to 3, there are 8 distinct combinations (subsets), including the empty set: Representing these subsets (in the same order) as base 2 numerals: 0 – 000 1 – 001 2 – 010 3 – 011 4 – 100 5 – 101 6 – 110 7 – 111 Probability: sampling a random combination There are various algorithms to pick out a random combination from a given set or list. Rejection sampling is extremely slow for large sample sizes. One way to select a k-combination efficiently from a population of size n is to iterate across each element of the population, and at each step pick that element with a dynamically changing probability of (see Reservoir sampling). Another is to pick a random non-negative integer less than and convert it into a combination using the combinatorial number system. Number of ways to put objects into bins A combination can also be thought of as a selection of two sets of items: those that go into the chosen bin and those that go into the unchosen bin. This can be generalized to any number of bins with the constraint that every item must go to exactly one bin. The number of ways to put objects into bins is given by the multinomial coefficient where n is the number of items, m is the number of bins, and is the number of items that go into bin i. One way to see why this equation holds is to first number the objects arbitrarily from 1 to n and put the objects with numbers into the first bin in order, the objects with numbers into the second bin in order, and so on. There are distinct numberings, but many of them are equivalent, because only the set of items in a bin matters, not their order in it. Every combined permutation of each bins' contents produces an equivalent way of putting items into bins. As a result, every equivalence class consists of distinct numberings, and the number of equivalence classes is . The binomial coefficient is the special case where k items go into the chosen bin and the remaining items go into the unchosen bin: See also Binomial coefficient Combinatorics Block design Kneser graph List of permutation topics Multiset Pascal's triangle Permutation Probability Subset Notes References Erwin Kreyszig, Advanced Engineering Mathematics, John Wiley & Sons, INC, 1999. 
External links Topcoder tutorial on combinatorics C code to generate all combinations of n elements chosen as k Many Common types of permutation and combination math problems, with detailed solutions The Unknown Formula For combinations when choices can be repeated and order does not matter Combinations with repetitions (by: Akshatha AG and Smitha B) The dice roll with a given sum problem An application of the combinations with repetition to rolling multiple dice Combinatorics
5309
https://en.wikipedia.org/wiki/Software
Software
Software is a set of computer programs and associated documentation and data. This is in contrast to hardware, from which the system is built and which actually performs the work. At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example, displaying some text on a computer screen, causing state changes that should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction or is interrupted by the operating system. , most personal computers, smartphone devices, and servers have processors with multiple execution units, or multiple processors performing computation together, so computing has become a much more concurrent activity than in the past. The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. Software may also be written in a low-level assembly language that has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler. History An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show how the engine would calculate Bernoulli numbers. Because of the proofs and the algorithm, she is considered the first computer programmer. The first theory about software, prior to the creation of computers as we know them today, was proposed by Alan Turing in his 1936 essay, On Computable Numbers, with an Application to the Entscheidungsproblem (decision problem). This eventually led to the creation of the academic fields of computer science and software engineering; both fields study software and its creation. Computer science is the theoretical study of computer and software (Turing's essay is an example of computer science), whereas software engineering is the application of engineering principles to development of software. In 2000, Fred Shapiro, a librarian at the Yale Law School, published a letter revealing that John Wilder Tukey's 1958 paper "The Teaching of Concrete Mathematics" contained the earliest known usage of the term "software" found in a search of JSTOR's electronic archives, predating the Oxford English Dictionary's citation by two years. This led many to credit Tukey with coining the term, particularly in obituaries published that same year, although Tukey never claimed credit for any such coinage. In 1995, Paul Niquette claimed he had originally coined the term in October 1953, although he could not find any documents supporting his claim. The earliest known publication of the term "software" in an engineering context was in August 1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum. 
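As a small illustration of the translation from a high-level language into lower-level instructions mentioned above, the following Python sketch uses the standard dis module to display the bytecode produced for a one-line function (bytecode for the CPython virtual machine rather than native machine code, but the principle of translation is the same).

import dis

def add_one(x):
    # A single high-level statement...
    return x + 1

# ...is translated into several lower-level instructions for execution.
dis.dis(add_one)
# Typical output lists instructions such as LOAD_FAST, LOAD_CONST and
# RETURN_VALUE; exact instruction names vary between Python versions.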
Types On virtually all computer platforms, software can be grouped into a few broad categories. Purpose, or domain of use Based on the goal, computer software can be divided into: Application software uses the computer system to perform special functions beyond the basic operation of the computer itself. There are many different types of application software because the range of tasks that can be performed with a modern computer is so large—see list of software. System software manages hardware behaviour, as to provide basic functionalities that are required by users, or for other software to run properly, if at all. System software is also designed for providing a platform for running application software, and it includes the following: Operating systems are essential collections of software that manage resources and provide common services for other software that runs "on top" of them. Supervisory programs, boot loaders, shells and window systems are core parts of operating systems. In practice, an operating system comes bundled with additional software (including application software) so that a user can potentially do some work with a computer that only has one operating system. Device drivers operate or control a particular type of device that is attached to a computer. Each device needs at least one corresponding device driver; because a computer typically has at minimum at least one input device and at least one output device, a computer typically needs more than one device driver. Utilities are computer programs designed to assist users in the maintenance and care of their computers. Malicious software, or malware, is software that is developed to harm or disrupt computers. Malware is closely associated with computer-related crimes, though some malicious programs may have been designed as practical jokes. Nature or domain of execution Desktop applications such as web browsers and Microsoft Office and LibreOffice and WordPerfect, as well as smartphone and tablet applications (called "apps"). JavaScript scripts are pieces of software traditionally embedded in web pages that are run directly inside the web browser when a web page is loaded without the need for a web browser plugin. Software written in other programming languages can also be run within the web browser if the software is either translated into JavaScript, or if a web browser plugin that supports that language is installed; the most common example of the latter is ActionScript scripts, which are supported by the Adobe Flash plugin. Server software, including: Web applications, which usually run on the web server and output dynamically generated web pages to web browsers, using e.g. PHP, Java, ASP.NET, or even JavaScript that runs on the server. In modern times these commonly include some JavaScript to be run in the web browser as well, in which case they typically run partly on the server, partly in the web browser. Plugins and extensions are software that extends or modifies the functionality of another piece of software, and require that software be used in order to function. Embedded software resides as firmware within embedded systems, devices dedicated to a single use or a few uses such as cars and televisions (although some embedded devices such as wireless chipsets can themselves be part of an ordinary, non-embedded computer system such as a PC or smartphone). In the embedded system context there is sometimes no clear distinction between the system software and the application software. 
However, some embedded systems run embedded operating systems, and these systems do retain the distinction between system software and application software (although typically there will only be one, fixed application which is always run). Microcode is a special, relatively obscure type of embedded software which tells the processor itself how to execute machine code, so it is actually a lower level than machine code. It is typically proprietary to the processor manufacturer, and any necessary correctional microcode software updates are supplied by them to users (which is much cheaper than shipping replacement processor hardware). Thus an ordinary programmer would not expect to ever have to deal with it. Programming tools Programming tools are also software in the form of programs or applications that developers use to create, debug, maintain, or otherwise support software. Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, that can be combined to accomplish a task; or they may form an integrated development environment (IDE), which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using both individual tools or an IDE. Topics Architecture People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software. Platform software: The platform includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC one will usually have the ability to change the platform software. Application software: Application software is what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are usually independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications. User-written software: End-user development tailors systems to meet users' specific needs. User software includes spreadsheet templates and word processor templates. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into default application packages, many users may not be aware of the distinction between the original packages, and what has been added by co-workers. 
Execution Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions. Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together. Quality and reliability Software quality is very important, especially for commercial and system software. If software is faulty, it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs" which are often discovered during alpha and beta testing. Software is often also a victim to what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs. Many bugs are discovered and fixed through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing, and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be large. Programs containing command software enable hardware engineering and system operations to function much easier together. License The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies. Proprietary software can be divided into two types: freeware, which includes the category of "free trial" software or "freemium" software (in the past, the term shareware was often used for free trial/freemium software). As the name suggests, freeware can be used for free, although in the case of free trials or freemium software, this is sometimes only true for a limited period of time or with limited functionality. software available for a fee, which can only be legally used on purchase of a license. Open-source software comes with a free software license, granting the recipient the rights to modify and redistribute the software. Patents Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea (e.g. an algorithm) on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations (i.e. 
the actual software packages implementing the patent) are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since all useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law was applied to various aspects of the writing of the software code. Software patents are controversial in the software industry with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example the patent for aspect-oriented programming (AOP), which purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continue to allow software patents. Design and implementation Design and implementation of software vary depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the former has much more basic functionality. Software is usually developed in integrated development environments (IDE) like Eclipse, IntelliJ and Microsoft Visual Studio that can simplify the process and compile the software. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides like GTK+, JavaBeans or Swing. Libraries (APIs) can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library like Form1.Close() and Form1.Show() to close or open the application. Without these APIs, the programmer needs to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their own APIs so that many applications are written using their software libraries that usually have numerous APIs in them. Data structures such as hash tables, arrays, and binary trees, and algorithms such as quicksort, can be useful for creating software. Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods. A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning. 
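Since algorithms such as quicksort are mentioned above as typical building blocks for software, a minimal illustrative Python sketch of quicksort follows (a simple recursive version chosen for clarity rather than the in-place variant more common in practice).

def quicksort(items):
    """Return a new sorted list using the quicksort strategy:
    pick a pivot, partition the remaining items, and sort the parts recursively."""
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]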
More informal terms for programmer also exist, such as "coder" and "hacker", although use of the latter word may cause confusion, because it is more often used to mean someone who illegally breaks into computer systems. See also Computer program Independent software vendor Open-source software Outline of software Software asset management Software release life cycle References Sources External links Software at Encyclopædia Britannica
5311
https://en.wikipedia.org/wiki/Computer%20programming
Computer programming
Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic. Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process. History Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them. Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm. The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. However, Charles Babbage had already written his first program for the Analytical Engine in 1837. In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form. Later a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604, were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory. Machine language Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instruction in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. 
However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages. Compiler languages High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware. The first compiler related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term 'compiler'. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages were soon developed—in particular, COBOL aimed at commercial data processing, and Lisp for computer research. These compiled languages allow the programmer to write programs in terms that are syntactically richer, and more capable of abstracting the code, making it easy to target varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation. Source code entry Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards. Modern programming Quality requirements Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important: Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors). Robustness: how well a program anticipates problems due to errors (not bugs). This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services, and network connections, user error, and unexpected power outages. Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical, and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface. Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and availability of platform-specific compilers (and sometimes libraries) for the language of the source code. Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or to customize, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. 
This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term. Efficiency/performance: Measure of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes careful management of resources, for example cleaning up temporary files and eliminating memory leaks. This is often discussed under the shadow of a chosen programming language. Although the language certainly affects performance, even slower languages, such as Python, can execute programs instantly from a human perspective. Speed, resource usage, and performance are important for programs that bottleneck the system, but efficient use of programmer time is also important and is related to cost: more hardware may be cheaper. Readability of source code In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability. Readability is important because programmers spend the majority of their time reading, trying to understand, reusing and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it. Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include: Different indent styles (whitespace) Comments Decomposition Naming conventions for objects (such as variables, classes, functions, procedures, etc.) The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills. Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like Code refactoring can enhance readability. Algorithmic complexity The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using so-called Big O notation, which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances. Methodologies The first step in most formal software development processes is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging). There exist a lot of different approaches for each of those tasks. One approach popular for requirements analysis is Use Case analysis. 
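To make the Big O discussion above concrete, the following Python sketch contrasts a linear scan, whose running time grows in proportion to the input size (O(n)), with binary search on sorted data, which halves the remaining range at each step (O(log n)); the example data are assumed for illustration.

def linear_search(items, target):
    """O(n): examine each element in turn until the target is found."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

def binary_search(sorted_items, target):
    """O(log n): repeatedly halve the search range (requires sorted input)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 1000, 2))           # sorted even numbers
print(linear_search(data, 640))          # 320
print(binary_search(data, 640))          # 320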
Many programmers use forms of Agile software development where the various stages of formal software development are more integrated together into short cycles that take a few weeks rather than years. There are many approaches to the Software development process. Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA. A similar technique used for database design is Entity-Relationship Modeling (ER Modeling). Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages. Measuring language usage It is very difficult to determine what are the most popular modern programming languages. Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL). Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers often on large mainframe computers, Fortran in engineering applications, scripting languages in Web development, and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language with new functionality added, (for example C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result, loses efficiency and the ability for low-level manipulation). Debugging Debugging is a very important task in the software development process since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user environment and usage history can make it difficult to reproduce the problem. After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, when a bug in a compiler can make it crash when parsing some large source file, a simplification of the test case that results in only few lines from the original source file can be sufficient to reproduce the same crash. Trial-and-error/divide-and-conquer is needed: the programmer will try to remove some parts of the original test case and check if the problem still exists. When debugging the problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check if remaining actions are sufficient for bugs to appear. Scripting and breakpointing is also part of this process. Debugging is often done with IDEs. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. 
Some text editors such as Emacs allow GDB to be invoked through them, to provide a visual environment. Programming languages Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones. Programming languages are essential for software development. They are the building blocks for all software, from the simplest applications to the most sophisticated ones. Allen Downey, in his book How To Think Like A Computer Scientist, writes: The details look different in different languages, but a few basic instructions appear in just about every language: Input: Gather data from the keyboard, a file, or some other device. Output: Display data on the screen or send data to a file or other device. Arithmetic: Perform basic arithmetical operations like addition and multiplication. Conditional Execution: Check for certain conditions and execute the appropriate sequence of statements. Repetition: Perform some action repeatedly, usually with some variation. Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., method of passing arguments), then these functions may be written in any other language. Programmers Computer programmers are those who write computer software. Their jobs usually involve: Prototyping Coding Debugging Documentation Integration Maintenance Requirements analysis Software architecture Software testing Specification Although programming has been presented in the media as a somewhat mathematical subject, some research shows that good programmers have strong skills in natural human languages, and that learning to code is similar to learning a foreign language. See also ACCU Association for Computing Machinery Computer networking Hello world program Institution of Analysts and Programmers National Coding Week Object hierarchy Programming best practices System programming Computer programming in the punched card era The Art of Computer Programming Women in computing Timeline of women in computing References Sources Further reading A.K. Hartmann, Practical Guide to Computer Simulations, Singapore: World Scientific (2009) A. Hunt, D. Thomas, and W. Cunningham, The Pragmatic Programmer. From Journeyman to Master, Amsterdam: Addison-Wesley Longman (1999) Brian W. Kernighan, The Practice of Programming, Pearson (1999) Weinberg, Gerald M., The Psychology of Computer Programming, New York: Van Nostrand Reinhold (1971) Edsger W. Dijkstra, A Discipline of Programming, Prentice-Hall (1976) O.-J. Dahl, E.W.Dijkstra, C.A.R. 
Hoare, Structured Programming, Academic Press (1972) David Gries, The Science of Programming, Springer-Verlag (1981) External links Programming
5312
https://en.wikipedia.org/wiki/On%20the%20Consolation%20of%20Philosophy
On the Consolation of Philosophy
On the Consolation of Philosophy (), often titled as The Consolation of Philosophy or simply the Consolation, is a philosophical work by the Roman philosopher Boethius. Written in 523 while he was imprisoned and awaiting execution by the Ostrogothic King Theodoric, it is often described as the last great Western work of the Classical Period. Boethius' Consolation heavily influenced the philosophy of late antiquity, as well as Medieval and early Renaissance Christianity. Description On the Consolation of Philosophy was written in AD 523 during a one-year imprisonment Boethius served while awaiting trial—and eventual execution—for the alleged crime of treason under the Ostrogothic King Theodoric the Great. Boethius was at the very heights of power in Rome, holding the prestigious office of magister officiorum, and was brought down by treachery. This experience inspired the text, which reflects on how evil can exist in a world governed by God (the problem of theodicy), and how happiness is still attainable amidst fickle fortune, while also considering the nature of happiness and God. In 1891, the academic Hugh Fraser Stewart described the work as "by far the most interesting example of prison literature the world has ever seen." Boethius writes the book as a conversation between himself and a female personification of philosophy, referred to as "Lady Philosophy". Philosophy consoles Boethius by discussing the transitory nature of wealth, fame, and power ("no man can ever truly be secure until he has been forsaken by Fortune"), and the ultimate superiority of things of the mind, which she calls the "one true good". She contends that happiness comes from within, and that virtue is all that one truly has because it is not imperiled by the vicissitudes of fortune. Boethius engages with the nature of predestination and free will, the problem of evil and the "problem of desert", human nature, virtue, and justice. He speaks about the nature of free will and determinism when he asks if God knows and sees all, or does man have free will. On human nature, Boethius says that humans are essentially good, and only when they give in to "wickedness" do they "sink to the level of being an animal." On justice, he says criminals are not to be abused, but rather treated with sympathy and respect, using the analogy of doctor and patient to illustrate the ideal relationship between prosecutor and criminal. Outline On the Consolation of Philosophy is laid out as follows: Book I: Boethius laments his imprisonment before he is visited by Philosophy, personified as a woman. Book II: Philosophy illustrates the capricious nature of Fate by discussing the "wheel of Fortune"; she further argues that true happiness lies in the pursuit of wisdom. Book III: Building on the ideas laid out in the previous book, Philosophy explains how wisdom has a divine source; she also demonstrates how many earthly goods (e.g., wealth, beauty) are fleeting at best. Book IV: Philosophy and Boethius discuss the nature of good and evil, with Philosophy offering several explanations for why evil exists and why the wicked can never attain true happiness. Book V: Boethius asks Philosophy about the role Chance plays in the order of everything. Philosophy argues that Chance is guided by Providence. Boethius then asks Philosophy about the compatibility of an omniscient God and free will. 
Interpretation In the Consolation, Boethius answered religious questions without reference to Christianity, relying solely on natural philosophy and the Classical Greek tradition. He believed in the correspondence between faith and reason. The truths found in Christianity would be no different from the truths found in philosophy. In the words of Henry Chadwick, "If the Consolation contains nothing distinctively Christian, it is also relevant that it contains nothing specifically pagan either...[it] is a work written by a Platonist who is also a Christian." Boethius repeats the Macrobius model of the Earth in the center of a spherical cosmos. The philosophical message of the book fits well with the religious piety of the Middle Ages. Boethius encouraged readers not to pursue worldly goods such as money and power, but to seek internalized virtues. Evil had a purpose, to provide a lesson to help change for good; while suffering from evil was seen as virtuous. Because God ruled the universe through Love, prayer to God and the application of Love would lead to true happiness. The Middle Ages, with their vivid sense of an overruling fate, found in Boethius an interpretation of life closely akin to the spirit of Christianity. The Consolation stands, by its note of fatalism and its affinities with the Christian doctrine of humility, midway between the pagan philosophy of Seneca the Younger and the later Christian philosophy of consolation represented by Thomas à Kempis. The book is heavily influenced by Plato and his dialogues (as was Boethius himself). Its popularity can in part be explained by its Neoplatonic and Christian ethical messages, although current scholarly research is still far from clear exactly why and how the work became so vastly popular in the Middle Ages. Influence From the Carolingian epoch to the end of the Middle Ages and beyond, The Consolation of Philosophy was one of the most popular and influential philosophical works, read by statesmen, poets, historians, philosophers, and theologians. It is through Boethius that much of the thought of the Classical period was made available to the Western Medieval world. It has often been said Boethius was the "last of the Romans and the first of the Scholastics". Translations into the vernacular were done by famous notables, including King Alfred (Old English), Jean de Meun (Old French), Geoffrey Chaucer (Middle English), Queen Elizabeth I (Early Modern English) and Notker Labeo (Old High German). Boethius's Consolation of Philosophy was translated into Italian by Alberto della Piagentina (1332), Anselmo Tanso (Milan, 1520), Lodovico Domenichi (Florence, 1550), Benedetto Varchi (Florence, 1551), Cosimo Bartoli (Florence, 1551) and Tommaso Tamburini (Palermo, 1657). Found within the Consolation are themes that have echoed throughout the Western canon: the female figure of wisdom that informs Dante, the ascent through the layered universe that is shared with Milton, the reconciliation of opposing forces that find their way into Chaucer in The Knight's Tale, and the Wheel of Fortune so popular throughout the Middle Ages. Citations from it occur frequently in Dante's Divina Commedia. Of Boethius, Dante remarked: "The blessed soul who exposes the deceptive world to anyone who gives ear to him." Boethian influence can be found nearly everywhere in Geoffrey Chaucer's poetry, e.g. 
in Troilus and Criseyde, The Knight's Tale, The Clerk's Tale, The Franklin's Tale, The Parson's Tale and The Tale of Melibee, in the character of Lady Nature in The Parliament of Fowls and some of the shorter poems, such as Truth, The Former Age and Lak of Stedfastnesse. Chaucer translated the work in his Boece. The Italian composer Luigi Dallapiccola used some of the text in his choral work Canti di prigionia (1938). The Australian composer Peter Sculthorpe quoted parts of it in his opera or music theatre work Rites of Passage (1972–73), which was commissioned for the opening of the Sydney Opera House but was not ready in time. Tom Shippey in The Road to Middle-earth says how "Boethian" much of the treatment of evil is in Tolkien's The Lord of the Rings. Shippey says that Tolkien knew well the translation of Boethius that was made by King Alfred and he quotes some "Boethian" remarks from Frodo, Treebeard, and Elrond. Boethius and Consolatio Philosophiae are cited frequently by the main character Ignatius J. Reilly in the Pulitzer Prize-winning A Confederacy of Dunces (1980). It is a prosimetrical text, meaning that it is written in alternating sections of prose and metered verse. In the course of the text, Boethius displays a virtuosic command of the forms of Latin poetry. It is classified as a Menippean satire, a fusion of allegorical tale, platonic dialogue, and lyrical poetry. Edward Gibbon described the work as "a golden volume not unworthy of the leisure of Plato or Tully." In the 20th century, there were close to four hundred manuscripts still surviving, a testament to its popularity. Of the work, C. S. Lewis wrote: "To acquire a taste for it is almost to become naturalised in the Middle Ages." Reconstruction of lost songs Hundreds of Latin songs were recorded in neumes from the ninth century through to the thirteenth century, including settings of the poetic passages from Boethius's The Consolation of Philosophy. The music of this song repertory had long been considered irretrievably lost because the notational signs indicated only melodic outlines, relying on now-lapsed oral traditions to fill in the missing details. However, research conducted by Sam Barrett at the University of Cambridge, extended in collaboration with medieval music ensemble Sequentia, has shown that principles of musical setting for this period can be identified, providing crucial information to enable modern realisations. Sequentia performed the world premiere of the reconstructed songs from Boethius's The Consolation of Philosophy at Pembroke College, Cambridge, in April 2016, bringing to life music not heard in over 1,000 years; a number of the songs were subsequently recorded on the CD Boethius: Songs of Consolation. Metra from 11th-Century Canterbury (Glossa, 2018). The detective story behind the recovery of these lost songs is told in a documentary film, and a website launched by the University of Cambridge in 2018 provides further details of the reconstruction process, bringing together manuscripts, reconstructions, and video resources. See also Allegory in the Middle Ages Consolatio Girdle book Metres of Boethius Prosimetrum Stoicism The Wheel of Fortune References Sources Boethius, The Consolation of Philosophy. Trans. Joel C. Relihan, (Hackett Publishing), 2001. Trans. P. G. Walsh, (Oxford World's Classics), 2001. Trans. Richard H. Green, (Library of the Liberal Arts), 1962. Trans. Victor Watts, (Penguin Classics), 2000. Cochrane, Charles Norris., Christianity and Classical Culture, 1940, . 
Henry Chadwick, Boethius: The Consolations of Music, Logic, Theology and Philosophy, 1990. Relihan, Joel C., Ancient Menippean Satire, 1993. Relihan, Joel C., The Prisoner's Philosophy: Life and Death in Boethius's Consolation, 2007. Sanderson Beck, The Consolation of Boethius: An Analysis and Commentary, 1996. The Cambridge History of English and American Literature, Volume I, Ch. 6.5: De Consolatione Philosophiae, 1907–1921. External links Consolatio Philosophiae from Project Gutenberg, HTML conversion, originally translated by H. R. James, London 1897. Consolatio Philosophiae in the original Latin with English comments at Georgetown University First Performance in 1000 years: lost songs from the Middle Ages are brought back to life Medieval translations into Old English by Alfred the Great, Old High German by Notker Labeo, Middle (originally Old) French by Jean de Meun, and Middle English by Geoffrey Chaucer The Consolation of Philosophy, many translations and commentaries from Internet Archive The Consolation of Philosophy, translated by W. V. Cooper, J. M. Dent and Company, London, 1902 (The Temple Classics, edited by Israel Gollancz, M.A.). Online reading and multiple ebook formats at Ex-classics. 524 6th-century Latin books Dialogues Prose texts in Latin Medieval philosophical literature Prison writings Theodicy Visionary literature
5313
https://en.wikipedia.org/wiki/Crouching%20Tiger%2C%20Hidden%20Dragon
Crouching Tiger, Hidden Dragon
Crouching Tiger, Hidden Dragon is a 2000 Mandarin-language wuxia martial arts adventure film directed by Ang Lee and written for the screen by Wang Hui-ling, James Schamus, and Tsai Kuo-jung. The film stars Chow Yun-fat, Michelle Yeoh, Zhang Ziyi, and Chang Chen. It is based on the Chinese novel of the same name serialized between 1941 and 1942 by Wang Dulu, the fourth part of his Crane Iron pentalogy. A multinational venture, the film was made on a US$17 million budget, and was produced by Edko Films and Zoom Hunt Productions in collaboration with China Film Co-productions Corporation and Asian Union Film & Entertainment for Columbia Pictures Film Production Asia in association with Good Machine International. The film premiered at the Cannes Film Festival on 18 May 2000, and was theatrically released in the United States on 8 December. With dialogue in Standard Chinese, subtitled for various markets, Crouching Tiger, Hidden Dragon became a surprise international success, grossing $213.5 million worldwide. It grossed US$128 million in the United States, becoming the highest-grossing foreign-language film produced overseas in American history. The film was the first foreign-language film to break the $100 million mark in the United States. The film received universal acclaim from critics, praised for its story, direction, cinematography, and martial arts sequences. Crouching Tiger, Hidden Dragon won over 40 awards and was nominated for 10 Academy Awards in 2001, including Best Picture, winning Best Foreign Language Film, Best Art Direction, Best Original Score, and Best Cinematography; its ten nominations were the most ever received by a non-English-language film until 2018's Roma tied this record. The film also won four BAFTAs and two Golden Globe Awards, including the award for best foreign-language film at both ceremonies. In retrospective assessments, Crouching Tiger is often cited as one of the finest wuxia films ever made and is widely regarded as one of the greatest films of the 21st century. Plot In Qing dynasty China, Li Mu Bai is a renowned Wudang swordsman, and his friend Yu Shu Lien, a female warrior, heads a private security company. Shu Lien and Mu Bai have long had feelings for each other, but because Shu Lien had been engaged to Mu Bai's close friend Meng Sizhao before his death, Shu Lien and Mu Bai feel bound by loyalty to Meng Sizhao and have not revealed their feelings to each other. Mu Bai, choosing to retire from the life of a swordsman, asks Shu Lien to give his fabled 400-year-old sword "Green Destiny" to their benefactor Sir Te in Beijing. Long ago, Mu Bai's teacher was killed by Jade Fox, a woman who sought to learn Wudang secrets. While at Sir Te's estate, Shu Lien meets Yu Jiaolong, or Jen, who is the daughter of the rich and powerful Governor Yu and is about to get married. One evening, a masked thief sneaks into Sir Te's estate and steals the Green Destiny. Sir Te's servant Master Bo and Shu Lien trace the theft to Governor Yu's compound, where Jade Fox had been posing as Jen's governess for many years. Soon after, Mu Bai arrives in Beijing and discusses the theft with Shu Lien. Master Bo makes the acquaintance of Inspector Tsai, a police investigator from the provinces, and his daughter May, who have come to Beijing in pursuit of Fox. Fox challenges the pair and Master Bo to a showdown that night. Following a protracted battle, the group is on the verge of defeat when Mu Bai arrives and outmaneuvers Fox. 
She reveals that she killed Mu Bai's teacher because he would sleep with her, but refuse to take a woman as a disciple, and she felt it poetic justice for him to die at a woman's hand. Just as Mu Bai is about to kill her, the masked thief reappears and helps Fox. Fox kills Tsai before fleeing with the thief (who is revealed to be Jen). After seeing Jen fight Mu Bai, Fox realizes Jen had been secretly studying the Wudang manual. Fox is illiterate and could only follow the diagrams, whereas Jen's ability to read the manual allowed her to surpass her teacher in martial arts. At night, a bandit named Lo breaks into Jen's bedroom and asks her to leave with him. In the past, when Governor Yu and his family were traveling in the western deserts of Xinjiang, Lo and his bandits raided Jen's caravan and Lo stole her comb. She pursued him to his desert cave to retrieve her comb. However, the pair soon fell in love. Lo eventually convinced Jen to return to her family, though not before telling her a legend of a man who jumped off a mountain to make his wishes come true. Because the man's heart was pure, his wish was granted and he was unharmed, but flew away never to be seen again. Lo has come now to Beijing to persuade Jen not to go through with her arranged marriage. However, Jen refuses to leave with him. Later, Lo interrupts Jen's wedding procession, begging her to leave with him. Shu Lien and Mu Bai convince Lo to wait for Jen at Mount Wudang, where he will be safe from Jen's family, who are furious with him. Jen runs away from her husband on their wedding night before the marriage can be consummated. Disguised in men's clothing, she is accosted at an inn by a large group of warriors; armed with the Green Destiny and her own superior combat skills, she emerges victorious. Jen visits Shu Lien, who tells her that Lo is waiting for her at Mount Wudang. After an angry exchange, the two women engage in a duel. Shu Lien is the superior fighter, but Jen wields the Green Destiny and is able to destroy each weapon that Shu Lien wields, until Shu Lien finally manages to defeat Jen with a broken sword. When Shu Lien shows mercy, Jen wounds Shu Lien in the arm. Mu Bai arrives and pursues Jen into a bamboo forest, where he offers to take her as his student. Jen agrees if he can take Green Destiny from her in three moves. Mu Bai is able to take the sword in only one move, but Jen reneges on her promise, and Mu Bai throws the sword over a waterfall. Jen dives after the sword and is rescued by Fox. Fox puts Jen into a drugged sleep and places her in a cavern, where Mu Bai and Shu Lien discover her. Fox suddenly attacks them with poisoned needles. Mu Bai mortally wounds Fox, only to realize that one of the needles has hit him in the neck. Before dying, Fox confesses that her goal had been to kill Jen because Jen had hidden the secrets of Wudang's fighting techniques from her. Contrite, Jen leaves to prepare an antidote for the poisoned dart. With his last breath, Mu Bai finally confesses his love for Shu Lien. He dies in her arms as Jen returns. Shu Lien forgives Jen, telling her to go to Lo and always be true to herself. The Green Destiny is returned to Sir Te. Jen goes to Mount Wudang and spends the night with Lo. The next morning, Lo finds Jen standing on a bridge overlooking the edge of the mountain. In an echo of the legend that they spoke about in the desert, she asks him to make a wish. Lo wishes for them to be together again, back in the desert. Jen leaps from the bridge, falling into the mists below. 
Cast Credits from the British Film Institute: Chow Yun-fat as Li Mu Bai; Michelle Yeoh as Yu Shu Lien; Zhang Ziyi as Jen Yu; Chang Chen as Lo "Dark Cloud" Xiao Hou; Lang Sihung as Sir Te; Cheng Pei-pei as Jade Fox; Li Fazeng as Governor Yu; Wang Deming as Inspector Tsai; Li Li as Tsai May; Hai Yan as Madam Yu; Gao Xi'an as Bo; Huang Suying as Aunt Wu; Zhang Jinting as De Lu; Du Zhenxi as Uncle Jiao; Li Kai as Gou Jun Pei; Feng Jianhua as Shining Phoenix Mountain Gou; Ma Zhongxuan as Iron Arm Mi; Li Bao-Cheng as Flying Machete Chang; Yang Yongde as Monk Jing. Themes and interpretations Title The title "Crouching Tiger, Hidden Dragon" is a literal translation of the Chinese idiom "臥虎藏龍", which describes a place or situation that is full of unnoticed masters. It comes from a poem by the ancient Chinese poet Yu Xin (513–581) that reads "暗石疑藏虎,盤根似臥龍", which means "behind the rock in the dark probably hides a tiger, and the coiling giant root resembles a crouching dragon". The title also has several other layers of meaning. On one level, the Chinese characters in the title connect to the narrative in that the last characters of Xiaohu's and Jiaolong's names mean "tiger" and "dragon", respectively. On another level, the Chinese idiomatic phrase is an expression referring to the undercurrents of emotion, passion, and secret desire that lie beneath the surface of polite society and civil behavior, which alludes to the film's storyline. Gender roles The success of the Disney animated feature Mulan (1998) popularized the image of the Chinese woman warrior in the West. The storyline of Crouching Tiger, Hidden Dragon is mostly driven by the three female characters. In particular, Jen is driven by her desire to be free from the gender role imposed on her, while Shu Lien, herself oppressed by the gender role, tries to lead Jen back into the role deemed appropriate for her. Some prominent martial arts disciplines are traditionally held to have been originated by women, e.g., Wing Chun. The film's title refers to masters who go unnoticed, a group that necessarily includes many women, and therefore suggests the advantage of a female bodyguard. Poison Poison is also a significant theme in the film. The Chinese word "毒" (dú) means not only physical poison but also cruelty and sinfulness. In the world of martial arts, the use of poison is considered the act of one who is too cowardly and dishonorable to fight; and indeed, the only character who explicitly fits these characteristics is Jade Fox. The poison is a weapon of her bitterness and quest for vengeance: she poisons the master of Wudang, attempts to poison Jen, and succeeds in killing Mu Bai using a poisoned needle. In a further play on this theme by the director, Jade Fox, as she dies, refers to the poison from a young child, "the deceit of an eight-year-old girl", referring to what she considers her own spiritual poisoning by her young apprentice Jen. Li Mu Bai himself warns that, without guidance, Jen could become a "poison dragon". China of the imagination The story is set during the Qing dynasty (1644–1912), but it does not specify an exact time. Lee sought to present a "China of the imagination" rather than an accurate vision of Chinese history. At the same time, Lee also wanted to make a film that Western audiences would want to see. Thus, the film is shot for a balance between Eastern and Western aesthetics. 
Some scenes show uncommon artistry for the typical martial arts film, such as an airborne battle among wispy bamboo plants. Production The film was adapted from the novel Crouching Tiger, Hidden Dragon by Wang Dulu, serialized between 1941 and 1942 in Qingdao Xinmin News. The novel is the fourth in a sequence of five. Under the contract reached between Columbia Pictures and Ang Lee and Hsu Li-kong, US$6 million was invested in the filming, with the stipulation that recouped revenue had to exceed six times that amount before the two parties would begin to share in dividends. Casting Shu Qi was Ang Lee's first choice for the role of Jen, but she turned it down. Filming Although its Academy Award for Best Foreign Language Film was presented to Taiwan, Crouching Tiger, Hidden Dragon was in fact an international co-production between companies in four regions: the Chinese company China Film Co-production Corporation, the American companies Columbia Pictures Film Production Asia, Sony Pictures Classics, and Good Machine, the Hong Kong company Edko Films, and the Taiwanese Zoom Hunt Productions, as well as the unspecified United China Vision and Asia Union Film & Entertainment, created solely for this film. The film was made in Beijing, with location shooting in Urumchi, the western provinces, the Taklamakan Plateau, Shanghai, and Anji in China. The first phase of shooting was in the Gobi Desert, where it consistently rained. Director Ang Lee noted, "I didn't take one break in eight months, not even for half a day. I was miserable—I just didn't have the extra energy to be happy. Near the end, I could hardly breathe. I thought I was about to have a stroke." The stunt work was mostly performed by the actors themselves, and Ang Lee stated in an interview that computers were used "only to remove the safety wires that held the actors" aloft. "Most of the time you can see their faces," he added. "That's really them in the trees." Another compounding issue was the difference between the accents of the four lead actors: Chow Yun-fat is from Hong Kong and speaks Cantonese natively; Michelle Yeoh is from Malaysia and grew up speaking English and Malay, so she learned the Standard Chinese lines phonetically; Chang Chen is from Taiwan and speaks Standard Chinese with a Taiwanese accent. Only Zhang Ziyi spoke with the native Mandarin accent that Ang Lee wanted. Chow Yun-fat said that on "the first day [of shooting], I had to do 28 takes just because of the language. That's never happened before in my life." The film specifically targeted Western audiences rather than the domestic audiences who were already used to wuxia films. As a result, high-quality English subtitles were needed. Ang Lee, who was educated in the West, personally edited the subtitles to ensure they were satisfactory for Western audiences. Soundtrack The score was composed by Tan Dun in 1999. It was played for the film by the Shanghai Symphony Orchestra, the Shanghai National Orchestra and the Shanghai Percussion Ensemble. It features solo passages for cello played by Yo-Yo Ma. The final track, "A Love Before Time", features Coco Lee, who later sang it at the Academy Awards. The composer Chen Yuanlin also collaborated on the project. The music for the entire film was produced in two weeks. The following year (2000), Tan adapted his film score into a cello concerto called simply "Crouching Tiger". Release Marketing The film was adapted into a video game and a series of comics, and it led to the original novel being adapted into a 34-episode Taiwanese television series. 
The latter was released in North America in 2004 as New Crouching Tiger, Hidden Dragon. Home media The film was released on VHS and DVD on 5 June 2001 by Columbia TriStar Home Entertainment. It was also released on UMD on 26 June 2005. In the United Kingdom, it was the year's most-watched foreign-language film on television in 2004. Restoration The film was re-released in a 4K restoration by Sony Pictures Classics in 2023. Reception Box office The film premiered in cinemas on 8 December 2000, in limited release within the United States. During its opening weekend, the film opened in 15th place, grossing $663,205 at 16 locations. On 12 January 2001, Crouching Tiger, Hidden Dragon premiered in cinemas in wide release throughout the U.S., grossing $8,647,295 and ranking in sixth place. The film Save the Last Dance came in first place during that weekend, grossing $23,444,930. The film's revenue dropped by almost 30% in its second week of release, earning $6,080,357. For that particular weekend, the film fell to eighth place, screening in 837 theaters. Save the Last Dance remained unchanged in first place, grossing $15,366,047 in box-office revenue. During its final week in release, Crouching Tiger, Hidden Dragon ranked in a distant 50th place with $37,233 in revenue. The film went on to top out domestically at $128,078,872 in total ticket sales through a 31-week theatrical run. Internationally, the film took in an additional $85,446,864 for a combined worldwide total of $213,525,736. For 2000 as a whole, the film ranked 19th in worldwide box-office performance. Critical response Crouching Tiger, Hidden Dragon was widely acclaimed in the Western world, receiving numerous awards. On Rotten Tomatoes, the film holds an approval rating of 98% based on 168 reviews, with an average rating of 8.6/10. The site's critical consensus states: "The movie that catapulted Ang Lee into the ranks of upper echelon Hollywood filmmakers, Crouching Tiger, Hidden Dragon features a deft mix of amazing martial arts battles, beautiful scenery, and tasteful drama." Metacritic reported the film had an average score of 94 out of 100, based on 32 reviews, indicating "universal acclaim". Some Chinese-speaking viewers were bothered by the accents of the leading actors. Neither Chow (a native Cantonese speaker) nor Yeoh (who was born and raised in Malaysia) spoke Mandarin Chinese as a mother tongue. All four main actors spoke Standard Chinese with vastly different accents: Chow speaks with a Cantonese accent, Yeoh with a Malaysian accent, Chang Chen with a Taiwanese accent, and Zhang Ziyi with a Beijing accent. Yeoh responded to this complaint in a 28 December 2000 interview with Cinescape. She argued, "My character lived outside of Beijing, and so I didn't have to do the Beijing accent." When the interviewer, Craig Reid, remarked, "My mother-in-law has this strange Sichuan-Mandarin accent that's hard for me to understand," Yeoh responded: "Yes, provinces all have their very own strong accents. When we first started the movie, Cheng Pei Pei was going to have her accent, and Chang Zhen was going to have his accent, and this person would have that accent. And in the end nobody could understand what they were saying. Forget about us, even the crew from Beijing thought this was all weird." 
The film boosted the popularity of Chinese wuxia films in the Western world, where they were previously little known, and led to films such as Hero and House of Flying Daggers, both directed by Zhang Yimou, being marketed towards Western audiences. The film also provided the breakthrough role of Zhang Ziyi's career. Film Journal noted that Crouching Tiger, Hidden Dragon "pulled off the rare trifecta of critical acclaim, boffo box-office and gestalt shift", in reference to its ground-breaking success for a subtitled film in the American market. Accolades Gathering widespread critical acclaim at the Toronto and New York film festivals, the film also became a favorite when Academy Awards nominations were announced in 2001. The film was screened out of competition at the 2000 Cannes Film Festival. The film received ten Academy Award nominations, the highest number ever for a non-English-language film, until it was tied by Roma (2018). The film is ranked at number 497 on Empire's 2008 list of the 500 greatest movies of all time, and at number 66 in the magazine's 100 Best Films of World Cinema, published in 2010. In 2010, the Independent Film & Television Alliance selected the film as one of the 30 Most Significant Independent Films of the last 30 years. In 2016, it was voted the 35th-best film of the 21st century in a poll of 177 film critics from around the world conducted by the BBC. The film was included in the BBC's 2018 list of the 100 greatest foreign-language films, as ranked by 209 critics from 43 countries around the world. In 2019, The Guardian ranked the film 51st in its list of the 100 best films of the 21st century. Sequel A sequel to the film, Crouching Tiger, Hidden Dragon: Sword of Destiny, was released in 2016. It was directed by Yuen Wo-ping, who was the action choreographer for the first film. It is a co-production between Pegasus Media, China Film Group Corporation, and the Weinstein Company. Unlike the original film, the sequel was filmed in English for international release and dubbed into Chinese for Chinese releases. Sword of Destiny is based on Iron Knight, Silver Vase, the next (and last) novel in the Crane–Iron Pentalogy. It features a mostly new cast, headed by Donnie Yen. Michelle Yeoh reprised her role from the original. Zhang Ziyi was also approached to appear in Sword of Destiny but refused, stating that she would only appear in a sequel if Ang Lee were directing it. In the West, the sequel was for the most part not shown in theaters and was instead distributed via the streaming service Netflix. Posterity MTV News related the theme of Janet Jackson's song "China Love" to the film: Jackson sings of an emperor's daughter in love with a warrior, unable to sustain the relationship when forced to marry into royalty. The names of the pterosaur genus Kryptodrakon and the ceratopsian genus Yinlong (meaning "hidden dragon" in Greek and Chinese respectively) allude to the film. The character of Lo, or "Dark Cloud", the desert bandit, influenced the development of the protagonist of the Prince of Persia series of video games. The video game Def Jam Fight for NY: The Takeover includes two hybrid fighting styles that pay homage to the film: Crouching Tiger (Martial Arts + Streetfighting + Submissions) and Hidden Dragon (Martial Arts + Streetfighting + Kickboxing). 
See also Anji County Clear waters and green mountains References Further reading – Collection of articles External links 2000 films 2000 fantasy films 2000 martial arts films 2000s adventure films American martial arts films Martial arts fantasy films BAFTA winners (films) Best Film HKFA Best Foreign Language Film Academy Award winners Best Foreign Language Film BAFTA Award winners Best Foreign Language Film Golden Globe winners Chinese martial arts films Films based on Chinese novels Films directed by Ang Lee Films scored by Tan Dun Films set in 18th-century Qing dynasty Films set in Beijing Films set in the 1770s Films that won the Best Original Score Academy Award Films whose art director won the Best Art Direction Academy Award Films whose cinematographer won the Best Cinematography Academy Award Films whose director won the Best Direction BAFTA Award Films whose director won the Best Director Golden Globe Films with screenplays by James Schamus Georges Delerue Award winners Hong Kong martial arts films Hugo Award for Best Dramatic Presentation winning works Independent Spirit Award for Best Film winners Toronto International Film Festival People's Choice Award winners Magic realism films 2000s Mandarin-language films Nebula Award for Best Script-winning works Sony Pictures Classics films Taiwanese martial arts films Wuxia films 2000s American films 2000s Chinese films 2000s Hong Kong films Chinese-language American films
5314
https://en.wikipedia.org/wiki/Charlemagne
Charlemagne
Charlemagne or Charles the Great (Frankish: Karl; 2 April 747 – 28 January 814), a member of the Carolingian dynasty, was King of the Franks from 768, King of the Lombards from 774, and was crowned Emperor of the Romans by Pope Leo III in 800. Charlemagne succeeded in uniting the majority of western and central Europe and was the first recognized emperor to rule from western Europe after the fall of the Western Roman Empire approximately three centuries earlier. The expanded Frankish state that Charlemagne founded was the Carolingian Empire, which is considered the first phase in the history of the Holy Roman Empire. He was canonized by Antipope Paschal III—an act later treated as invalid—and he is now regarded by some as beatified (which is a step on the path to sainthood) in the Catholic Church. Charlemagne was the eldest son of Pepin the Short and Bertrada of Laon. He was born before their canonical marriage. He became king of the Franks in 768 following his father's death, and was initially co-ruler with his brother Carloman I until the latter's death in 771. As sole ruler, he continued his father's policy towards the protection of the papacy and became its sole defender, removing the Lombards from power in northern Italy and leading an incursion into Muslim Spain. He also campaigned against the Saxons to his east, Christianizing them upon penalty of death, which led to events such as the Massacre of Verden. He reached the height of his power in 800 when he was crowned Emperor of the Romans by Pope Leo III on Christmas Day at Old St. Peter's Basilica in Rome. Charlemagne has been called the "Father of Europe" (Pater Europae), as he united most of Western Europe for the first time since the classical era of the Roman Empire, as well as uniting parts of Europe that had never been under Frankish or Roman rule. His reign spurred the Carolingian Renaissance, a period of energetic cultural and intellectual activity within the Western Church. The Eastern Orthodox Church viewed Charlemagne less favourably, due to his support of the filioque and the Pope's preference for him as emperor over the Byzantine Empire's first female monarch, Irene of Athens. These and other disputes led to the eventual split of Rome and Constantinople in the Great Schism of 1054. Charlemagne died in 814 after contracting an infectious lung disease. He was laid to rest in Aachen Cathedral, in his imperial capital city of Aachen. He married at least four times, and three of his legitimate sons lived to adulthood. Only the youngest of them, Louis the Pious, survived to succeed him. Charlemagne is a direct ancestor of many of Europe's royal houses, including the Capetian dynasty, the Ottonian dynasty, the House of Luxembourg, the House of Ivrea and the House of Habsburg. Names and nicknames The name Charlemagne, by which the emperor is normally known in English, comes from the French Charles-le-magne, meaning "Charles the Great". In modern German, Karl der Große has the same meaning. His given name in his native Frankish dialect was Karl ("Charles", Latin: Carolus; Old High German: Karlus; Gallo-Romance: Karlo). He was named after his grandfather, Charles Martel, a choice which intentionally marked him as Martel's true heir. The nickname magnus (great) may have been associated with him already in his lifetime, but this is not certain. The contemporary Latin Royal Frankish Annals routinely call him Carolus magnus rex, "Charles the great king". 
As a nickname, it is only certainly attested in the works of the Poeta Saxo around 900, and it only became standard in all the lands of his former empire around 1000. Charles' achievements gave a new meaning to his name. In many languages of Europe, the very word for "king" derives from his name. This development parallels that of the name of the Caesars in the original Roman Empire, which became kaiser and tsar (or czar), among others. Political background By the 6th century, the western Germanic tribe of the Franks had been Christianised, due in considerable measure to the Catholic conversion of Clovis I. Francia, ruled by the Merovingians, was the most powerful of the kingdoms that succeeded the Western Roman Empire. Following the Battle of Tertry, the Merovingians declined into powerlessness, for which they have been dubbed the rois fainéants ("do-nothing kings"). Almost all government powers were exercised by their chief officer, the mayor of the palace. In 687, Pepin of Herstal, mayor of the palace of Austrasia, ended the strife between various kings and their mayors with his victory at Tertry. He became the sole governor of the entire Frankish kingdom. Pepin was the grandson of two important figures of the Austrasian Kingdom: Saint Arnulf of Metz and Pepin of Landen. Pepin of Herstal was eventually succeeded by his son Charles, later known as Charles Martel (Charles the Hammer). After 737, Charles governed the Franks in lieu of a king and declined to call himself king. Charles was succeeded in 741 by his sons Carloman and Pepin the Short, the father of Charlemagne. In 743, the brothers placed Childeric III on the throne to curb separatism in the periphery. He was the last Merovingian king. Carloman resigned office in 746, preferring to enter the church as a monk. Pepin brought the question of the kingship before Pope Zachary, asking whether it was logical for a king to have no royal power. The pope handed down his decision in 749, decreeing that it was better for Pepin to be called king, as he had the powers of high office as Mayor, so as not to confuse the hierarchy. He therefore ordered him to become the true king. In 750, Pepin was elected by an assembly of the Franks, anointed by the archbishop, and then raised to the office of king. The Pope branded Childeric III as "the false king" and ordered him into a monastery. The Merovingian dynasty was thereby replaced by the Carolingian dynasty, named after Charles Martel. In 753, Pope Stephen II fled from Italy to Francia, appealing to Pepin for assistance for the rights of St. Peter. He was supported in this appeal by Carloman, Charles' brother. In return, the pope could provide only legitimacy. He did this by again anointing and confirming Pepin, this time adding his young sons Carolus (Charlemagne) and Carloman to the royal patrimony. They thereby became heirs to the realm that already covered most of western Europe. In 754, Pepin accepted the Pope's invitation to visit Italy on behalf of St. Peter's rights, dealing successfully with the Lombards. Under the Carolingians, the Frankish kingdom spread to encompass an area including most of Western Europe; the later east–west division of the kingdom formed the basis for modern France and Germany. 
Orman portrays the Treaty of Verdun (843) between the warring grandsons of Charlemagne as the foundation event of an independent France under its first king Charles the Bald; an independent Germany under its first king Louis the German; and an independent intermediate state stretching from the Low Countries along the borderlands to south of Rome under Lothair I, who retained the title of emperor and the capitals Aachen and Rome without the jurisdiction. The middle kingdom had broken up by 890; it was partly absorbed into the Western kingdom (later France) and the Eastern kingdom (Germany), with the rest developing into smaller "buffer" states that exist between France and Germany to this day, namely Benelux and Switzerland. Rise to power Early life The most likely date of Charlemagne's birth is reconstructed from several sources. The date of 742—calculated from Einhard's date of death of January 814 at age 72—predates the marriage of his parents in 744. The year given in the Annales Petaviani, 747, would be more likely, except that it contradicts Einhard and a few other sources in making Charlemagne sixty-seven years old at his death. The month and day of 2 April are based on a calendar from Lorsch Abbey. Charlemagne claimed descent from the Roman emperor Constantine I. In 747, Easter fell on 2 April, a coincidence that likely would have been remarked upon by chroniclers. If Easter was being used as the beginning of the calendar year, then 2 April 747 could have been, by modern reckoning, April 748 (not on Easter). The date favoured by the preponderance of evidence is 2 April 742, based on Charlemagne's age at the time of his death. This date supports the view that Charlemagne was technically an illegitimate child, since he was born out of wedlock, although Einhard does not mention this; Pepin and Bertrada were bound by a private contract or Friedelehe at the time of his birth, but did not marry until 744. Charlemagne's exact birthplace is unknown, although historians have suggested Aachen in modern-day Germany and Liège (Herstal) in present-day Belgium as possible locations. Aachen and Liège are close to the region whence the Merovingian and Carolingian families originated. Other cities have been suggested, including Düren, Gauting, Mürlenbach, Quierzy, and Prüm. No definitive evidence resolves the question. Ancestry Charlemagne was the eldest child of Pepin the Short (714 – 24 September 768, reigned from 751) and his wife Bertrada of Laon (720 – 12 July 783), daughter of Caribert of Laon. Many historians consider Charlemagne (Charles) to have been illegitimate, although some regard this as arguable, because Pepin did not marry Bertrada until 744, which was after Charles' birth; in any case, this status did not exclude him from the succession. Records name only Carloman, Gisela, and three short-lived children named Pepin, Chrothais and Adelais as his younger siblings. Ambiguous high office The most powerful officers of the Frankish people, the Mayor of the Palace (Maior Domus) and one or more kings (rex, reges), were appointed by the election of the people. Elections were not periodic, but were held as required to elect officers ad quos summa imperii pertinebat, "to whom the highest matters of state pertained". Evidently, interim decisions could be made by the Pope, which ultimately needed to be ratified by an assembly of the people that met annually. 
Before he was elected king in 751, Pepin was initially a mayor, a high office he held "as though hereditary" (velut hereditario fungebatur). Einhard explains that "the honour" was usually "given by the people" to the distinguished, but Pepin the Great and his brother Carloman the Wise received it as though hereditary, as had their father, Charles Martel. There was, however, a certain ambiguity about quasi-inheritance. The office was treated as joint property: one Mayorship held by two brothers jointly. Each, however, had his own geographic jurisdiction. When Carloman decided to resign, becoming ultimately a Benedictine at Monte Cassino, the question of the disposition of his quasi-share was settled by the pope. He converted the mayorship into a kingship and awarded the joint property to Pepin, who gained the right to pass it on by inheritance. This decision was not accepted by all family members. Carloman had consented to the temporary tenancy of his own share, which he intended to pass on to his son, Drogo, when the inheritance should be settled at someone's death. By the Pope's decision, in which Pepin had a hand, Drogo was to be disqualified as an heir in favour of his cousin Charles. He took up arms in opposition to the decision and was joined by Grifo, a half-brother of Pepin and Carloman, who had been given a share by Charles Martel, but was stripped of it and held under loose arrest by his half-brothers after an attempt to seize their shares by military action. Grifo perished in combat in the Battle of Saint-Jean-de-Maurienne while Drogo was hunted down and taken into custody. According to the Life, Pepin died in Paris on 24 September 768, whereupon the kingship passed jointly to his sons, "with divine assent" (divino nutu). The Franks "in general assembly" (generali conventu) gave them both the rank of a king (reges) but "partitioned the whole body of the kingdom equally" (totum regni corpus ex aequo partirentur). The annals tell a slightly different version, with the king dying at St-Denis, near Paris. The two "lords" (domni) were "elevated to kingship" (elevati sunt in regnum), Charles on 9 October in Noyon, Carloman on an unspecified date in Soissons. If born in 742, Charles was 26 years old, but he had been campaigning at his father's right hand for several years, which may help to account for his military skill. Carloman was 17. The language, in either case, suggests that there were not two inheritances, which would have created distinct kings ruling over distinct kingdoms, but a single joint inheritance and a joint kingship tenanted by two equal kings, Charles and his brother Carloman. As before, distinct jurisdictions were awarded. Charles received Pepin's original share as Mayor: the outer parts of the kingdom bordering on the sea, namely Neustria, western Aquitaine, and the northern parts of Austrasia; while Carloman was awarded his uncle's former share, the inner parts: southern Austrasia, Septimania, eastern Aquitaine, Burgundy, Provence, and Swabia, lands bordering Italy. The question of whether these jurisdictions were joint shares reverting to the other brother if one brother died or were inherited property passed on to the descendants of the brother who died was never definitely settled. It came up repeatedly over the succeeding decades until the grandsons of Charlemagne created distinct sovereign kingdoms. Aquitainian rebellion Formation of a new Aquitaine In southern Gaul, Aquitaine had been Romanised and people spoke a Romance language. 
Similarly, Hispania had been populated by peoples who spoke various languages, including Celtic, but these had now been mostly replaced by Romance languages. Between Aquitaine and Hispania were the Euskaldunak, Latinised to Vascones, or Basques, whose country, Vasconia, extended, according to the distributions of place names attributable to the Basques, mainly in the western Pyrenees but also as far south as the upper river Ebro in Spain and as far north as the river Garonne in France. The French name Gascony derives from Vasconia. The Romans were never able to subjugate the whole of Vasconia. The border with Aquitaine was at Toulouse. In about 660, the Duchy of Vasconia united with the Duchy of Aquitaine to form a single realm under Felix of Aquitaine, ruling from Toulouse. This was a joint kingship with a Basque Duke, Lupus I. Lupus is the Latin translation of Basque Otsoa, "wolf". At Felix's death in 670 the joint property of the kingship reverted entirely to Lupus. As the Basques had no law of joint inheritance but relied on primogeniture, Lupus in effect founded a hereditary dynasty of Basque rulers of an expanded Aquitaine. Acquisition of Aquitaine by the Carolingians The Latin chronicles of the end of Visigothic Hispania omit many details, such as identification of characters, filling in the gaps and reconciliation of numerous contradictions. Muslim sources, however, present a more coherent view, such as in the Ta'rikh iftitah al-Andalus ("History of the Conquest of al-Andalus") by Ibn al-Qūṭiyya ("the son of the Gothic woman", referring to the granddaughter of Wittiza, the last Visigothic king of a united Hispania, who married a Moor). Ibn al-Qūṭiyya, who had another, much longer name, must have been relying to some degree on family oral tradition. According to Ibn al-Qūṭiyya Wittiza, the last Visigothic king of a united Hispania, died before his three sons, Almund, Romulo, and Ardabast reached maturity. Their mother was queen regent at Toledo, but Roderic, army chief of staff, staged a rebellion, capturing Córdoba. He chose to impose a joint rule over distinct jurisdictions on the true heirs. Evidence of a division of some sort can be found in the distribution of coins imprinted with the name of each king and in the king lists. Wittiza was succeeded by Roderic, who reigned for seven and a half years, followed by Achila (Aquila), who reigned three and a half years. If the reigns of both terminated with the incursion of the Saracens, then Roderic appears to have reigned a few years before the majority of Achila. The latter's kingdom was securely placed to the northeast, while Roderic seems to have taken the rest, notably modern Portugal. The Saracens crossed the mountains to claim Ardo's Septimania, only to encounter the Basque dynasty of Aquitaine, always the allies of the Goths. Odo the Great of Aquitaine was at first victorious at the Battle of Toulouse in 721. Saracen troops gradually massed in Septimania and, in 732, an army under Emir Abdul Rahman Al Ghafiqi advanced into Vasconia, and Odo was defeated at the Battle of the River Garonne. They took Bordeaux and were advancing towards Tours when Odo, powerless to stop them, appealed to his arch-enemy, Charles Martel, mayor of the Franks. In one of the first of the lightning marches for which the Carolingian kings became famous, Charles and his army appeared in the path of the Saracens between Tours and Poitiers, and in the Battle of Tours decisively defeated and killed al-Ghafiqi. 
The Moors returned twice more, each time suffering defeat at Charles' hands—at the River Berre near Narbonne in 737 and in the Dauphiné in 740. Odo's price for salvation from the Saracens was incorporation into the Frankish kingdom, a decision that was repugnant to him and also to his heirs. Loss and recovery of Aquitaine After the death of his father, Hunald I allied himself with free Lombardy. However, Odo had ambiguously left the kingdom jointly to his two sons, Hunald and Hatto. The latter, loyal to Francia, now went to war with his brother over full possession. Victorious, Hunald blinded and imprisoned his brother, only to be so stricken by conscience that he resigned and entered the church as a monk to do penance. The story is told in Annales Mettenses priores. His son Waifer took an early inheritance, becoming duke of Aquitaine and ratifying the alliance with Lombardy. Waifer, deciding to honour it, repeated his father's decision, which he justified by arguing that any agreements with Charles Martel became invalid on Martel's death. Since Aquitaine was now Pepin's inheritance because of the earlier assistance given by Charles Martel, according to some, the latter and his son, the young Charles, hunted down Waifer, who could only conduct a guerrilla war, and executed him. Among the contingents of the Frankish army were Bavarians under Tassilo III, Duke of Bavaria, an Agilofing, the hereditary Bavarian ducal family. Grifo had installed himself as Duke of Bavaria, but Pepin replaced him with a member of the ducal family yet a child, Tassilo, whose protector he had become after the death of his father. The loyalty of the Agilolfings was perpetually in question, but Pepin exacted numerous oaths of loyalty from Tassilo. However, the latter had married Liutperga, a daughter of Desiderius, king of Lombardy. At a critical point in the campaign, Tassilo left the field with all his Bavarians. Out of reach of Pepin, he repudiated all loyalty to Francia. Pepin had no chance to respond as he grew ill and died within a few weeks after Waifer's execution. The first event of the brothers' reign was the uprising of the Aquitainians and Gascons in 769, in that territory split between the two kings. One year earlier, Pepin had finally defeated Waifer, Duke of Aquitaine, after waging a destructive, ten-year war against Aquitaine. Now, Hunald II led the Aquitainians as far north as Angoulême. Charles met Carloman, but Carloman refused to participate and returned to Burgundy. Charles went to war, leading an army to Bordeaux, where he built a fortified camp on the mound at Fronsac. Hunald was forced to flee to the court of Duke Lupus II of Gascony. Lupus, fearing Charles, turned Hunald over in exchange for peace, and Hunald was put in a monastery. Gascon lords also surrendered, and Aquitaine and Gascony were finally fully subdued by the Franks. Marriage to Desiderata The brothers maintained lukewarm relations with the assistance of their mother Bertrada, but in 770 Charles signed a treaty with Duke Tassilo III of Bavaria and married a Lombard Princess (commonly known today as Desiderata), the daughter of King Desiderius, to surround Carloman with his own allies. Though Pope Stephen III first opposed the marriage with the Lombard princess, he found little to fear from a Frankish-Lombard alliance. Less than a year after his marriage, Charlemagne repudiated Desiderata and married a 13-year-old Swabian named Hildegard. The repudiated Desiderata returned to her father's court at Pavia. 
Her father's wrath was now aroused, and he would have gladly allied with Carloman to defeat Charles. Before any open hostilities could be declared, however, Carloman died on 5 December 771, apparently of natural causes. Carloman's widow Gerberga fled to Desiderius' court with her sons for protection. Wives, concubines, and children Charlemagne had eighteen children with seven of his ten known wives or concubines. Nonetheless, he had only four legitimate grandsons, the four sons of his fourth son, Louis. In addition, he had a grandson (Bernard of Italy, the only son of his third son, Pepin of Italy), who was illegitimate but included in the line of inheritance. Among his descendants are several royal dynasties, including the Habsburg, and Capetian dynasties. By consequence, most if not all established European noble families ever since can genealogically trace some of their background to Charlemagne. Children During the first peace of any substantial length (780–782), Charles began to appoint his sons to positions of authority. In 781, during a visit to Rome, he made his two youngest sons kings, crowned by the Pope. The elder of these two, Carloman, was made the king of Italy, taking the Iron Crown that his father had first worn in 774, and in the same ceremony was renamed "Pepin" (not to be confused with Charlemagne's eldest, possibly illegitimate son, Pepin the Hunchback). The younger of the two, Louis, became King of Aquitaine. Charlemagne ordered Pepin and Louis to be raised in the customs of their kingdoms, and he gave their regents some control of their subkingdoms, but kept the real power, though he intended his sons to inherit their realms. He did not tolerate insubordination in his sons: in 792, he banished Pepin the Hunchback to Prüm Abbey because the young man had joined a rebellion against him. Charles was determined to have his children educated, including his daughters, as his parents had instilled the importance of learning in him at an early age. His children were also taught skills in accord with their aristocratic status, which included training in riding and weaponry for his sons, and embroidery, spinning and weaving for his daughters. The sons fought many wars on behalf of their father. Charles was mostly preoccupied with the Bretons, whose border he shared and who insurrected on at least two occasions and were easily put down. He also fought the Saxons on multiple occasions. In 805 and 806, he was sent into the Böhmerwald (modern Bohemia) to deal with the Slavs living there (Bohemian tribes, ancestors of the modern Czechs). He subjected them to Frankish authority and devastated the valley of the Elbe, forcing tribute from them. Pippin had to hold the Avar and Beneventan borders and fought the Slavs to his north. He was uniquely poised to fight the Byzantine Empire when that conflict arose after Charlemagne's imperial coronation and a Venetian rebellion. Finally, Louis was in charge of the Spanish March and fought the Duke of Benevento in southern Italy on at least one occasion. He took Barcelona in a great siege in 801. Charlemagne kept his daughters at home with him and refused to allow them to contract sacramental marriages (though he originally condoned an engagement between his eldest daughter Rotrude and Constantine VI of Byzantium, this engagement was annulled when Rotrude was 11). 
Charlemagne's opposition to his daughters' marriages may possibly have intended to prevent the creation of cadet branches of the family to challenge the main line, as had been the case with Tassilo of Bavaria. However, he tolerated their extramarital relationships, even rewarding their common-law husbands and treasuring the illegitimate grandchildren they produced for him. He also refused to believe stories of their wild behaviour. After his death the surviving daughters were banished from the court by their brother, the pious Louis, to take up residence in the convents they had been bequeathed by their father. At least one of them, Bertha, had a recognised relationship, if not a marriage, with Angilbert, a member of Charlemagne's court circle. Italian campaigns Conquest of the Lombard kingdom At his succession in 772, Pope Adrian I demanded the return of certain cities in the former exarchate of Ravenna in accordance with a promise at the succession of Desiderius. Instead, Desiderius took over certain papal cities and invaded the Pentapolis, heading for Rome. Adrian sent ambassadors to Charlemagne in autumn requesting he enforce the policies of his father, Pepin. Desiderius sent his own ambassadors denying the pope's charges. The ambassadors met at Thionville, and Charlemagne upheld the pope's side. Charlemagne demanded what the pope had requested, but Desiderius swore never to comply. Charlemagne and his uncle Bernard crossed the Alps in 773 and chased the Lombards back to Pavia, which they then besieged. Charlemagne temporarily left the siege to deal with Adelchis, son of Desiderius, who was raising an army at Verona. The young prince was chased to the Adriatic littoral and fled to Constantinople to plead for assistance from Constantine V, who was waging war with Bulgaria. The siege lasted until the spring of 774 when Charlemagne visited the pope in Rome. There he confirmed his father's grants of land, with some later chronicles falsely claiming that he also expanded them, granting Tuscany, Emilia, Venice and Corsica. The pope granted him the title patrician. He then returned to Pavia, where the Lombards were on the verge of surrendering. In return for their lives, the Lombards surrendered and opened the gates in early summer. Desiderius was sent to the abbey of Corbie, and his son Adelchis died in Constantinople, a patrician. Charles, unusually, had himself crowned with the Iron Crown and made the magnates of Lombardy pay homage to him at Pavia. Only Duke Arechis II of Benevento refused to submit and proclaimed independence. Charlemagne was then master of Italy as king of the Lombards. He left Italy with a garrison in Pavia and a few Frankish counts in place the same year. Instability continued in Italy. In 776, Dukes Hrodgaud of Friuli and Hildeprand of Spoleto rebelled. Charlemagne rushed back from Saxony and defeated the Duke of Friuli in battle; the Duke was slain. The Duke of Spoleto signed a treaty. Their co-conspirator, Arechis, was not subdued, and Adelchis, their candidate in Byzantium, never left that city. Northern Italy was now faithfully his. Southern Italy In 787, Charlemagne directed his attention towards the Duchy of Benevento, where Arechis II was reigning independently with the self-given title of Princeps. Charlemagne's siege of Salerno forced Arechis into submission, and in return for peace, Arechis recognized Charlemagne's suzerainty and handed his son Grimoald III over as a hostage. After Arechis' death in 787, Grimoald was allowed to return to Benevento. 
In 788, the principality was invaded by Byzantine troops led by Adelchis, but his attempts were thwarted by Grimoald. The Franks assisted in the repulsion of Adelchis, but, in turn, attacked Benevento's territories several times, obtaining small gains, notably the annexation of Chieti to the duchy of Spoleto. Later, Grimoald tried to throw off Frankish suzerainty, but Charles' sons, Pepin of Italy and Charles the Younger, forced him to submit in 792. Carolingian expansion to the south Vasconia and the Pyrenees The destructive war led by Pepin in Aquitaine, although brought to a satisfactory conclusion for the Franks, proved the Frankish power structure south of the Loire was feeble and unreliable. After the defeat and death of Waifer in 768, while Aquitaine submitted again to the Carolingian dynasty, a new rebellion broke out in 769 led by Hunald II, a possible son of Waifer. He took refuge with his ally Duke Lupus II of Gascony, but probably out of fear of Charlemagne's reprisal, Lupus handed him over to the new King of the Franks, to whom he pledged loyalty, which seemed to confirm the peace in the Basque area south of the Garonne. In the campaign of 769, Charlemagne seems to have followed a policy of "overwhelming force" and avoided a major pitched battle. Wary of new Basque uprisings, Charlemagne seems to have tried to contain Duke Lupus's power by appointing Seguin as the Count of Bordeaux (778) and other counts of Frankish background in bordering areas (Toulouse, County of Fézensac). The Basque Duke, in turn, seems to have contributed decisively to, or schemed, the Battle of Roncevaux Pass (referred to as "Basque treachery"). The defeat of Charlemagne's army in Roncevaux (778) confirmed his determination to rule directly by establishing the Kingdom of Aquitaine (ruled by Louis the Pious), supported by a power base of Frankish officials, distributing lands among colonisers and allocating lands to the Church, which he took as an ally. A Christianisation programme was put in place across the high Pyrenees (778). The new political arrangement for Vasconia did not sit well with local lords. As of 788, Adalric was fighting and capturing Chorson, Carolingian Count of Toulouse. Chorson was eventually released, but Charlemagne, enraged at the compromise, decided to depose him and appointed his trustee William of Gellone. William, in turn, fought the Basques and defeated them after banishing Adalric (790). From 781 (Pallars, Ribagorça) to 806 (Pamplona under Frankish influence), taking the County of Toulouse for a power base, Charlemagne asserted Frankish authority over the Pyrenees by subduing the south-western marches of Toulouse (790) and establishing vassal counties on the southern Pyrenees that were to make up the Marca Hispanica. As of 794, a Frankish vassal, the Basque lord Belasko (al-Galashki, 'the Gaul'), ruled Álava, but Pamplona remained under Cordovan and local control up to 806. Belasko and the counties in the Marca Hispánica provided the necessary base to attack the Andalusians (an expedition led by William Count of Toulouse and Louis the Pious to capture Barcelona in 801). Events in the Duchy of Vasconia (rebellion in Pamplona, count overthrown in Aragon, Duke Seguin of Bordeaux deposed, uprising of the Basque lords, etc.) were to prove it ephemeral upon Charlemagne's death. Roncesvalles campaign According to the Muslim historian Ibn al-Athir, the Diet of Paderborn had received the representatives of the Muslim rulers of Zaragoza, Girona, Barcelona and Huesca. 
Their masters had been cornered in the Iberian peninsula by Abd ar-Rahman I, the Umayyad emir of Cordova. These "Saracen" (Moorish and Muwallad) rulers offered their homage to the king of the Franks in return for military support. Seeing an opportunity to extend Christendom and his own power, and believing the Saxons to be a fully conquered nation, Charlemagne agreed to go to Spain. In 778, he led the Neustrian army across the Western Pyrenees, while the Austrasians, Lombards, and Burgundians passed over the Eastern Pyrenees. The armies met at Saragossa and Charlemagne received the homage of the Muslim rulers, Sulayman al-Arabi and Kasmin ibn Yusuf, but the city did not fall for him. Indeed, Charlemagne faced the toughest battle of his career. The Muslims forced him to retreat, so he decided to go home, as he could not trust the Basques, whom he had subdued by conquering Pamplona. He turned to leave Iberia, but as his army was crossing back through the Pass of Roncesvalles, one of the most famous events of his reign occurred: the Basques attacked and destroyed his rearguard and baggage train. The Battle of Roncevaux Pass, though less a battle than a skirmish, left many famous dead, including the seneschal Eggihard, the count of the palace Anselm, and the warden of the Breton March, Roland, inspiring the subsequent creation of The Song of Roland (La Chanson de Roland), regarded as the first major work in the French language. Contact with Muslims The conquest of Italy brought Charlemagne in contact with Muslims who, at the time, controlled the Mediterranean. Charlemagne's eldest son, Pepin the Hunchback, was much occupied with Muslims in Italy. Charlemagne conquered Corsica and Sardinia at an unknown date and in 799 the Balearic Islands. The islands were often attacked by Muslim pirates, but the counts of Genoa and Tuscany (Boniface) controlled them with large fleets until the end of Charlemagne's reign. Charlemagne even had contact with the caliphal court in Baghdad. In 797 (or possibly 801), the caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas and a clock. Wars with the Moors In Hispania, the struggle against Islam continued unabated throughout the latter half of his reign. Louis was in charge of the Spanish border. In 785, his men captured Girona permanently and extended Frankish control into the Catalan littoral for the duration of Charlemagne's reign (the area remained nominally Frankish until the Treaty of Corbeil in 1258). The Muslim chiefs in the northeast of Islamic Spain were constantly rebelling against Cordovan authority, and they often turned to the Franks for help. The Frankish border was slowly extended until 795, when Girona, Cardona, Ausona and Urgell were united into the new Spanish March, within the old duchy of Septimania. In 797, Barcelona, the greatest city of the region, fell to the Franks when Zeid, its governor, rebelled against Cordova and, failing, handed it to them. The Umayyad authority recaptured it in 799. However, Louis of Aquitaine marched the entire army of his kingdom over the Pyrenees and besieged it for two years, wintering there from 800 to 801, when it capitulated. The Franks continued to press forward against the emir. They probably took Tarragona and forced the submission of Tortosa in 809. The last conquest brought them to the mouth of the Ebro and gave them raiding access to Valencia, prompting the Emir al-Hakam I to recognise their conquests in 813. 
Eastern campaigns Saxon Wars Charlemagne was engaged in almost constant warfare throughout his reign, often at the head of his elite scara bodyguard squadrons. In the Saxon Wars, spanning thirty years and eighteen battles, he conquered Saxonia and proceeded to convert it to Christianity. The Germanic Saxons were divided into four subgroups in four regions. Nearest to Austrasia was Westphalia and farthest away was Eastphalia. Between them was Engria and north of these three, at the base of the Jutland peninsula, was Nordalbingia. In his first campaign, in 773, Charlemagne forced the Engrians to submit and cut down an Irminsul pillar near Paderborn. The campaign was cut short by his first expedition to Italy. He returned in 775, marching through Westphalia and conquering the Saxon fort at Sigiburg. He then crossed Engria, where he defeated the Saxons again. Finally, in Eastphalia, he defeated a Saxon force, and its leader converted to Christianity. Charlemagne returned through Westphalia, leaving encampments at Sigiburg and Eresburg, which had been important Saxon bastions. He then controlled Saxony with the exception of Nordalbingia, but Saxon resistance had not ended. Following his subjugation of the Dukes of Friuli and Spoleto, Charlemagne returned rapidly to Saxony in 776, where a rebellion had destroyed his fortress at Eresburg. The Saxons were once again defeated, but their main leader, Widukind, escaped to Denmark, his wife's home. Charlemagne built a new camp at Karlstadt. In 777, he called a national diet at Paderborn to integrate Saxony fully into the Frankish kingdom. Many Saxons were baptised as Christians. In the summer of 779, he again invaded Saxony and reconquered Eastphalia, Engria and Westphalia. At a diet near Lippe, he divided the land into missionary districts and himself assisted in several mass baptisms (780). He then returned to Italy and, for the first time, the Saxons did not immediately revolt. Saxony was peaceful from 780 to 782. He returned to Saxony in 782 and instituted a code of law and appointed counts, both Saxon and Frank. The laws were draconian on religious issues; for example, the Capitulatio de partibus Saxoniae prescribed death to Saxon pagans who refused to convert to Christianity. This led to renewed conflict. That year, in autumn, Widukind returned and led a new revolt. In response, at Verden in Lower Saxony, Charlemagne is recorded as having ordered the execution of 4,500 Saxon prisoners by beheading, known as the Massacre of Verden ("Verdener Blutgericht"). The killings triggered three years of renewed bloody warfare. During this war, the East Frisians between the Lauwers and the Weser joined the Saxons in revolt and were finally subdued. The war ended with Widukind accepting baptism. The Frisians afterwards asked for missionaries to be sent to them and a bishop of their own nation, Ludger, was sent. Charlemagne also promulgated a law code, the Lex Frisonum, as he did for most subject peoples. Thereafter, the Saxons maintained the peace for seven years, but in 792 Westphalia again rebelled. The Eastphalians and Nordalbingians joined them in 793, but the insurrection was unpopular and was put down by 794. An Engrian rebellion followed in 796, but the presence of Charlemagne, Christian Saxons and Slavs quickly crushed it. The last insurrection occurred in 804, more than thirty years after Charlemagne's first campaign against them, but also failed. 
According to Einhard: Submission of Bavaria By 774, Charlemagne had invaded the Kingdom of Lombardy, and he later annexed the Lombardian territories and assumed its crown, placing the Papal States under Frankish protection. The Duchy of Spoleto south of Rome was acquired in 774, while in the central western parts of Europe, the Duchy of Bavaria was absorbed and the Bavarian policy continued of establishing tributary marches, (borders protected in return for tribute or taxes) among the Slavic Sorbs and Czechs. The remaining power confronting the Franks in the east were the Avars. However, Charlemagne acquired other Slavic areas, including Bohemia, Moravia, Austria and Croatia. In 789, Charlemagne turned to Bavaria. He claimed that Tassilo III, Duke of Bavaria was an unfit ruler, due to his oath-breaking. The charges were exaggerated, but Tassilo was deposed anyway and put in the monastery of Jumièges. In 794, Tassilo was made to renounce any claim to Bavaria for himself and his family (the Agilolfings) at the synod of Frankfurt; he formally handed over to the king all of the rights he had held. Bavaria was subdivided into Frankish counties, as had been done with Saxony. Avar campaigns In 788, the Avars, an Asian nomadic group that had settled down in what is today Hungary (Einhard called them Huns), invaded Friuli and Bavaria. Charlemagne was preoccupied with other matters until 790 when he marched down the Danube and ravaged Avar territory to the Győr. A Lombard army under Pippin then marched into the Drava valley and ravaged Pannonia. The campaigns ended when the Saxons revolted again in 792. For the next two years, Charlemagne was occupied, along with the Slavs, against the Saxons. Pippin and Duke Eric of Friuli continued, however, to assault the Avars' ring-shaped strongholds. The great Ring of the Avars, their capital fortress, was taken twice. The booty was sent to Charlemagne at his capital, Aachen, and redistributed to his followers and to foreign rulers, including King Offa of Mercia. Soon the Avar tuduns had lost the will to fight and travelled to Aachen to become vassals to Charlemagne and to become Christians. Charlemagne accepted their surrender and sent one native chief, baptised Abraham, back to Avaria with the ancient title of khagan. Abraham kept his people in line, but in 800, the Bulgarians under Khan Krum attacked the remains of the Avar state. In 803, Charlemagne sent a Bavarian army into Pannonia, defeating and bringing an end to the Avar confederation. In November of the same year, Charlemagne went to Regensburg where the Avar leaders acknowledged him as their ruler. In 805, the Avar khagan, who had already been baptised, went to Aachen to ask permission to settle with his people south-eastward from Vienna. The Transdanubian territories became integral parts of the Frankish realm, which was abolished by the Magyars in 899–900. Northeast Slav expeditions In 789, in recognition of his new pagan neighbours, the Slavs, Charlemagne marched an Austrasian-Saxon army across the Elbe into Obotrite territory. The Slavs ultimately submitted, led by their leader Witzin. Charlemagne then accepted the surrender of the Veleti under Dragovit and demanded many hostages. He also demanded permission to send missionaries into this pagan region unmolested. The army marched to the Baltic before turning around and marching to the Rhine, winning much booty with no harassment. The tributary Slavs became loyal allies. 
In 795, when the Saxons broke the peace, the Abotrites and Veleti rebelled with their new ruler against the Saxons. Witzin died in battle and Charlemagne avenged him by harrying the Eastphalians on the Elbe. Thrasuco, his successor, led his men to conquest over the Nordalbingians and handed their leaders over to Charlemagne, who honoured him. The Abotrites remained loyal until Charles' death and fought later against the Danes. Southeast Slav expeditions When Charlemagne incorporated much of Central Europe, he brought the Frankish state face to face with the Avars and Slavs in the southeast. The most southeast Frankish neighbours were Croats, who settled in Lower Pannonia and Duchy of Croatia. While fighting the Avars, the Franks had called for their support. During the 790s, he won a major victory over them in 796. Duke Vojnomir of Lower Pannonia aided Charlemagne, and the Franks made themselves overlords over the Croats of northern Dalmatia, Slavonia and Pannonia. The Frankish commander Eric of Friuli wanted to extend his dominion by conquering the Littoral Croat Duchy. During that time, Dalmatian Croatia was ruled by Duke Višeslav of Croatia. In the Battle of Trsat, the forces of Eric fled their positions and were routed by the forces of Višeslav. Eric was among those killed which was a great blow for the Carolingian Empire. Charlemagne also directed his attention to the Slavs to the west of the Avar khaganate: the Carantanians and Carniolans. These people were subdued by the Lombards and Bavarii and made tributaries, but were never fully incorporated into the Frankish state. Imperium Coronation In 799, Pope Leo III had been assaulted by some of the Romans, who tried to pull out his eyes and tear out his tongue. Leo escaped and fled to Charlemagne at Paderborn. Charlemagne, advised by scholar Alcuin, travelled to Rome, in November 800 and held a synod. On 23 December, Leo swore an oath of innocence to Charlemagne. His position having thereby been weakened, the Pope sought to restore his status. Two days later, at Mass, on Christmas Day (25 December), when Charlemagne knelt at the altar to pray, the Pope crowned him Imperator Romanorum ("Emperor of the Romans") in Saint Peter's Basilica. In so doing, the Pope rejected the legitimacy of Empress Irene of Constantinople: Charlemagne's coronation as Emperor, though intended to represent the continuation of the unbroken line of Emperors from Augustus to Constantine VI, had the effect of setting up two separate (and often opposing) Empires and two separate claims to imperial authority. It led to war in 802, and for centuries to come, the Emperors of both West and East would make competing claims of sovereignty over the whole. Einhard says that Charlemagne was ignorant of the Pope's intent and did not want any such coronation: A number of modern scholars, however, suggest that Charlemagne was indeed aware of the coronation; certainly, he cannot have missed the bejewelled crown waiting on the altar when he came to pray—something even contemporary sources support. Debate Historians have debated for centuries whether Charlemagne was aware before the coronation of the Pope's intention to crown him Emperor (Charlemagne declared that he would not have entered Saint Peter's had he known, according to chapter twenty-eight of Einhard's Vita Karoli Magni), but that debate obscured the more significant question of why the Pope granted the title and why Charlemagne accepted it. 
Collins points out "[t]hat the motivation behind the acceptance of the imperial title was a romantic and antiquarian interest in reviving the Roman Empire is highly unlikely." For one thing, such romance would not have appealed either to Franks or Roman Catholics at the turn of the ninth century, both of whom viewed the Classical heritage of the Roman Empire with distrust. The Franks took pride in having "fought against and thrown from their shoulders the heavy yoke of the Romans" and "from the knowledge gained in baptism, clothed in gold and precious stones the bodies of the holy martyrs whom the Romans had killed by fire, by the sword and by wild animals", as Pepin III described it in a law of 763 or 764. Furthermore, the new title—carrying with it the risk that the new emperor would "make drastic changes to the traditional styles and procedures of government" or "concentrate his attentions on Italy or on Mediterranean concerns more generally"—risked alienating the Frankish leadership. For both the Pope and Charlemagne, the Roman Empire remained a significant power in European politics at this time. The Byzantine Empire, based in Constantinople, continued to hold a substantial portion of Italy, with borders not far south of Rome. Charles' sitting in judgment of the Pope could be seen as usurping the prerogatives of the Emperor in Constantinople: For the Pope, then, there was "no living Emperor at that time" though Henri Pirenne disputes this saying that the coronation "was not in any sense explained by the fact that at this moment a woman was reigning in Constantinople". Nonetheless, the Pope took the extraordinary step of creating one. The papacy had since 727 been in conflict with Irene's predecessors in Constantinople over a number of issues, chiefly the continued Byzantine adherence to the doctrine of iconoclasm, the destruction of Christian images; while from 750, the secular power of the Byzantine Empire in central Italy had been nullified. By bestowing the Imperial crown upon Charlemagne, the Pope arrogated to himself "the right to appoint ... the Emperor of the Romans, ... establishing the imperial crown as his own personal gift but simultaneously granting himself implicit superiority over the Emperor whom he had created." And "because the Byzantines had proved so unsatisfactory from every point of view—political, military and doctrinal—he would select a westerner: the one man who by his wisdom and statesmanship and the vastness of his dominions ... stood out head and shoulders above his contemporaries." With Charlemagne's coronation, therefore, "the Roman Empire remained, so far as either of them [Charlemagne and Leo] were concerned, one and indivisible, with Charles as its Emperor", though there can have been "little doubt that the coronation, with all that it implied, would be furiously contested in Constantinople". Alcuin writes hopefully in his letters of an Imperium Christianum ("Christian Empire"), wherein, "just as the inhabitants of the [Roman Empire] had been united by a common Roman citizenship", presumably this new empire would be united by a common Christian faith. This is the view of Pirenne when he says "Charles was the Emperor of the ecclesia as the Pope conceived it, of the Roman Church, regarded as the universal Church". The Imperium Christianum was further supported at a number of synods all across Europe by Paulinus of Aquileia. 
What is known, from the Byzantine chronicler Theophanes, is that Charlemagne's reaction to his coronation was to take the initial steps towards securing the Constantinopolitan throne by sending envoys of marriage to Irene, and that Irene reacted somewhat favourably to them. Distinctions between the universalist and localist conceptions of the empire remain controversial among historians. According to the former, the empire was a universal monarchy, a "commonwealth of the whole world, whose sublime unity transcended every minor distinction"; and the emperor "was entitled to the obedience of Christendom". According to the latter, the emperor had no ambition for universal dominion; his realm was limited in the same way as that of every other ruler, and when he made more far-reaching claims his object was normally to ward off the attacks either of the Pope or of the Byzantine emperor. According to this view, also, the origin of the empire is to be explained by specific local circumstances rather than by overarching theories. According to Ohnsorge, for a long time, it had been the custom of Byzantium to designate the German princes as spiritual "sons" of the Romans. What might have been acceptable in the fifth century had become provoking and insulting to the Franks in the eighth century. Charles came to believe that the Roman emperor, who claimed to head the world hierarchy of states, was, in reality, no greater than Charles himself, a king as other kings, since beginning in 629 he had entitled himself "Basileus" (translated literally as "king"). Ohnsorge finds it significant that the chief wax seal of Charles, which bore only the inscription: "Christe, protege Carolum regem Francorum" [Christ, protect Charles, king of the Franks], was used from 772 to 813, even during the imperial period and was not replaced by a special imperial seal; indicating that Charles felt himself to be just the king of the Franks. Finally, Ohnsorge points out that in the spring of 813 at Aachen, Charles crowned his only surviving son, Louis, as the emperor without recourse to Rome with only the acclamation of his Franks. The form in which this acclamation was offered was Frankish-Christian rather than Roman. This implies both independence from Rome and a Frankish (non-Roman) understanding of empire. Mayr-Harting argues that the Imperial title was Charlemagne's face-saving offer to incorporate the recently conquered Saxons. Since the Saxons did not have an institution of kingship for their own ethnicity, claiming the right to rule them as King of the Saxons was not possible. Hence, it is argued, Charlemagne used the supra-ethnic Imperial title to incorporate the Saxons, which helped to cement the diverse peoples under his rule. Imperial title Charlemagne used these circumstances to claim that he was the "renewer of the Roman Empire", which had declined under the Byzantines. In his official charters, Charles preferred the style Karolus serenissimus Augustus a Deo coronatus magnus pacificus imperator Romanum gubernans imperium ("Charles, most serene Augustus crowned by God, the great, peaceful emperor ruling the Roman empire") to the more direct Imperator Romanorum ("Emperor of the Romans"). The title of Emperor remained in the Carolingian family for years to come, but divisions of territory and in-fighting over supremacy of the Frankish state weakened its significance. The papacy itself never forgot the title nor abandoned the right to bestow it. 
When the family of Charles ceased to produce worthy heirs, the Pope gladly crowned whichever Italian magnate could best protect him from his local enemies. The empire would remain in continuous existence for over a millennium, as the Holy Roman Empire, a true imperial successor to Charles. Imperial diplomacy The iconoclasm of the Byzantine Isaurian Dynasty was endorsed by the Franks. The Second Council of Nicaea reintroduced the veneration of icons under Empress Irene. The council was not recognised by Charlemagne since no Frankish emissaries had been invited, even though Charlemagne ruled more than three provinces of the classical Roman empire and was considered equal in rank to the Byzantine emperor. And while the Pope supported the reintroduction of the iconic veneration, he politically digressed from Byzantium. He certainly desired to increase the influence of the papacy, to honour his saviour Charlemagne, and to solve the constitutional issues then most troubling to European jurists in an era when Rome was not in the hands of an emperor. Thus, Charlemagne's assumption of the imperial title was not a usurpation in the eyes of the Franks or Italians. It was, however, seen as such in Byzantium, where it was protested by Irene and her successor Nikephoros I—neither of whom had any great effect in enforcing their protests. The East Romans, however, still held several territories in Italy: Venice (what was left of the Exarchate of Ravenna), Reggio (in Calabria), Otranto (in Apulia), and Naples (the Ducatus Neapolitanus). These regions remained outside of Frankish hands until 804, when the Venetians, torn by infighting, transferred their allegiance to the Iron Crown of Pippin, Charles' son. The Pax Nicephori ended. Nicephorus ravaged the coasts with a fleet, initiating the only instance of war between the Byzantines and the Franks. The conflict lasted until 810 when the pro-Byzantine party in Venice gave their city back to the Byzantine Emperor, and the two emperors of Europe made peace: Charlemagne received the Istrian peninsula and in 812 the emperor Michael I Rangabe recognised his status as Emperor, although not necessarily as "Emperor of the Romans". Danish attacks After the conquest of Nordalbingia, the Frankish frontier was brought into contact with Scandinavia. The pagan Danes, "a race almost unknown to his ancestors, but destined to be only too well known to his sons" as Charles Oman described them, inhabiting the Jutland peninsula, had heard many stories from Widukind and his allies who had taken refuge with them about the dangers of the Franks and the fury which their Christian king could direct against pagan neighbours. In 808, the king of the Danes, Godfred, expanded the vast Danevirke across the isthmus of Schleswig. This defence, last employed in the Danish-Prussian War of 1864, was at its beginning a long earthenwork rampart. The Danevirke protected Danish land and gave Godfred the opportunity to harass Frisia and Flanders with pirate raids. He also subdued the Frank-allied Veleti and fought the Abotrites. Godfred invaded Frisia, joked of visiting Aachen, but was murdered before he could do any more, either by a Frankish assassin or by one of his own men. Godfred was succeeded by his nephew Hemming, who concluded the Treaty of Heiligen with Charlemagne in late 811. Death In 813, Charlemagne called Louis the Pious, king of Aquitaine, his only surviving legitimate son, to his court. There Charlemagne crowned his son as co-emperor and sent him back to Aquitaine. 
He then spent the autumn hunting before returning to Aachen on 1 November. In January, he fell ill with pleurisy. In deep depression (mostly because many of his plans were not yet realised), he took to his bed on 21 January, and as Einhard tells it: He was buried that same day, in Aachen Cathedral. The earliest surviving planctus, the Planctus de obitu Karoli, was composed by a monk of Bobbio, which he had patronised. A later story, told by Otho of Lomello, Count of the Palace at Aachen in the time of Emperor Otto III, would claim that he and Otto had discovered Charlemagne's tomb: Charlemagne, they claimed, was seated upon a throne, wearing a crown and holding a sceptre, his flesh almost entirely incorrupt. In 1165, Emperor Frederick I opened the tomb again and placed the emperor in a sarcophagus beneath the floor of the cathedral. In 1215, Emperor Frederick II re-interred him in a casket made of gold and silver known as the Karlsschrein. Charlemagne's death emotionally affected many of his subjects, particularly those of the literary clique who had surrounded him at Aachen. An anonymous monk of Bobbio lamented: Louis succeeded him as Charles had intended. He had left a testament in 811 allocating his assets, but it was not updated prior to his death. He left most of his wealth to the Church, to be used for charity. His empire lasted only another generation in its entirety; its division, according to custom, between Louis's own sons after their father's death laid the foundation for the modern states of Germany and France. Administration Organisation The Carolingian king exercised the bannum, the right to rule and command. Under the Franks, it was a royal prerogative but could be delegated. He had supreme jurisdiction in judicial matters, made legislation, led the army, and protected both the Church and the poor. His administration was an attempt to organise the kingdom, church and nobility around him. As an administrator, Charlemagne stands out for his many reforms: monetary, governmental, military, cultural and ecclesiastical. He is the main protagonist of the "Carolingian Renaissance". Military Charlemagne's success rested primarily on novel siege technologies and excellent logistics rather than the long-claimed "cavalry revolution" led by Charles Martel in the 730s. However, the stirrup, which made the "shock cavalry" lance charge possible, was not introduced to the Frankish kingdom until the late eighth century. Horses were used extensively by the Frankish military because they provided a quick, long-distance method of transporting troops, which was critical to building and maintaining the large empire. Economic and monetary reforms Charlemagne had an important role in determining Europe's immediate economic future. Pursuing his father's reforms, Charlemagne abolished the monetary system based on the gold sou. Instead, he and the Anglo-Saxon King Offa of Mercia took up Pippin's system for pragmatic reasons, notably a shortage of the metal. The gold shortage was a direct consequence of the conclusion of peace with Byzantium, which resulted in ceding Venice and Sicily to the East and losing their trade routes to Africa. The resulting standardisation economically harmonised and unified the complex array of currencies that had been in use at the commencement of his reign, thus simplifying trade and commerce. 
Charlemagne established a new standard, the livre (from the Latin libra, the modern pound), which was based upon a pound of silver—a unit of both money and weight—worth 20 sous (from the Latin solidus [which was primarily an accounting device and never actually minted], the modern shilling) or 240 deniers (from the Latin denarius, the modern penny). During this period, the livre and the sou were counting units; only the denier was a coin of the realm. Charlemagne instituted principles for accounting practice by means of the Capitulare de villis of 802, which laid down strict rules for the way in which incomes and expenses were to be recorded. Charlemagne applied this system to much of the European continent, and Offa's standard was voluntarily adopted by much of England. After Charlemagne's death, continental coinage degraded, and most of Europe resorted to using the English coin, which remained of high quality, until about 1100. Jews in Charlemagne's realm Early in Charlemagne's rule he tacitly allowed Jews to monopolise money lending. He invited Italian Jews to immigrate, as royal clients independent of the feudal landowners, and form trading communities in the agricultural regions of Provence and the Rhineland. Their trading activities augmented the otherwise almost exclusively agricultural economies of these regions. His personal physician was Jewish, and he employed a Jew named Isaac as his personal representative to the Muslim caliphate of Baghdad. Education reforms Part of Charlemagne's success as a warrior, an administrator and ruler can be traced to his admiration for learning and education. His reign is often referred to as the Carolingian Renaissance because of the flowering of scholarship, literature, art and architecture that characterise it. Charlemagne came into contact with the culture and learning of other countries (especially Moorish Spain, Anglo-Saxon England, and Lombard Italy) due to his vast conquests. He greatly increased the provision of monastic schools and scriptoria (centres for book-copying) in Francia. Charlemagne was a lover of books, sometimes having them read to him during meals. He was thought to enjoy the works of Augustine of Hippo. His court played a key role in producing books that taught elementary Latin and different aspects of the church. It also played a part in creating a royal library that contained in-depth works on language and Christian faith. Charlemagne encouraged clerics to translate Christian creeds and prayers into their respective vernaculars as well as to teach grammar and music. Due to the increased interest in intellectual pursuits and the urging of their king, the monks accomplished so much copying that almost every manuscript from that time was preserved. At the same time, at the urging of their king, scholars were producing more secular books on many subjects, including history, poetry, art, music, law, theology, etc. Due to the increased number of titles, private libraries flourished. These were mainly supported by aristocrats and churchmen who could afford to sustain them. At Charlemagne's court, a library was founded and a number of copies of books were produced, to be distributed by Charlemagne. Book production was carried out slowly by hand and took place mainly in large monastic libraries. Books were so in demand during Charlemagne's time that these libraries lent out some books, but only if the borrower offered valuable collateral in return. Most of the surviving works of classical Latin were copied and preserved by Carolingian scholars. 
Indeed, the earliest manuscripts available for many ancient texts are Carolingian. It is almost certain that a text which survived to the Carolingian age survives still. The pan-European nature of Charlemagne's influence is indicated by the origins of many of the men who worked for him: Alcuin, an Anglo-Saxon from York; Theodulf, a Visigoth, probably from Septimania; Paul the Deacon, Lombard; Italians Peter of Pisa and Paulinus of Aquileia; and Franks Angilbert, Angilram, Einhard and Waldo of Reichenau. Charlemagne promoted the liberal arts at court, ordering that his children and grandchildren be well-educated, and even studying himself (in a time when even leaders who promoted education did not take time to learn themselves) under the tutelage of Peter of Pisa, from whom he learned grammar; Alcuin, with whom he studied rhetoric, dialectic (logic), and astronomy (he was particularly interested in the movements of the stars); and Einhard, who tutored him in arithmetic. His great scholarly failure, as Einhard relates, was his inability to write: when in his old age he attempted to learn—practising the formation of letters in his bed during his free time on books and wax tablets he hid under his pillow—"his effort came too late in life and achieved little success", and his ability to read—which Einhard is silent about, and which no contemporary source supports—has also been called into question. In 800, Charlemagne enlarged the hostel at the Muristan in Jerusalem and added a library to it. He certainly had not been personally in Jerusalem. Church reforms Charlemagne expanded the reform Church's programme unlike his father, Pippin, and uncle, Carloman. The deepening of the spiritual life was later to be seen as central to public policy and royal governance. His reform focused on strengthening the church's power structure, improving clergy's skill and moral quality, standardising liturgical practices, improvements on the basic tenets of the faith and the rooting out of paganism. His authority extended over church and state. He could discipline clerics, control ecclesiastical property and define orthodox doctrine. Despite the harsh legislation and sudden change, he had developed support from clergy who approved his desire to deepen the piety and morals of his subjects. In 809–810, Charlemagne called a church council in Aachen, which confirmed the unanimous belief in the West that the Holy Spirit proceeds from the Father and the Son (ex Patre Filioque) and sanctioned inclusion in the Nicene Creed of the phrase Filioque (and the Son). For this Charlemagne sought the approval of Pope Leo III. The Pope, while affirming the doctrine and approving its use in teaching, opposed its inclusion in the text of the Creed as adopted in the 381 First Council of Constantinople. This spoke of the procession of the Holy Spirit from the Father, without adding phrases such as "and the Son", "through the Son", or "alone". Stressing his opposition, the Pope had the original text inscribed in Greek and Latin on two heavy shields that were displayed in Saint Peter's Basilica. Writing reforms During Charles' reign, the Roman half uncial script and its cursive version, which had given rise to various continental minuscule scripts, were combined with features from the insular scripts in use in Irish and English monasteries. Carolingian minuscule was created partly under the patronage of Charlemagne. Alcuin, who ran the palace school and scriptorium at Aachen, was probably a chief influence. 
The revolutionary character of the Carolingian reform, however, can be overemphasised; efforts at taming Merovingian and Germanic influence had been underway before Alcuin arrived at Aachen. The new minuscule was disseminated first from Aachen and later from the influential scriptorium at Tours, where Alcuin retired as an abbot. Political reforms Charlemagne engaged in many reforms of Frankish governance while continuing many traditional practices, such as the division of the kingdom among sons. Divisio regnorum In 806, Charlemagne first made provision for the traditional division of the empire on his death. For Charles the Younger he designated Austrasia and Neustria, Saxony, Burgundy and Thuringia. To Pippin, he gave Italy, Bavaria, and Swabia. Louis received Aquitaine, the Spanish March and Provence. The imperial title was not mentioned, which led to the suggestion that, at that particular time, Charlemagne regarded the title as an honorary achievement that held no hereditary significance. Pepin died in 810 and Charles in 811. Charlemagne then reconsidered the matter, and in 813, crowned his youngest son, Louis, co-emperor and co-King of the Franks, granting him a half-share of the empire and the rest upon Charlemagne's own death. The only part of the Empire that Louis was not promised was Italy, which Charlemagne specifically bestowed upon Pippin's illegitimate son Bernard. Appearance Manner Einhard tells in his twenty-fourth chapter: Charlemagne threw grand banquets and feasts for special occasions such as religious holidays and four of his weddings. When he was not working, he loved Christian books, horseback riding, swimming, bathing in natural hot springs with his friends and family, and hunting. Franks were well known for horsemanship and hunting skills. Charles was a light sleeper and would stay in his bed chambers for entire days at a time due to restless nights. During these days, he would not get out of bed when a quarrel occurred in his kingdom, instead summoning all members of the situation into his bedroom to be given orders. Einhard tells again in the twenty-fourth chapter: "In summer after the midday meal, he would eat some fruit, drain a single cup, put off his clothes and shoes, just as he did for the night, and rest for two or three hours. He was in the habit of awaking and rising from bed four or five times during the night." Language Charlemagne probably spoke a Rhenish Franconian dialect. He also spoke Latin and had at least some understanding of Greek, according to Einhard (Grecam vero melius intellegere quam pronuntiare poterat, "he could understand Greek better than he could speak it"). The largely fictional account of Charlemagne's Iberian campaigns by Pseudo-Turpin, written some three centuries after his death, gave rise to the legend that the king also spoke Arabic. Physical appearance Charlemagne's personal appearance is known from a good description by Einhard after his death in the biography Vita Karoli Magni. Einhard states: The physical portrait provided by Einhard is confirmed by contemporary depictions such as coins and his bronze statuette kept in the Louvre. In 1861, Charlemagne's tomb was opened by scientists who reconstructed his skeleton and estimated it to be measured . A 2010 estimate of his height from an X-ray and CT scan of his tibia was . This puts him in the 99th percentile of height for his period, given that average male height of his time was . The width of the bone suggested he was gracile in body build. 
Dress Charlemagne wore the traditional costume of the Frankish people, described by Einhard thus: He wore a blue cloak and always carried a sword typically of a golden or silver hilt. He wore intricately jeweled swords to banquets or ambassadorial receptions. Nevertheless: On great feast days, he wore embroidery and jewels on his clothing and shoes. He had a golden buckle for his cloak on such occasions and would appear with his great diadem, but he despised such apparel according to Einhard, and usually dressed like the common people. Homes Charlemagne had residences across his kingdom, including numerous private estates that were governed in accordance with the Capitulare de villis. A 9th-century document detailing the inventory of an estate at Asnapium listed amounts of livestock, plants and vegetables and kitchenware including cauldrons, drinking cups, brass kettles and firewood. The manor contained seventeen houses built inside the courtyard for nobles and family members and was separated from its supporting villas. Beatification Charlemagne was revered as a saint in the Holy Roman Empire and some other locations after the twelfth century. The Apostolic See did not recognise his invalid canonisation by Antipope Paschal III, done to gain the favour of Frederick Barbarossa in 1165. The Apostolic See annulled all of Paschal's ordinances at the Third Lateran Council in 1179. He is not enumerated among the 28 saints named "Charles" in the Roman Martyrology. His beatification has been acknowledged as cultus confirmed and is celebrated on 28 January. Cultural impact Middle Ages The author of the Visio Karoli Magni written around 865 uses facts gathered apparently from Einhard and his own observations on the decline of Charlemagne's family after the dissensions war (840–43) as the basis for a visionary tale of Charles' meeting with a prophetic spectre in a dream. Charlemagne was a model knight as one of the Nine Worthies who enjoyed an important legacy in European culture. One of the great medieval literary cycles, the Charlemagne cycle or the Matter of France, centres on his deeds—the Emperor with the Flowing Beard of Roland fame—and his historical commander of the border with Brittany, Roland, and the 12 paladins. These are analogous to, and inspired the myth of, the Knights of the Round Table of King Arthur's court. Their tales constitute the first chansons de geste. In the 12th century, Geoffrey of Monmouth based his stories of Arthur largely on stories of Charlemagne. During the Hundred Years' War in the 14th century, there was considerable cultural conflict in England, where the Norman rulers were aware of their French roots and identified with Charlemagne, Anglo-Saxon natives felt more affinity for Arthur, whose own legends were relatively primitive. Therefore, storytellers in England adapted legends of Charlemagne and his 12 Peers to the Arthurian tales. In the Divine Comedy, the spirit of Charlemagne appears to Dante in the Heaven of Mars, among the other "warriors of the faith". 19th century Charlemagne's capitularies were quoted by Pope Benedict XIV in his apostolic constitution 'Providas' against freemasonry: "For in no way are we able to understand how they can be faithful to us, who have shown themselves unfaithful to God and disobedient to their Priests". Charlemagne appears in Adelchi, the second tragedy by Italian writer Alessandro Manzoni, first published in 1822. 
In 1867, an equestrian statue of Charlemagne was made by Louis Jehotte and was inaugurated in 1868 on the Boulevard d'Avroy in Liège. In the niches of the neo-roman pedestal are six statues of Charlemagne's ancestors (Sainte Begge, Pépin de Herstal, Charles Martel, Bertrude, Pépin de Landen and Pépin le Bref). The North Wall Frieze in the courtroom of the Supreme Court of the United States depicts Charlemagne as a legal reformer. 20th century The city of Aachen has, since 1949, awarded an international prize (called the Karlspreis der Stadt Aachen) in honour of Charlemagne. It is awarded annually to "personages of merit who have promoted the idea of Western unity by their political, economic and literary endeavours." Winners of the prize include Richard von Coudenhove-Kalergi, the founder of the pan-European movement, Alcide De Gasperi, and Winston Churchill. In its national anthem, "El Gran Carlemany", the microstate of Andorra credits Charlemagne with its independence. In 1964, young French singer France Gall released the hit song "Sacré Charlemagne" in which the lyrics blame the great king for imposing the burden of compulsory education on French children. Charlemagne is quoted by Henry Jones, Sr. in Indiana Jones and the Last Crusade. After using his umbrella to induce a flock of seagulls to smash through the glass cockpit of a pursuing German fighter plane, Henry Jones remarks, "I suddenly remembered my Charlemagne: 'Let my armies be the rocks and the trees and the birds in the sky. Despite the quote's popularity since the movie, there is no evidence that Charlemagne actually said this. 21st century A 2010 episode of QI discussed the mathematics completed by Mark Humphrys that calculated that all modern Europeans are highly likely to share Charlemagne as a common ancestor (see most recent common ancestor). The Economist featured a weekly column entitled "Charlemagne", focusing generally on European affairs and, more usually and specifically, on the European Union and its politics. Actor and singer Christopher Lee's symphonic metal concept album Charlemagne: By the Sword and the Cross and its heavy metal follow-up Charlemagne: The Omens of Death feature the events of Charlemagne's life. In April 2014, on the occasion of the 1200th anniversary of Charlemagne's death, public art Mein Karl by Ottmar Hörl at Katschhof place was installed between city hall and the Aachen cathedral, displaying 500 Charlemagne statues. Charlemagne features as a playable character in the 2014 Charlemagne expansion for the grand strategy video game Crusader Kings 2. Charlemagne is a playable character in the Mobile/PC Game Rise of Kingdoms. In the 2018 video game Fate/Extella Link, Charlemagne appears as a Heroic Spirit separated into two Saint Graphs: the adventurous hero Charlemagne, who embodies the fantasy aspect as leader of the Twelve Paladins, and the villain Karl de Große, who embodies the historical aspect as Holy Roman Emperor. In July 2022, Charlemagne featured as a character in an episode of The Family Histories Podcast, and it references his role as an ancestor of all modern Europeans. He is portrayed here in later life, and is speaking Latin, which is translated by a device. He is returned to 9th Century Aquitaine by the end of the episode after a DNA sample has been extracted. Notes References Citations Bibliography Charlemagne, from Encyclopædia Britannica, full-article, latest edition. 
Comprises the Annales regni Francorum and The History of the Sons of Louis the Pious External links The Making of Charlemagne's Europe (freely available database of prosopographical and socio-economic data from legal documents dating to Charlemagne's reign, produced by King's College London) The Sword of Charlemagne (myArmoury.com article) Charter given by Charlemagne for St. Emmeram's Abbey showing the Emperor's seal, 22.2.794 . Taken from the collections of the Lichtbildarchiv älterer Originalurkunden at Marburg University An interactive map of Charlemagne's travels 740s births 814 deaths 8th-century dukes of Bavaria 8th-century Frankish kings 8th-century Lombard monarchs 9th-century dukes of Bavaria 9th-century Holy Roman Emperors 9th-century kings of Italy Beatifications by Pope Benedict XIV Captains General of the Church Carolingian dynasty Chansons de geste Characters in Orlando Innamorato and Orlando Furioso Characters in The Song of Roland Christian royal saints Deaths from respiratory disease Founding monarchs Frankish warriors French bibliophiles French Christians German bibliophiles German Christians Matter of France Medieval Low Countries Medieval Roman consuls
5315
https://en.wikipedia.org/wiki/Character%20encodings%20in%20HTML
Character encodings in HTML
While Hypertext Markup Language (HTML) has been in use since 1991, HTML 4.0 from December 1997 was the first standardized version where international characters were given reasonably complete treatment. When an HTML document includes special characters outside the range of seven-bit ASCII, two goals are worth considering: the information's integrity, and universal browser display. Specifying the document's character encoding There are two general ways to specify which character encoding is used in the document. First, the web server can include the character encoding or "charset" in the Hypertext Transfer Protocol (HTTP) Content-Type header, which would typically look like this: Content-Type: text/html; charset=utf-8 This method gives the HTTP server a convenient way to alter document's encoding according to content negotiation; certain HTTP server software can do it, for example Apache with the module mod_charset_lite. Second, a declaration can be included within the document itself. For HTML it is possible to include this information inside the head element near the top of the document: <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> HTML5 also allows the following syntax to mean exactly the same: <meta charset="utf-8"> XHTML documents have a third option: to express the character encoding via XML declaration, as follows: <?xml version="1.0" encoding="utf-8"?> With this second approach, because the character encoding cannot be known until the declaration is parsed, there is a problem knowing which character encoding is used in the document up to and including the declaration itself. If the character encoding is an ASCII extension then the content up to and including the declaration itself should be pure ASCII and this will work correctly. For character encodings that are not ASCII extensions (i.e. not a superset of ASCII), such as UTF-16BE and UTF-16LE, a processor of HTML, such as a web browser, should be able to parse the declaration in some cases through the use of heuristics. Encoding detection algorithm As of HTML5 the recommended charset is UTF-8. An "encoding sniffing algorithm" is defined in the specification to determine the character encoding of the document based on multiple sources of input, including: Explicit user instruction An explicit meta tag within the first 1024 bytes of the document A byte order mark (BOM) within the first three bytes of the document The HTTP Content-Type or other transport layer information Analysis of the document bytes looking for specific sequences or ranges of byte values, and other tentative detection mechanisms. Characters outside of the printable ASCII range (32 to 126) usually appear incorrectly. This presents few problems for English-speaking users, but other languages regularly—in some cases, always—require characters outside that range. In Chinese, Japanese, and Korean (CJK) language environments where there are several different multi-byte encodings in use, auto-detection is also often employed. Finally, browsers usually permit the user to override incorrect charset label manually as well. It is increasingly common for multilingual websites and websites in non-Western languages to use UTF-8, which allows use of the same encoding for all languages. 
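The interplay of these sources can be illustrated with a short sketch in Python. It is only an illustration of the kinds of checks listed above (byte order mark, transport-layer charset, an early meta declaration, and the UTF-8 default); the function name is invented for the example, and the precedence order is simplified rather than the exact order defined by the WHATWG encoding sniffing algorithm.

import re

def sniff_encoding(raw, http_charset=None):
    # 1. A byte order mark in the first bytes of the document.
    if raw.startswith(b"\xef\xbb\xbf"):
        return "utf-8"
    if raw.startswith(b"\xff\xfe"):
        return "utf-16-le"
    if raw.startswith(b"\xfe\xff"):
        return "utf-16-be"
    # 2. Transport-layer information (the charset from the HTTP Content-Type header).
    if http_charset:
        return http_charset.lower()
    # 3. A meta declaration within the first 1024 bytes of the document.
    head = raw[:1024].decode("ascii", errors="ignore")
    match = re.search(r'charset\s*=\s*["\']?\s*([\w.:-]+)', head, re.IGNORECASE)
    if match:
        return match.group(1).lower()
    # 4. Fall back to the recommended default.
    return "utf-8"

page = b'<meta charset="utf-8"><p>caf\xc3\xa9</p>'
encoding = sniff_encoding(page)
print(encoding)                    # utf-8
print(page.decode(encoding))       # <meta charset="utf-8"><p>café</p>

Real browsers add further steps, such as byte-pattern analysis and user overrides, and resolve conflicting declarations according to the specification.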
UTF-16 or UTF-32, which can be used for all languages as well, are less widely used because they can be harder to handle in programming languages that assume a byte-oriented ASCII superset encoding, and they are less efficient for text with a high frequency of ASCII characters, which is usually the case for HTML documents. Successful viewing of a page is not necessarily an indication that its encoding is specified correctly. If the page's creator and reader are both assuming some platform-specific character encoding, and the server does not send any identifying information, then the reader will nonetheless see the page as the creator intended, but other readers on different platforms or with different native languages will not see the page as intended. Permitted encodings The WHATWG Encoding Standard, referenced by recent HTML standards (the current WHATWG HTML Living Standard, as well as the formerly competing W3C HTML 5.0 and 5.1) specifies a list of encodings which browsers must support. The HTML standards forbid support of other encodings. The Encoding Standard further stipulates that new formats, new protocols (even when existing formats are used) and authors of new documents are required to use UTF-8 exclusively. Besides UTF-8, the following encodings are explicitly listed in the HTML standard itself, with reference to the Encoding Standard: The following additional encodings are listed in the Encoding Standard, and support for them is therefore also required: The following encodings are listed as explicit examples of forbidden encodings: The standard also defines a "replacement" decoder, which maps all content labelled as certain encodings to the replacement character (�), refusing to process it at all. This is intended to prevent attacks (e.g. cross site scripting) which may exploit a difference between the client and server in what encodings are supported in order to mask malicious content. Although the same security concern applies to ISO-2022-JP and UTF-16, which also allow sequences of ASCII bytes to be interpreted differently, this approach was not seen as feasible for them since they are comparatively more frequently used in deployed content. The following encodings receive this treatment: Character references In addition to native character encodings, characters can also be encoded as character references, which can be numeric character references (decimal or hexadecimal) or character entity references. Character entity references are also sometimes referred to as named entities, or HTML entities for HTML. HTML's usage of character references derives from SGML. HTML character references A numeric character reference in HTML refers to a character by its Universal Character Set/Unicode code point, and uses the format &#nnnn; or &#xhhhh; where nnnn is the code point in decimal form, and hhhh is the code point in hexadecimal form. The x must be lowercase in XML documents. The nnnn or hhhh may be any number of digits and may include leading zeros. The hhhh may mix uppercase and lowercase, though uppercase is the usual style. Not all web browsers or email clients used by receivers of HTML documents, or text editors used by authors of HTML documents, will be able to render all HTML characters. Most modern software is able to display most or all of the characters for the user's language, and will draw a box or other clear indicator for characters they cannot render. 
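As a concrete illustration of numeric and named references, the following lines use Python's standard html module; the particular characters are arbitrary examples chosen for the sketch.

import html

# Decimal, hexadecimal and named references for the same character (U+03BB).
print(html.unescape("&#955; &#x3BB; &lambda;"))   # λ λ λ

# Escaping markup-delimiting characters when emitting HTML.
print(html.escape('<a href="x">&</a>'))            # &lt;a href=&quot;x&quot;&gt;&amp;&lt;/a&gt;

# Building a decimal and a hexadecimal reference for an arbitrary character.
ch = "λ"
print("&#{};".format(ord(ch)))      # &#955;
print("&#x{:X};".format(ord(ch)))   # &#x3BB;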
For codes from 0 to 127, the original 7-bit ASCII standard set, most of these characters can be used without a character reference. Codes from 160 to 255 can all be created using character entity names. Only a few higher-numbered codes can be created using entity names, but all can be created by decimal number character reference. Character entity references can also have the format &name; where name is a case-sensitive alphanumeric string. For example, "λ" can also be encoded as &lambda; in an HTML document. The character entity references &lt;, &gt;, &quot; and &amp; are predefined in HTML and SGML, because <, >, " and & are already used to delimit markup. This notably did not include XML's &apos; (') entity prior to HTML5. For a list of all named HTML character entity references along with the versions in which they were introduced, see List of XML and HTML character entity references. Unnecessary use of HTML character references may significantly reduce HTML readability. If the character encoding for a web page is chosen appropriately, then HTML character references are usually only required for markup delimiting characters as mentioned above, and for a few special characters (or none at all if a native Unicode encoding like UTF-8 is used). Incorrect HTML entity escaping may also open up security vulnerabilities for injection attacks such as cross-site scripting. If HTML attributes are left unquoted, certain characters, most importantly whitespace, such as space and tab, must be escaped using entities. Other languages related to HTML have their own methods of escaping characters. XML character references Unlike traditional HTML with its large range of character entity references, in XML there are only five predefined character entity references. These are used to escape characters that are markup sensitive in certain contexts: &amp; (&), &lt; (<), &gt; (>), &quot; (") and &apos; ('). All other character entity references have to be defined before they can be used. For example, use of &eacute; (which gives é, Latin lower-case E with acute accent, U+00E9 in Unicode) in an XML document will generate an error unless the entity has already been defined. XML also requires that the x in hexadecimal numeric references be in lowercase: for example &#xA1b rather than &#XA1b. XHTML, which is an XML application, supports the HTML entity set, along with XML's predefined entities. See also Charset sniffing – used by many browsers when character encoding metadata is not available Unicode and HTML Language code List of XML and HTML character entity references References External links Online HTML entity encoder & decoder tool Character entity references in HTML4 The Definitive Guide to Web Character Encoding HTML Entity Encoding chapter of Browser Security Handbook – more information about current browsers and their entity handling The Open Web Application Security Project's wiki article on cross-site scripting (XSS) HTML World Wide Web Consortium standards
5320
https://en.wikipedia.org/wiki/Carbon%20nanotube
Carbon nanotube
A carbon nanotube (CNT) is a tube made of carbon with a diameter in the nanometer range (nanoscale). They are one of the allotropes of carbon. Single-walled carbon nanotubes (SWCNTs) have diameters around 0.5–2.0 nanometers, about 100,000 times smaller than the width of a human hair. They can be idealized as cutouts from a two-dimensional graphene sheet rolled up to form a hollow cylinder. Multi-walled carbon nanotubes (MWCNTs) consist of nested single-wall carbon nanotubes in a nested, tube-in-tube structure. Double- and triple-walled carbon nanotubes are special cases of MWCNT. Carbon nanotubes can exhibit remarkable properties, such as exceptional tensile strength and thermal conductivity because of their nanostructure and strength of the bonds between carbon atoms. Some SWCNT structures exhibit high electrical conductivity while others are semiconductors. In addition, carbon nanotubes can be chemically modified. These properties are expected to be valuable in many areas of technology, such as electronics, optics, composite materials (replacing or complementing carbon fibers), nanotechnology, and other applications of materials science. The predicted properties for SWCNTs were tantalizing, but a path to synthesizing them was lacking until 1993, when Iijima and Ichihashi at NEC and Bethune et al. at IBM independently discovered that co-vaporizing carbon and transition metals such as iron and cobalt could specifically catalyze SWCNT formation. These discoveries triggered research that succeeded in greatly increasing the efficiency of the catalytic production technique, and led to an explosion of work to characterize and find applications for SWCNTs. Structure of SWNTs Basic details The structure of an ideal (infinitely long) single-walled carbon nanotube is that of a regular hexagonal lattice drawn on an infinite cylindrical surface, whose vertices are the positions of the carbon atoms. Since the length of the carbon-carbon bonds is fairly fixed, there are constraints on the diameter of the cylinder and the arrangement of the atoms on it. In the study of nanotubes, one defines a zigzag path on a graphene-like lattice as a path that turns 60 degrees, alternating left and right, after stepping through each bond. It is also conventional to define an armchair path as one that makes two left turns of 60 degrees followed by two right turns every four steps. On some carbon nanotubes, there is a closed zigzag path that goes around the tube. One says that the tube is of the zigzag type or configuration, or simply is a zigzag nanotube. If the tube is instead encircled by a closed armchair path, it is said to be of the armchair type, or an armchair nanotube. An infinite nanotube that is of the zigzag (or armchair) type consists entirely of closed zigzag (or armchair) paths, connected to each other. The zigzag and armchair configurations are not the only structures that a single-walled nanotube can have. To describe the structure of a general infinitely long tube, one should imagine it being sliced open by a cut parallel to its axis, that goes through some atom A, and then unrolled flat on the plane, so that its atoms and bonds coincide with those of an imaginary graphene sheet—more precisely, with an infinitely long strip of that sheet. The two halves of the atom A will end up on opposite edges of the strip, over two atoms A1 and A2 of the graphene. 
The line from A1 to A2 will correspond to the circumference of the cylinder that went through the atom A, and will be perpendicular to the edges of the strip. In the graphene lattice, the atoms can be split into two classes, depending on the directions of their three bonds. Half the atoms have their three bonds directed the same way, and half have their three bonds rotated 180 degrees relative to the first half. The atoms A1 and A2, which correspond to the same atom A on the cylinder, must be in the same class. It follows that the circumference of the tube and the angle of the strip are not arbitrary, because they are constrained to the lengths and directions of the lines that connect pairs of graphene atoms in the same class. Let u and v be two linearly independent vectors that connect the graphene atom A1 to two of its nearest atoms with the same bond directions. That is, if one numbers consecutive carbons around a graphene cell with C1 to C6, then u can be the vector from C1 to C3, and v be the vector from C1 to C5. Then, for any other atom A2 with the same class as A1, the vector from A1 to A2 can be written as a linear combination n u + m v, where n and m are integers. And, conversely, each pair of integers (n,m) defines a possible position for A2. Given n and m, one can reverse this theoretical operation by drawing the vector w = n u + m v on the graphene lattice, cutting a strip of the latter along lines perpendicular to w through its endpoints A1 and A2, and rolling the strip into a cylinder so as to bring those two points together. If this construction is applied to a pair (k,0), the result is a zigzag nanotube, with closed zigzag paths of 2k atoms. If it is applied to a pair (k,k), one obtains an armchair tube, with closed armchair paths of 4k atoms. Types The structure of the nanotube is not changed if the strip is rotated by 60 degrees around A1 before applying the hypothetical reconstruction above. Such a rotation changes the corresponding pair (n,m) to the pair (−m,n+m). It follows that many possible positions of A2 relative to A1 — that is, many pairs (n,m) — correspond to the same arrangement of atoms on the nanotube. That is the case, for example, of the six pairs (1,2), (−2,3), (−3,1), (−1,−2), (2,−3), and (3,−1). In particular, the pairs (k,0) and (0,k) describe the same nanotube geometry. These redundancies can be avoided by considering only pairs (n,m) such that n > 0 and m ≥ 0; that is, where the direction of the vector w lies between those of u (inclusive) and v (exclusive). It can be verified that every nanotube has exactly one pair (n,m) that satisfies those conditions, which is called the tube's type. Conversely, for every type there is a hypothetical nanotube. In fact, two nanotubes have the same type if and only if one can be conceptually rotated and translated so as to match the other exactly. Instead of the type (n,m), the structure of a carbon nanotube can be specified by giving the length of the vector w (that is, the circumference of the nanotube) and the angle α between the directions of u and w, which may range from 0 (inclusive) to 60 degrees clockwise (exclusive). If the diagram is drawn with u horizontal, the latter is the tilt of the strip away from the vertical. Chirality and mirror symmetry A nanotube is chiral if it has type (n,m), with m > 0 and m ≠ n; then its enantiomer (mirror image) has type (m,n), which is different from (n,m).
This operation corresponds to mirroring the unrolled strip about the line L through A1 that makes an angle of 30 degrees clockwise from the direction of the u vector (that is, with the direction of the vector u+v). The only types of nanotubes that are achiral are the (k,0) "zigzag" tubes and the (k,k) "armchair" tubes. If two enantiomers are to be considered the same structure, then one may consider only types (n,m) with 0 ≤ m ≤ n and n > 0. Then the angle α between u and w, which may range from 0 to 30 degrees (inclusive both), is called the "chiral angle" of the nanotube. Circumference and diameter From n and m one can also compute the circumference c, which is the length of the vector w; it turns out to be c = a·√(n² + nm + m²) ≈ 246·√(n² + nm + m²), in picometres, where a ≈ 246 pm is the common length of the vectors u and v (the graphene lattice constant). The diameter of the tube is then d = c/π ≈ 78.3·√(n² + nm + m²), that is also in picometres. (These formulas are only approximate, especially for small n and m where the bonds are strained; and they do not take into account the thickness of the wall.) The tilt angle α between u and w and the circumference c are related to the type indices n and m by α = arg(2n + m, m√3) and c = a·√(n² + nm + m²), where arg(x,y) is the clockwise angle between the X-axis and the vector (x,y); a function that is available in many programming languages as atan2(y,x). Conversely, given c and α, one can get the type (n,m) by the formulas n = (c/a)·(cos α − sin α/√3) and m = (c/a)·(2 sin α/√3), which must evaluate to integers. Physical limits Narrowest examples If n and m are too small, the structure described by the pair (n,m) will describe a molecule that cannot be reasonably called a "tube", and may not even be stable. For example, the structure theoretically described by the pair (1,0) (the limiting "zigzag" type) would be just a chain of carbons. That is a real molecule, carbyne, which has some characteristics of nanotubes (such as orbital hybridization, high tensile strength, etc.) — but has no hollow space, and may not be obtainable as a condensed phase. The pair (2,0) would theoretically yield a chain of fused 4-cycles; and (1,1), the limiting "armchair" structure, would yield a chain of bi-connected 4-rings. These structures may not be realizable. The thinnest carbon nanotube proper is the armchair structure with type (2,2), which has a diameter of 0.3 nm. This nanotube was grown inside a multi-walled carbon nanotube. Assignment of the carbon nanotube type was done by a combination of high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, and density functional theory (DFT) calculations. The thinnest freestanding single-walled carbon nanotube is about 0.43 nm in diameter. Researchers suggested that it can be either a (5,1) or (4,2) SWCNT, but the exact type of the carbon nanotube remains questionable. (3,3), (4,3), and (5,1) carbon nanotubes (all about 0.4 nm in diameter) were unambiguously identified using aberration-corrected high-resolution transmission electron microscopy inside double-walled CNTs. Length The observation of the longest carbon nanotubes grown so far, around half a metre (550 mm) long, was reported in 2013. These nanotubes were grown on silicon substrates using an improved chemical vapor deposition (CVD) method and represent electrically uniform arrays of single-walled carbon nanotubes. The shortest carbon nanotube can be considered to be the organic compound cycloparaphenylene, which was synthesized in 2008 by Ramesh Jasti. Other small molecule carbon nanotubes have been synthesized since.
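The relations above are easy to check numerically. The following Python sketch (helper names are ours, chosen for illustration) reduces an arbitrary (n,m) pair to the canonical type using the 60-degree rotation map given earlier, and then computes the approximate circumference, diameter and tilt angle, assuming 246 pm for the common length of u and v and ignoring wall thickness and bond strain.

```python
import math

A_PM = 246.0  # assumed length of u and v (graphene lattice constant), in picometres

def rotate60(n: int, m: int) -> tuple[int, int]:
    """One 60-degree rotation of the strip: (n, m) -> (-m, n + m)."""
    return -m, n + m

def canonical_type(n: int, m: int) -> tuple[int, int]:
    """Reduce (n, m) to the equivalent pair with n > 0 and m >= 0 (the tube's type)."""
    for _ in range(6):                       # the rotation has order six
        if n > 0 and m >= 0:
            return n, m
        n, m = rotate60(n, m)
    raise ValueError("(0, 0) does not describe a tube")

def geometry(n: int, m: int) -> tuple[float, float, float]:
    """Approximate circumference (pm), diameter (pm) and angle alpha (degrees)."""
    c = A_PM * math.sqrt(n * n + n * m + m * m)          # |w| = |n u + m v|
    d = c / math.pi
    alpha = math.degrees(math.atan2(m * math.sqrt(3), 2 * n + m))
    return c, d, alpha

# The six pairs listed in the text all reduce to the same type, (1,2):
pairs = [(1, 2), (-2, 3), (-3, 1), (-1, -2), (2, -3), (3, -1)]
print({canonical_type(n, m) for n, m in pairs})          # {(1, 2)}

for n, m in [(10, 0), (6, 6), (6, 5)]:                   # zigzag, armchair, chiral
    c, d, alpha = geometry(n, m)
    print(f"({n},{m}): d ≈ {d / 1000:.2f} nm, α ≈ {alpha:.1f}°")
```

For (10,0) this gives a diameter of about 0.78 nm with α = 0°, and for (6,6) about 0.81 nm with α = 30°, consistent with the zigzag and armchair limits described above.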
Density The highest density of CNTs was achieved in 2013, grown on a conductive titanium-coated copper surface that was coated with co-catalysts cobalt and molybdenum at lower than typical temperatures of 450 °C. The tubes averaged a height of 380 nm and a mass density of 1.6 g cm−3. The material showed ohmic conductivity (lowest resistance ~22 kΩ). Variants There is no consensus on some terms describing carbon nanotubes in scientific literature: both "-wall" and "-walled" are being used in combination with "single", "double", "triple", or "multi", and the letter C is often omitted in the abbreviation, for example, multi-walled carbon nanotube (MWNT). The International Standards Organization uses single-wall or multi-wall in its documents. Multi-walled Multi-walled nanotubes (MWNTs) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal. Double-walled carbon nanotubes (DWNTs) form a special class of nanotubes because their morphology and properties are similar to those of SWNTs but they are more resistant to attacks by chemicals. This is especially important when it is necessary to graft chemical functions to the surface of the nanotubes (functionalization) to add properties to the CNT. Covalent functionalization of SWNTs will break some C=C double bonds, leaving "holes" in the structure on the nanotube and thus modifying both its mechanical and electrical properties. In the case of DWNTs, only the outer wall is modified. DWNT synthesis on the gram-scale by the CCVD technique was first proposed in 2003 from the selective reduction of oxide solutions in methane and hydrogen. The telescopic motion ability of inner shells and their unique mechanical properties will permit the use of multi-walled nanotubes as the main movable arms in upcoming nanomechanical devices. The retraction force that occurs to telescopic motion is caused by the Lennard-Jones interaction between shells, and its value is about 1.5 nN. Junctions and crosslinking Junctions between two or more nanotubes have been widely discussed theoretically. Such junctions are quite frequently observed in samples prepared by arc discharge as well as by chemical vapor deposition. The electronic properties of such junctions were first considered theoretically by Lambin et al., who pointed out that a connection between a metallic tube and a semiconducting one would represent a nanoscale heterojunction. Such a junction could therefore form a component of a nanotube-based electronic circuit. The adjacent image shows a junction between two multiwalled nanotubes. Junctions between nanotubes and graphene have been considered theoretically and studied experimentally. 
Nanotube-graphene junctions form the basis of pillared graphene, in which parallel graphene sheets are separated by short nanotubes. Pillared graphene represents a class of three-dimensional carbon nanotube architectures. Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>100 nm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices, implants, and sensors. Other morphologies Carbon nanobuds are a newly created material combining two previously discovered allotropes of carbon: carbon nanotubes and fullerenes. In this new material, fullerene-like "buds" are covalently bonded to the outer sidewalls of the underlying carbon nanotube. This hybrid material has useful properties of both fullerenes and carbon nanotubes. In particular, they have been found to be exceptionally good field emitters. In composite materials, the attached fullerene molecules may function as molecular anchors preventing slipping of the nanotubes, thus improving the composite's mechanical properties. A carbon peapod is a novel hybrid carbon material which traps fullerene inside a carbon nanotube. It can possess interesting magnetic properties with heating and irradiation. It can also be applied as an oscillator during theoretical investigations and predictions. In theory, a nanotorus is a carbon nanotube bent into a torus (doughnut shape). Nanotori are predicted to have many unique properties, such as magnetic moments 1000 times larger than that previously expected for certain specific radii. Properties such as magnetic moment, thermal stability, etc. vary widely depending on the radius of the torus and the radius of the tube. Graphenated carbon nanotubes are a relatively new hybrid that combines graphitic foliates grown along the sidewalls of multiwalled or bamboo style CNTs. The foliate density can vary as a function of deposition conditions (e.g., temperature and time) with their structure ranging from a few layers of graphene (< 10) to thicker, more graphite-like. The fundamental advantage of an integrated graphene-CNT structure is the high surface area three-dimensional framework of the CNTs coupled with the high edge density of graphene. Depositing a high density of graphene foliates along the length of aligned CNTs can significantly increase the total charge capacity per unit of nominal area as compared to other carbon nanostructures. Cup-stacked carbon nanotubes (CSCNTs) differ from other quasi-1D carbon structures, which normally behave as quasi-metallic conductors of electrons. CSCNTs exhibit semiconducting behavior because of the stacking microstructure of graphene layers. Properties Many properties of single-walled carbon nanotubes depend significantly on the (n,m) type, and this dependence is non-monotonic (see Kataura plot). 
In particular, the band gap can vary from zero to about 2 eV and the electrical conductivity can show metallic or semiconducting behavior. Mechanical Carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus. This strength results from the covalent sp2 bonds formed between the individual carbon atoms. In 2000, a multiwalled carbon nanotube was tested to have a tensile strength of 63 GPa. (For illustration, this translates into the ability to endure tension of a weight equivalent to 6,422 kg on a cable with a cross-section of 1 mm2.) Further studies, such as one conducted in 2008, revealed that individual CNT shells have strengths of up to ≈100 GPa, which is in agreement with quantum/atomistic models. Because carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm3, their specific strength of up to 48,000 kN·m·kg−1 is the best of known materials, compared to high-carbon steel's 154 kN·m·kg−1. Although the strength of individual CNT shells is extremely high, weak shear interactions between adjacent shells and tubes lead to significant reduction in the effective strength of multiwalled carbon nanotubes and carbon nanotube bundles down to only a few GPa. This limitation has been recently addressed by applying high-energy electron irradiation, which crosslinks inner shells and tubes, and effectively increases the strength of these materials to ≈60 GPa for multiwalled carbon nanotubes and ≈17 GPa for double-walled carbon nanotube bundles. CNTs are not nearly as strong under compression. Because of their hollow structure and high aspect ratio, they tend to undergo buckling when placed under compressive, torsional, or bending stress. On the other hand, there was evidence that in the radial direction they are rather soft. The first transmission electron microscope observation of radial elasticity suggested that even van der Waals forces can deform two adjacent nanotubes. Later, nanoindentations with an atomic force microscope were performed by several groups to quantitatively measure radial elasticity of multiwalled carbon nanotubes, and tapping/contact mode atomic force microscopy was also performed on single-walled carbon nanotubes. Young's modulus values on the order of several GPa showed that CNTs are in fact very soft in the radial direction. It was reported in 2020 that CNT-filled polymer nanocomposites with 4 wt% and 6 wt% loadings are optimal concentrations, as they provide a good balance between mechanical properties and the resilience of those properties against UV exposure for the offshore umbilical sheathing layer. Electrical Unlike graphene, which is a two-dimensional semimetal, carbon nanotubes are either metallic or semiconducting along the tubular axis. For a given (n,m) nanotube, if n = m, the nanotube is metallic; if n − m is a multiple of 3 and n ≠ m, then the nanotube is quasi-metallic with a very small band gap; otherwise the nanotube is a moderate semiconductor. Thus, all armchair (n = m) nanotubes are metallic, and nanotubes (6,4), (9,1), etc. are semiconducting. Carbon nanotubes are not semimetallic because the degenerate point (the point where the π [bonding] band meets the π* [anti-bonding] band, at which the energy goes to zero) is slightly shifted away from the K point in the Brillouin zone because of the curvature of the tube surface, causing hybridization between the σ* and π* anti-bonding bands, modifying the band dispersion.
The rule regarding metallic versus semiconductor behavior has exceptions because curvature effects in small-diameter tubes can strongly influence electrical properties. Thus, a (5,0) SWCNT that should be semiconducting in fact is metallic according to the calculations. Likewise, zigzag and chiral SWCNTs with small diameters that should be metallic have a finite gap (armchair nanotubes remain metallic). In theory, metallic nanotubes can carry an electric current density of 4 × 109 A/cm2, which is more than 1,000 times greater than those of metals such as copper, where for copper interconnects, current densities are limited by electromigration. Carbon nanotubes are thus being explored as interconnects and conductivity-enhancing components in composite materials, and many groups are attempting to commercialize highly conducting electrical wire assembled from individual carbon nanotubes. There are significant challenges to be overcome however, such as undesired current saturation under voltage, and the much more resistive nanotube-to-nanotube junctions and impurities, all of which lower the electrical conductivity of the macroscopic nanotube wires by orders of magnitude, as compared to the conductivity of the individual nanotubes. Because of its nanoscale cross-section, electrons propagate only along the tube's axis. As a result, carbon nanotubes are frequently referred to as one-dimensional conductors. The maximum electrical conductance of a single-walled carbon nanotube is 2G0, where G0 = 2e2/h is the conductance of a single ballistic quantum channel. Because of the role of the π-electron system in determining the electronic properties of graphene, doping in carbon nanotubes differs from that of bulk crystalline semiconductors from the same group of the periodic table (e.g., silicon). Graphitic substitution of carbon atoms in the nanotube wall by boron or nitrogen dopants leads to p-type and n-type behavior, respectively, as would be expected in silicon. However, some non-substitutional (intercalated or adsorbed) dopants introduced into a carbon nanotube, such as alkali metals and electron-rich metallocenes, result in n-type conduction because they donate electrons to the π-electron system of the nanotube. By contrast, π-electron acceptors such as FeCl3 or electron-deficient metallocenes function as p-type dopants because they draw π-electrons away from the top of the valence band. Intrinsic superconductivity has been reported, although other experiments found no evidence of this, leaving the claim a subject of debate. In 2021, Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, published department findings on the use of carbon nanotubes to create an electric current. By immersing the structures in an organic solvent, the liquid drew electrons out of the carbon particles. Strano was quoted as saying, "This allows you to do electrochemistry, but with no wires," and represents a significant breakthrough in the technology. Future applications include powering micro- or nanoscale robots, as well as driving alcohol oxidation reactions, which are important in the chemicals industry. Crystallographic defects also affect the tube's electrical properties. A common result is lowered conductivity through the defective region of the tube. A defect in metallic armchair-type tubes (which can conduct electricity) can cause the surrounding region to become semiconducting, and single monatomic vacancies induce magnetic properties. 
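The zone-folding rule stated above (metallic for n = m, quasi-metallic when n − m is a multiple of 3, semiconducting otherwise) and the conductance quantum are simple to encode. The Python sketch below is a minimal illustration that deliberately ignores the curvature corrections just discussed.

```python
def electronic_character(n: int, m: int) -> str:
    """Ideal zone-folding classification of an (n, m) SWCNT; curvature effects
    in very small tubes (e.g. the metallic (5,0) case above) are ignored."""
    if n == m:
        return "metallic (armchair)"
    if (n - m) % 3 == 0:
        return "quasi-metallic (tiny curvature-induced gap)"
    return "semiconducting"

for nm in [(6, 6), (9, 0), (6, 4), (10, 0)]:
    print(nm, electronic_character(*nm))

# Maximum conductance of a single ballistic SWCNT: 2*G0 with G0 = 2e^2/h.
E, H = 1.602176634e-19, 6.62607015e-34
print(f"2*G0 ≈ {2 * 2 * E**2 / H * 1e6:.0f} µS")   # about 155 microsiemens
```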
Optical Carbon nanotubes have useful absorption, photoluminescence (fluorescence), and Raman spectroscopy properties. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes. There is a strong demand for such characterization from the industrial point of view: numerous parameters of nanotube synthesis can be changed, intentionally or unintentionally, to alter the nanotube quality, such as the non-tubular carbon content, structure (chirality) of the produced nanotubes, and structural defects. These features then determine nearly all other significant optical, mechanical, and electrical properties. Carbon nanotube optical properties have been explored for use in applications such as light-emitting diodes (LEDs) and photo-detectors; devices based on a single nanotube have been produced in the lab. Their unique feature is not the efficiency, which is still relatively low, but the narrow selectivity in the wavelength of emission and detection of light and the possibility of its fine tuning through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes. Nanotube fluorescence has been investigated for the purposes of imaging and sensing in biomedical applications. Thermal All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as "ballistic conduction", but good insulators lateral to the tube axis. Measurements show that an individual SWNT has a room-temperature thermal conductivity along its axis of about 3500 W·m−1·K−1; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W·m−1·K−1. An individual SWNT has a room-temperature thermal conductivity lateral to its axis (in the radial direction) of about 1.52 W·m−1·K−1, which is about as thermally conductive as soil. Macroscopic assemblies of nanotubes such as films or fibres have reached up to 1500 W·m−1·K−1 so far. Networks composed of nanotubes show thermal conductivities ranging from insulator-like values of about 0.1 W·m−1·K−1 up to such high values, depending on how much the presence of impurities, misalignment and other factors contributes to the thermal resistance of the system. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air. Crystallographic defects strongly affect the tube's thermal properties. Such defects lead to phonon scattering, which in turn increases the relaxation rate of the phonons. This reduces the mean free path and reduces the thermal conductivity of nanotube structures. Phonon transport simulations indicate that substitutional defects such as nitrogen or boron will primarily lead to scattering of high-frequency optical phonons. However, larger-scale defects such as Stone–Wales defects cause phonon scattering over a wide range of frequencies, leading to a greater reduction in thermal conductivity. Synthesis Techniques have been developed to produce nanotubes in sizeable quantities, including arc discharge, laser ablation, chemical vapor deposition (CVD) and high-pressure carbon monoxide disproportionation (HiPCO). Among these, arc discharge and laser ablation are batch processes, chemical vapor deposition can be run either batch-wise or continuously, and HiPCO is a continuous gas-phase process.
Most of these processes take place in a vacuum or with process gases. The CVD growth method is popular, as it yields high quantity and has a degree of control over diameter, length and morphology. Using particulate catalysts, large quantities of nanotubes can be synthesized by these methods, and industrialisation is well on its way, with several CNT and CNT-fiber factories operating around the world. One problem of CVD processes is the high variability in the nanotubes' characteristics. In the HiPCO process, advances in catalysis and continuous growth are making CNTs more commercially viable. The HiPCO process helps in producing high-purity single-walled carbon nanotubes in higher quantity. The HiPCO reactor operates at high temperature (900–1100 °C) and high pressure (~30–50 bar). It uses carbon monoxide as the carbon source and iron pentacarbonyl or nickel tetracarbonyl as a catalyst. These catalysts provide a nucleation site for the nanotubes to grow, while cheaper iron-based catalysts such as ferrocene can be used for the CVD process. Vertically aligned carbon nanotube arrays are also grown by thermal chemical vapor deposition. A substrate (quartz, silicon, stainless steel, carbon fibers, etc.) is coated with a catalytic metal (Fe, Co, Ni) layer. Typically that layer is iron and is deposited via sputtering to a thickness of 1–5 nm. A 10–50 nm underlayer of alumina is often also put down on the substrate first. This imparts controllable wetting and good interfacial properties. When the substrate is heated to the growth temperature (~600 to 850 °C), the continuous iron film breaks up into small islands, with each island then nucleating a carbon nanotube. The sputtered thickness controls the island size, and this in turn determines the nanotube diameter. Thinner iron layers drive down the diameter of the islands and the diameter of the nanotubes grown. The amount of time the metal islands can sit at the growth temperature is limited, as they are mobile and can merge into larger (but fewer) islands. Annealing at the growth temperature reduces the site density (number of CNT/mm2) while increasing the catalyst diameter. The as-prepared carbon nanotubes always contain impurities such as other forms of carbon (amorphous carbon, fullerene, etc.) and non-carbonaceous impurities (metal used for the catalyst). These impurities need to be removed to make use of the carbon nanotubes in applications. Functionalization CNTs are known to have weak dispersibility in many solvents such as water, as a consequence of strong intermolecular π–π interactions. This hinders the processability of CNTs in industrial applications. In order to tackle the issue, various techniques have been developed to modify the surface of CNTs in order to improve their stability and solubility in water. This enhances the processing and manipulation of insoluble CNTs, rendering them useful for synthesizing innovative CNT nanofluids with impressive properties that are tunable for a wide range of applications. Chemical routes such as covalent functionalization have been studied extensively; these involve the oxidation of CNTs via strong acids (e.g. sulfuric acid, nitric acid, or a mixture of both) in order to introduce carboxylic groups onto the surface of the CNTs as the final product or for further modification by esterification or amination. Free radical grafting is a promising technique among covalent functionalization methods, in which alkyl or aryl peroxides, substituted anilines, and diazonium salts are used as the starting agents.
Free radical grafting of macromolecules (as the functional group) onto the surface of CNTs can improve the solubility of CNTs compared to common acid treatments which involve the attachment of small molecules such as hydroxyl onto the surface of CNTs. The solubility of CNTs can be improved significantly by free-radical grafting because the large functional molecules facilitate the dispersion of CNTs in a variety of solvents even at a low degree of functionalization. Recently an innovative environmentally friendly approach has been developed for the covalent functionalization of multi-walled carbon nanotubes (MWCNTs) using clove buds. This approach is innovative and green because it does not use toxic and hazardous acids which are typically used in common carbon nanomaterial functionalization procedures. The MWCNTs are functionalized in one pot using a free radical grafting reaction. The clove-functionalized MWCNTs are then dispersed in water producing a highly stable multi-walled carbon nanotube aqueous suspension (nanofluids). Modeling Carbon nanotubes are modelled in a similar manner as traditional composites in which a reinforcement phase is surrounded by a matrix phase. Ideal models such as cylindrical, hexagonal and square models are common. The size of the micromechanics model is highly function of the studied mechanical properties. The concept of representative volume element (RVE) is used to determine the appropriate size and configuration of computer model to replicate the actual behavior of CNT reinforced nanocomposite. Depending on the material property of interest (thermal, electrical, modulus, creep), one RVE might predict the property better than the alternatives. While the implementation of ideal model is computationally efficient, they do not represent microstructural features observed in scanning electron microscopy of actual nanocomposites. To incorporate realistic modeling, computer models are also generated to incorporate variability such as waviness, orientation and agglomeration of multiwall or single wall carbon nanotubes. Metrology There are many metrology standards and reference materials available for carbon nanotubes. For single-wall carbon nanotubes, ISO/TS 10868 describes a measurement method for the diameter, purity, and fraction of metallic nanotubes through optical absorption spectroscopy, while ISO/TS 10797 and ISO/TS 10798 establish methods to characterize the morphology and elemental composition of single-wall carbon nanotubes, using transmission electron microscopy and scanning electron microscopy respectively, coupled with energy dispersive X-ray spectrometry analysis. NIST SRM 2483 is a soot of single-wall carbon nanotubes used as a reference material for elemental analysis, and was characterized using thermogravimetric analysis, prompt gamma activation analysis, induced neutron activation analysis, inductively coupled plasma mass spectroscopy, resonant Raman scattering, UV-visible-near infrared fluorescence spectroscopy and absorption spectroscopy, scanning electron microscopy, and transmission electron microscopy. The Canadian National Research Council also offers a certified reference material SWCNT-1 for elemental analysis using neutron activation analysis and inductively coupled plasma mass spectroscopy. NIST RM 8281 is a mixture of three lengths of single-wall carbon nanotube. 
For multiwall carbon nanotubes, ISO/TR 10929 identifies the basic properties and the content of impurities, while ISO/TS 11888 describes morphology using scanning electron microscopy, transmission electron microscopy, viscometry, and light scattering analysis. ISO/TS 10798 is also valid for multiwall carbon nanotubes. Chemical modification Carbon nanotubes can be functionalized to attain desired properties that can be used in a wide variety of applications. The two main methods of carbon nanotube functionalization are covalent and non-covalent modifications. Because of their apparent hydrophobic nature, carbon nanotubes tend to agglomerate hindering their dispersion in solvents or viscous polymer melts. The resulting nanotube bundles or aggregates reduce the mechanical performance of the final composite. The surface of the carbon nanotubes can be modified to reduce the hydrophobicity and improve interfacial adhesion to a bulk polymer through chemical attachment. The surface of carbon nanotubes can be chemically modified by coating spinel nanoparticles by hydrothermal synthesis and can be used for water oxidation purposes. In addition, the surface of carbon nanotubes can be fluorinated or halofluorinated by heating while in contact with a fluoroorganic substance, thereby forming partially fluorinated carbons (so called Fluocar materials) with grafted (halo)fluoroalkyl functionality. Applications Carbon nanotubes are currently used in multiple industrial and consumer applications. These include battery components, polymer composites, to improve the mechanical, thermal and electrical properties of the bulk product, and as a highly absorptive black paint. Many other applications are under development, including field effect transistors for electronics, high-strength fabrics, biosensors for biomedical and agricultural applications, and many others. Current industrial applications Easton-Bell Sports, Inc. have been in partnership with Zyvex Performance Materials, using CNT technology in a number of their bicycle components – including flat and riser handlebars, cranks, forks, seatposts, stems and aero bars. Amroy Europe Oy manufactures Hybtonite carbon nano-epoxy resins where carbon nanotubes have been chemically activated to bond to epoxy, resulting in a composite material that is 20% to 30% stronger than other composite materials. It has been used for wind turbines, marine paints and a variety of sports gear such as skis, ice hockey sticks, baseball bats, hunting arrows, and surfboards. Surrey NanoSystems synthesizes carbon nanotubes to create vantablack ultra-absorptive black paint. "Gecko tape" (also called "nano tape") is often commercially sold as double-sided adhesive tape. It can be used to hang lightweight items such as pictures and decorative items on smooth walls without punching holes in the wall. The carbon nanotube arrays comprising the synthetic setae leave no residue after removal and can stay sticky in extreme temperatures. Tips for atomic force microscope probes. Applications under development Applications of nanotubes in development in academia and industry include: Utilizing carbon nanotubes as the channel material of carbon nanotube field-effect transistors. Using carbon nanotubes as a scaffold for diverse microfabrication techniques. Energy dissipation in self-organized nanostructures under influence of an electric field. Using carbon nanotubes for environmental monitoring due to their active surface area and their ability to absorb gases. 
Jack Andraka used carbon nanotubes in his pancreatic cancer test. His method of testing won the Intel International Science and Engineering Fair Gordon E. Moore Award in the spring of 2012. The Boeing Company has patented the use of carbon nanotubes for structural health monitoring of composites used in aircraft structures. This technology is expected to greatly reduce the risk of an in-flight failure caused by structural degradation of aircraft. Zyvex Technologies has also built a 54-foot maritime vessel, the Piranha Unmanned Surface Vessel, as a technology demonstrator for what is possible using CNT technology. CNTs help improve the structural performance of the vessel, resulting in a lightweight 8,000 lb boat that can carry a payload of 15,000 lb over a range of 2,500 miles. IMEC is using carbon nanotubes for pellicles in semiconductor lithography. In tissue engineering, carbon nanotubes have been used as scaffolding for bone growth. Carbon nanotubes can serve as additives to various structural materials. For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, car parts, and Damascus steel. IBM expected carbon nanotube transistors to be used in integrated circuits by 2020. Potential/Future The strength and flexibility of carbon nanotubes make them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength of an individual multi-walled carbon nanotube has been tested to be 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it. Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>1 mm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices and implants. CNTs are potential candidates for future via and wire material in nano-scale VLSI circuits. Eliminating the electromigration reliability concerns that plague today's Cu interconnects, isolated (single- and multi-wall) CNTs can carry current densities in excess of 1000 MA/cm2 without electromigration damage. Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is an electric wire, and SWNTs with diameters on the order of a nanometre can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FETs). The first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET. Because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to expose half of an SWNT to oxygen and protect the other half from it. The resulting SWNT acts as a NOT logic gate with both p- and n-type FETs in the same molecule.
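To put the current-density figure quoted earlier in this section into perspective, the short sketch below estimates the current a single tube could carry at 1000 MA/cm2. The 1 nm diameter and the solid-cross-section approximation are illustrative assumptions of ours, not values taken from the sources.

```python
import math

J_A_PER_CM2 = 1e9       # 1000 MA/cm^2, the lower bound quoted above
d_cm = 1.0e-7           # assumed tube diameter of 1 nm, expressed in cm

area_cm2 = math.pi * (d_cm / 2) ** 2          # treat the tube as a solid disc
print(f"≈ {J_A_PER_CM2 * area_cm2 * 1e6:.0f} µA per tube")   # ≈ 8 µA
```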
Large quantities of pure CNTs can be made into a freestanding sheet or film by surface-engineered tape-casting (SETC) fabrication technique which is a scalable method to fabricate flexible and foldable sheets with superior properties. Another reported form factor is CNT fiber (a.k.a. filament) by wet spinning. The fiber is either directly spun from the synthesis pot or spun from pre-made dissolved CNTs. Individual fibers can be turned into a yarn. Apart from its strength and flexibility, the main advantage is making an electrically conducting yarn. The electronic properties of individual CNT fibers (i.e. bundle of individual CNT) are governed by the two-dimensional structure of CNTs. The fibers were measured to have a resistivity only one order of magnitude higher than metallic conductors at 300K. By further optimizing the CNTs and CNT fibers, CNT fibers with improved electrical properties could be developed. CNT-based yarns are suitable for applications in energy and electrochemical water treatment when coated with an ion-exchange membrane. Also, CNT-based yarns could replace copper as a winding material. Pyrhönen et al. (2015) have built a motor using CNT winding. Safety and health The National Institute for Occupational Safety and Health (NIOSH) is the leading United States federal agency conducting research and providing guidance on the occupational safety and health implications and applications of nanomaterials. Early scientific studies have indicated that nanoscale particles may pose a greater health risk than bulk materials due to a relative increase in surface area per unit mass. Increase in length and diameter of CNT is correlated to increased toxicity and pathological alterations in lung. The biological interactions of nanotubes are not well understood, and the field is open to continued toxicological studies. It is often difficult to separate confounding factors, and since carbon is relatively biologically inert, some of the toxicity attributed to carbon nanotubes may be instead due to residual metal catalyst contamination. In previous studies, only Mitsui-7 was reliably demonstrated to be carcinogenic, although for unclear/unknown reasons. Unlike many common mineral fibers (such as asbestos), most SWCNTs and MWCNTs do not fit the size and aspect-ratio criteria to be classified as respirable fibers. In 2013, given that the long-term health effects have not yet been measured, NIOSH published a Current Intelligence Bulletin detailing the potential hazards and recommended exposure limit for carbon nanotubes and fibers. The U.S. National Institute for Occupational Safety and Health has determined non-regulatory recommended exposure limits (RELs) of 1 μg/m3 for carbon nanotubes and carbon nanofibers as background-corrected elemental carbon as an 8-hour time-weighted average (TWA) respirable mass concentration. Although CNT caused pulmonary inflammation and toxicity in mice, exposure to aerosols generated from sanding of composites containing polymer-coated MWCNTs, representative of the actual end-product, did not exert such toxicity. As of October 2016, single wall carbon nanotubes have been registered through the European Union's Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulations, based on evaluation of the potentially hazardous properties of SWCNT. Based on this registration, SWCNT commercialization is allowed in the EU up to 10 metric tons. 
Currently, the type of SWCNT registered through REACH is limited to the specific type of single wall carbon nanotubes manufactured by OCSiAl, which submitted the application. History The true identity of the discoverers of carbon nanotubes is a subject of some controversy. A 2006 editorial written by Marc Monthioux and Vladimir Kuznetsov in the journal Carbon described the origin of the carbon nanotube. A large percentage of academic and popular literature attributes the discovery of hollow, nanometre-size tubes composed of graphitic carbon to Sumio Iijima of NEC in 1991. His paper initiated a flurry of excitement and could be credited with inspiring the many scientists now studying applications of carbon nanotubes. Though Iijima has been given much of the credit for discovering carbon nanotubes, it turns out that the timeline of carbon nanotubes goes back much further than 1991. In 1952, L. V. Radushkevich and V. M. Lukyanovich published clear images of 50 nanometre diameter tubes made of carbon in the Journal of Physical Chemistry Of Russia. This discovery was largely unnoticed, as the article was published in Russian, and Western scientists' access to Soviet press was limited during the Cold War. Monthioux and Kuznetsov mentioned in their Carbon editorial: In 1976, Morinobu Endo of CNRS observed hollow tubes of rolled up graphite sheets synthesised by a chemical vapour-growth technique. The first specimens observed would later come to be known as single-walled carbon nanotubes (SWNTs). Endo, in his early review of vapor-phase-grown carbon fibers (VPCF), also reminded us that he had observed a hollow tube, linearly extended with parallel carbon layer faces near the fiber core. This appears to be the observation of multi-walled carbon nanotubes at the center of the fiber. The mass-produced MWCNTs today are strongly related to the VPGCF developed by Endo. In fact, they call it the "Endo-process", out of respect for his early work and patents. In 1979, John Abrahamson presented evidence of carbon nanotubes at the 14th Biennial Conference of Carbon at Pennsylvania State University. The conference paper described carbon nanotubes as carbon fibers that were produced on carbon anodes during arc discharge. A characterization of these fibers was given, as well as hypotheses for their growth in a nitrogen atmosphere at low pressures. In 1981, a group of Soviet scientists published the results of chemical and structural characterization of carbon nanoparticles produced by a thermocatalytic disproportionation of carbon monoxide. Using TEM images and XRD patterns, the authors suggested that their "carbon multi-layer tubular crystals" were formed by rolling graphene layers into cylinders. They speculated that via this rolling, many different arrangements of graphene hexagonal nets are possible. They suggested two such possible arrangements: circular arrangement (armchair nanotube); and a spiral, helical arrangement (chiral tube). In 1987, Howard G. Tennent of Hyperion Catalysis was issued a U.S. patent for the production of "cylindrical discrete carbon fibrils" with a "constant diameter between about 3.5 and about 70 nanometers..., length 102 times the diameter, and an outer region of multiple essentially continuous layers of ordered carbon atoms and a distinct inner core...." 
Helping to create the initial excitement associated with carbon nanotubes were Iijima's 1991 discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods; and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made, they would exhibit remarkable conducting properties. Nanotube research accelerated greatly following the independent discoveries by Iijima and Ichihashi at NEC and Bethune et al. at IBM of methods to specifically produce single-walled carbon nanotubes by adding transition-metal catalysts to the carbon in an arc discharge. Thess et al. refined this catalytic method by vaporizing the carbon/transition-metal combination in a high temperature furnace, which greatly improved the yield and purity of the SWNTs and made them widely available for characterization and application experiments. The arc discharge technique, well known to produce the famed Buckminsterfullerene , thus played a role in the discoveries of both multi- and single-wall nanotubes, extending the run of serendipitous discoveries relating to fullerenes. The discovery of nanotubes remains a contentious issue. Many believe that Iijima's report in 1991 is of particular importance because it brought carbon nanotubes into the awareness of the scientific community as a whole. In 2020, during archaeological excavation of Keezhadi in Tamil Nadu, India, ~2600-year-old pottery was discovered whose coatings appear to contain carbon nanotubes. The robust mechanical properties of the nanotubes are partially why the coatings have lasted for so many years, say the scientists. See also Buckypaper Carbide-derived carbon Carbon nanocone Carbon nanofibers Carbon nanoscrolls Carbon nanotube computer Carbon nanotubes in photovoltaics Colossal carbon tube Diamond nanothread Filamentous carbon Molecular modelling Nanoflower Ninithi (nanotube modelling software) Optical properties of carbon nanotubes Organic semiconductor References This article incorporates public domain text from National Institute of Environmental Health Sciences (NIEHS) as quoted. External links Nanocarbon: From Graphene to Buckyballs. Interactive 3D models of cyclohexane, benzene, graphene, graphite, chiral & non-chiral nanotubes, and C60 Buckyballs - WeCanFigureThisOut.org. The Nanotube site . Last updated 2013.04.12 EU Marie Curie Network CARBIO: Multifunctional carbon nanotubes for biomedical applications C60 and Carbon Nanotubes a short video explaining how nanotubes can be made from modified graphite sheets and the three different types of nanotubes that are formed Learning module for Bandstructure of Carbon Nanotubes and Nanoribbons Selection of free-download articles on carbon nanotubes WOLFRAM Demonstrations Project: Electronic Band Structure of a Single-Walled Carbon Nanotube by the Zone-Folding Method WOLFRAM Demonstrations Project: Electronic Structure of a Single-Walled Carbon Nanotube in Tight-Binding Wannier Representation Electrospinning Allotropes of carbon Emerging technologies Transparent electrodes Refractory materials Space elevator Discovery and invention controversies Nanomaterials
5321
https://en.wikipedia.org/wiki/Czech%20Republic
Czech Republic
The Czech Republic, also known as Czechia, is a landlocked country in Central Europe. Historically known as Bohemia, it is bordered by Austria to the south, Germany to the west, Poland to the northeast, and Slovakia to the southeast. The Czech Republic has a hilly landscape that covers an area of with a mostly temperate continental and oceanic climate. The capital and largest city is Prague; other major cities and urban areas include Brno, Ostrava, Plzeň and Liberec. The Duchy of Bohemia was founded in the late 9th century under Great Moravia. It was formally recognized as an Imperial State of the Holy Roman Empire in 1002 and became a kingdom in 1198. Following the Battle of Mohács in 1526, all of the Crown lands of Bohemia were gradually integrated into the Habsburg monarchy. Nearly a hundred years later, the Protestant Bohemian Revolt led to the Thirty Years' War. After the Battle of White Mountain, the Habsburgs consolidated their rule. With the dissolution of the Holy Roman Empire in 1806, the Crown lands became part of the Austrian Empire. In the 19th century, the Czech lands became more industrialized, and in 1918 most of it became part of the First Czechoslovak Republic following the collapse of Austria-Hungary after World War I. Czechoslovakia was the only country in Central and Eastern Europe to remain a parliamentary democracy during the entirety of the interwar period. After the Munich Agreement in 1938, Nazi Germany systematically took control over the Czech lands. Czechoslovakia was restored in 1945 and three years later became an Eastern Bloc communist state following a coup d'état in 1948. Attempts to liberalize the government and economy were suppressed by a Soviet-led invasion of the country during the Prague Spring in 1968. In November 1989, the Velvet Revolution ended communist rule in the country and restored democracy. On 31 December 1992, Czechoslovakia was peacefully dissolved, with its constituent states becoming the independent states of the Czech Republic and Slovakia. The Czech Republic is a unitary parliamentary republic and developed country with an advanced, high-income social market economy. It is a welfare state with a European social model, universal health care and free-tuition university education. It ranks 32nd in the Human Development Index. The Czech Republic is a member of the United Nations, NATO, the European Union, the OECD, the OSCE, the Council of Europe and the Visegrád Group. Name The traditional English name "Bohemia" derives from Latin: Boiohaemum, which means "home of the Boii" (a Gallic tribe). The current English name ultimately comes from the Czech word . The name comes from the Slavic tribe () and, according to legend, their leader Čech, who brought them to Bohemia, to settle on Říp Mountain. The etymology of the word can be traced back to the Proto-Slavic root , meaning "member of the people; kinsman", thus making it cognate to the Czech word (a person). The country has been traditionally divided into three lands, namely Bohemia () in the west, Moravia () in the east, and Czech Silesia (; the smaller, south-eastern part of historical Silesia, most of which is located within modern Poland) in the northeast. Known as the lands of the Bohemian Crown since the 14th century, a number of other names for the country have been used, including Czech/Bohemian lands, Bohemian Crown, Czechia and the lands of the Crown of Saint Wenceslaus. 
When the country regained its independence after the dissolution of the Austro-Hungarian empire in 1918, the new name of Czechoslovakia was coined to reflect the union of the Czech and Slovak nations within one country. After Czechoslovakia dissolved on the last day of 1992, was adopted as the Czech short name for the new state and the Ministry of Foreign Affairs of the Czech Republic recommended Czechia for the English-language equivalent. This form was not widely adopted at the time, leading to the long name Czech Republic being used in English in nearly all circumstances. The Czech government directed use of Czechia as the official English short name in 2016. The short name has been listed by the United Nations and is used by other organizations such as the European Union, NATO, the CIA, Google Maps, and the European Broadcasting Union. In 2022, the American AP Stylebook stated in its entry on the country that "Czechia, the Czech Republic. Both are acceptable. The shorter name Czechia is preferred by the Czech government. If using Czechia, clarify in the story that the country is more widely known in English as the Czech Republic." History Prehistory Archaeologists have found evidence of prehistoric human settlements in the area, dating back to the Paleolithic era. In the classical era, as a result of the 3rd century BC Celtic migrations, Bohemia became associated with the Boii. The Boii founded an oppidum near the site of modern Prague. Later in the 1st century, the Germanic tribes of the Marcomanni and Quadi settled there. Slavs from the Black Sea–Carpathian region settled in the area (their migration was pushed by an invasion of peoples from Siberia and Eastern Europe into their area: Huns, Avars, Bulgars and Magyars). In the sixth century, the Huns had moved westwards into Bohemia, Moravia, and some of present-day Austria and Germany. During the 7th century, the Frankish merchant Samo, supporting the Slavs fighting against nearby settled Avars, became the ruler of the first documented Slavic state in Central Europe, Samo's Empire. The principality of Great Moravia, controlled by Moymir dynasty, arose in the 8th century. It reached its zenith in the 9th (during the reign of Svatopluk I of Moravia), holding off the influence of the Franks. Great Moravia was Christianized, with a role being played by the Byzantine mission of Cyril and Methodius. They codified the Old Church Slavonic language, the first literary and liturgical language of the Slavs, and the Glagolitic script. Bohemia The Duchy of Bohemia emerged in the late 9th century when it was unified by the Přemyslid dynasty. Bohemia was from 1002 until 1806 an Imperial Estate of the Holy Roman Empire. In 1212, Přemysl Ottokar I extracted the Golden Bull of Sicily from the emperor, confirming Ottokar and his descendants' royal status; the Duchy of Bohemia was raised to a Kingdom. German immigrants settled in the Bohemian periphery in the 13th century. The Mongols in the invasion of Europe carried their raids into Moravia but were defensively defeated at Olomouc. After a series of dynastic wars, the House of Luxembourg gained the Bohemian throne. Efforts for a reform of the church in Bohemia started already in the late 14th century. Jan Hus' followers seceded from some practices of the Roman Church and in the Hussite Wars (1419–1434) defeated five crusades organized against them by Sigismund. During the next two centuries, 90% of the population in Bohemia and Moravia were considered Hussites. 
The pacifist thinker Petr Chelčický inspired the movement of the Moravian Brethren (by the middle of the 15th century) that completely separated from the Roman Catholic Church. On 21 December 1421, Jan Žižka, a successful military commander and mercenary, led his forces in the Battle of Kutná Hora, resulting in a victory for the Hussites. He is honoured to this day as a national hero.
After 1526, Bohemia came increasingly under Habsburg control as the Habsburgs became first the elected and then, in 1627, the hereditary rulers of Bohemia. Between 1583 and 1611 Prague was the official seat of the Holy Roman Emperor Rudolf II and his court. The Defenestration of Prague and subsequent revolt against the Habsburgs in 1618 marked the start of the Thirty Years' War. In 1620, the rebellion in Bohemia was crushed at the Battle of White Mountain and the ties between Bohemia and the Habsburgs' hereditary lands in Austria were strengthened. The leaders of the Bohemian Revolt were executed in 1621. The Protestant nobility and middle class had to either convert to Catholicism or leave the country.
The following era, from 1620 to the late 18th century, became known as the "Dark Age". During the Thirty Years' War, the population of the Czech lands declined by a third through the expulsion of Czech Protestants as well as due to the war, disease and famine. The Habsburgs prohibited all Christian confessions other than Catholicism. The flowering of Baroque culture shows the ambiguity of this historical period. Ottoman Turks and Tatars invaded Moravia in 1663. In 1679–1680 the Czech lands faced the Great Plague of Vienna and an uprising of serfs. There were further peasant uprisings influenced by famine. Serfdom was abolished between 1781 and 1848. Several battles of the Napoleonic Wars took place on the current territory of the Czech Republic.
The end of the Holy Roman Empire in 1806 led to the degradation of the political status of Bohemia, which lost its position as an electorate of the Holy Roman Empire as well as its own political representation in the Imperial Diet. The Bohemian lands became part of the Austrian Empire. During the 18th and 19th centuries the Czech National Revival began its rise, with the purpose of reviving the Czech language, culture, and national identity. The Revolution of 1848 in Prague, striving for liberal reforms and autonomy of the Bohemian Crown within the Austrian Empire, was suppressed. It seemed that some concessions would also be made to Bohemia, but in the end the Emperor Franz Joseph I effected a compromise with Hungary only. The Austro-Hungarian Compromise of 1867 and the never-realized coronation of Franz Joseph as King of Bohemia led to the disappointment of some Czech politicians. The Bohemian Crown lands became part of the so-called Cisleithania. Czech Social Democratic and progressive politicians started the fight for universal suffrage. The first elections under universal male suffrage were held in 1907.
Czechoslovakia
In 1918, during the collapse of the Habsburg monarchy at the end of World War I, the independent republic of Czechoslovakia, which joined the winning Allied powers, was created, with Tomáš Garrigue Masaryk in the lead. This new country incorporated the Bohemian Crown. The First Czechoslovak Republic comprised only 27% of the population of the former Austria-Hungary, but nearly 80% of the industry, which enabled it to compete with Western industrial states. In 1929, compared to 1913, the gross domestic product had increased by 52% and industrial production by 41%.
In 1938, Czechoslovakia held 10th place in world industrial production. Czechoslovakia was the only country in Central and Eastern Europe to remain a liberal democracy throughout the entire interwar period. Although the First Czechoslovak Republic was a unitary state, it provided certain rights to its minorities, the largest being Germans (23.6% in 1921), Hungarians (5.6%) and Ukrainians (3.5%).
Western Czechoslovakia was occupied by Nazi Germany, which placed most of the region into the Protectorate of Bohemia and Moravia. The Protectorate was proclaimed part of the Third Reich, and the president and prime minister were subordinated to Nazi Germany's Reichsprotektor. One Nazi concentration camp was located within the Czech territory at Terezín, north of Prague. The vast majority of the Protectorate's Jews were murdered in Nazi-run concentration camps. The Nazis called for the extermination, expulsion, Germanization or enslavement of most or all Czechs for the purpose of providing more living space for the German people. There was Czechoslovak resistance to the Nazi occupation, as well as reprisals against the Czechoslovaks for their anti-Nazi resistance. The German occupation ended on 9 May 1945, with the arrival of the Soviet and American armies and the Prague uprising. Most of Czechoslovakia's German-speakers were forcibly expelled from the country, first as a result of local acts of violence and then under the aegis of an "organized transfer" confirmed by the Soviet Union, the United States, and Great Britain at the Potsdam Conference.
In the 1946 elections, the Communist Party gained 38% of the votes and became the largest party in the Czechoslovak parliament, formed a coalition with other parties, and consolidated power. A coup d'état came in 1948 and a single-party government was formed. For the next 41 years, the Czechoslovak Communist state conformed to Eastern Bloc economic and political features. The Prague Spring political liberalization was stopped by the 1968 Warsaw Pact invasion of Czechoslovakia. Analysts believe that the invasion caused the communist movement to fracture, ultimately leading to the Revolutions of 1989.
Czech Republic
In November 1989, Czechoslovakia again became a liberal democracy through the Velvet Revolution. However, Slovak national aspirations strengthened (the Hyphen War) and on 31 December 1992, the country peacefully split into the independent countries of the Czech Republic and Slovakia. Both countries went through economic reforms and privatizations with the intention of creating a market economy, a process they had been pursuing since 1990, when Czechs and Slovaks still shared a common state. This process was largely successful; in 2006 the Czech Republic was recognized by the World Bank as a "developed country", and in 2009 the Human Development Index ranked it as a nation of "Very High Human Development".
From 1991, the Czech Republic, originally as part of Czechoslovakia and since 1993 in its own right, has been a member of the Visegrád Group and, from 1995, of the OECD. The Czech Republic joined NATO on 12 March 1999 and the European Union on 1 May 2004. On 21 December 2007 the Czech Republic joined the Schengen Area. Until 2017, either the centre-left Czech Social Democratic Party or the centre-right Civic Democratic Party led the governments of the Czech Republic.
In October 2017, the populist movement ANO 2011, led by the country's second-richest man, Andrej Babiš, won the elections with three times more votes than its closest rival, the Civic Democrats. In December 2017, Czech president Miloš Zeman appointed Andrej Babiš as the new prime minister. In the 2021 elections, ANO 2011 was narrowly defeated and Petr Fiala became the new prime minister. He formed a government coalition of the alliance SPOLU (Civic Democratic Party, KDU-ČSL and TOP 09) and the alliance of Pirates and Mayors. In January 2023, retired general Petr Pavel won the presidential election, becoming the new Czech president and succeeding Miloš Zeman. Following the 2022 Russian invasion of Ukraine, the country took in half a million Ukrainian refugees, the largest number per capita in the world.
Geography
The Czech Republic lies mostly between latitudes 48° and 51° N and longitudes 12° and 19° E. Bohemia, to the west, consists of a basin drained by the Elbe (Labe) and the Vltava rivers, surrounded by mostly low mountains, such as the Krkonoše range of the Sudetes. The highest point in the country, Sněžka, is located here. Moravia, the eastern part of the country, is also hilly. It is drained mainly by the Morava River, but it also contains the source of the Oder River (Odra).
Water from the Czech Republic flows to three different seas: the North Sea, the Baltic Sea, and the Black Sea. The Czech Republic also leases the Moldauhafen, a lot in the middle of the Hamburg Docks, which was awarded to Czechoslovakia by Article 363 of the Treaty of Versailles to allow the landlocked country a place where goods transported down river could be transferred to seagoing ships. The territory reverts to Germany in 2028.
Phytogeographically, the Czech Republic belongs to the Central European province of the Circumboreal Region, within the Boreal Kingdom. According to the World Wide Fund for Nature, the territory of the Czech Republic can be subdivided into four ecoregions: the Western European broadleaf forests, Central European mixed forests, Pannonian mixed forests, and Carpathian montane conifer forests. There are four national parks in the Czech Republic. The oldest is Krkonoše National Park (a Biosphere Reserve), and the others are Šumava National Park (a Biosphere Reserve), Podyjí National Park, and Bohemian Switzerland.
The three historical lands of the Czech Republic (formerly lands of the Bohemian Crown) correspond with river basins: the Elbe and Vltava basins for Bohemia, the Morava basin for Moravia, and the Oder basin for Czech Silesia (in terms of the Czech territory).
Climate
The Czech Republic has a temperate climate, situated in the transition zone between the oceanic and continental climate types, with warm summers and cold, cloudy and snowy winters. The temperature difference between summer and winter is due to the landlocked geographical position. Temperatures vary depending on the elevation. In general, at higher altitudes the temperatures decrease and precipitation increases. The wettest area in the Czech Republic is found around Bílý Potok in the Jizera Mountains and the driest region is the Louny District to the northwest of Prague. Another factor is the distribution of the mountains. Average temperatures are lowest at the highest peak, Sněžka, and highest in the lowlands of the South Moravian Region. The country's capital, Prague, has a similar average temperature to the lowlands, although this is influenced by urban factors.
The coldest month is usually January, followed by February and December. During these months, there is snow in the mountains and sometimes in the cities and lowlands. During March, April, and May, the temperature usually increases, especially during April, when the temperature and weather tend to vary during the day. Spring is also characterized by higher water levels in the rivers, due to melting snow, with occasional flooding.
The warmest month of the year is July, followed by August and June. On average, summer temperatures are markedly higher than those in winter. Summer is also characterized by rain and storms. Autumn generally begins in September, which is still warm and dry. During October, temperatures usually fall and deciduous trees begin to shed their leaves. By the end of November, temperatures usually range around the freezing point. The coldest temperature ever measured was recorded in Litvínovice near České Budějovice in 1929, and the hottest in Dobřichovice in 2012.
Most rain falls during the summer. Sporadic rainfall occurs throughout the year (in Prague, the average number of days per month with at least some measurable rain varies from 12 in September and October to 16 in November), but concentrated, heavier rainfall is more frequent in the months of May to August (on average around two such days per month). Severe thunderstorms, producing damaging straight-line winds, hail, and occasional tornadoes, occur especially during the summer period.
Environment
As of 2020, the Czech Republic ranks as the 21st most environmentally conscious country in the world in the Environmental Performance Index. It had a 2018 Forest Landscape Integrity Index mean score of 1.71/10, ranking it 160th globally out of 172 countries. The Czech Republic has four national parks (Šumava National Park, Krkonoše National Park, České Švýcarsko National Park, Podyjí National Park) and 25 Protected Landscape Areas.
Government
The Czech Republic is a pluralist multi-party parliamentary representative democracy. The Parliament (Parlament České republiky) is bicameral, with the Chamber of Deputies (Poslanecká sněmovna, 200 members) and the Senate (Senát, 81 members). The members of the Chamber of Deputies are elected for a four-year term by proportional representation, with a 5% election threshold. There are 14 voting districts, identical to the country's administrative regions. The Chamber of Deputies, the successor to the Czech National Council, has the powers and responsibilities of the now defunct federal parliament of the former Czechoslovakia. The members of the Senate are elected in single-seat constituencies by two-round runoff voting for a six-year term, with one-third of the seats elected every even-numbered year in the autumn. This arrangement is modeled on the U.S. Senate, but each constituency is roughly the same size and the voting system used is a two-round runoff.
The president is a formal head of state with limited and specific powers, who appoints the prime minister, as well as the other members of the cabinet on a proposal by the prime minister. From 1993 until 2012, the President of the Czech Republic was selected by a joint session of the parliament for a five-year term, with no more than two consecutive terms (twice Václav Havel, twice Václav Klaus). Since 2013, the president has been elected directly. Some commentators have argued that, with the introduction of the direct election of the President, the Czech Republic has moved away from the parliamentary system and towards a semi-presidential one.
The Government's exercise of executive power derives from the Constitution. The members of the government are the Prime Minister, the deputy prime ministers and the other ministers. The Government is responsible to the Chamber of Deputies. The Prime Minister is the head of government and wields powers such as the right to set the agenda for most foreign and domestic policy and to choose government ministers.
Holders of the principal state offices:
President: Petr Pavel (Independent), since 9 March 2023
President of the Senate: Miloš Vystrčil (ODS), since 19 February 2020
President of the Chamber of Deputies: Markéta Pekarová Adamová (TOP 09), since 10 November 2021
Prime Minister: Petr Fiala (ODS), since 28 November 2021
Law
The Czech Republic is a unitary state with a civil law system based on the continental type, rooted in Germanic legal culture. The basis of the legal system is the Constitution of the Czech Republic, adopted in 1993. The current Penal Code has been effective since 2010, and a new Civil Code became effective in 2014. The court system includes district, county, and supreme courts and is divided into civil, criminal, and administrative branches. The Czech judiciary has a triumvirate of supreme courts. The Constitutional Court consists of 15 constitutional judges and oversees violations of the Constitution by either the legislature or the government. The Supreme Court is formed of 67 judges and is the court of highest appeal for most legal cases heard in the Czech Republic. The Supreme Administrative Court decides on issues of procedural and administrative propriety. It also has jurisdiction over certain political matters, such as the formation and closure of political parties, jurisdictional boundaries between government entities, and the eligibility of persons to stand for public office. The Supreme Court and the Supreme Administrative Court are both based in Brno, as is the Supreme Public Prosecutor's Office.
Foreign relations
The Czech Republic has ranked as one of the safest and most peaceful countries for the past few decades. It is a member of the United Nations, the European Union, NATO, the OECD and the Council of Europe, and is an observer to the Organization of American States. The embassies of most countries with diplomatic relations with the Czech Republic are located in Prague, while consulates are located across the country. Czech passport holders face few visa restrictions: according to the 2018 Henley & Partners Visa Restrictions Index, Czech citizens have visa-free access to 173 countries, which ranks them 7th along with Malta and New Zealand. The World Tourism Organization ranks the Czech passport 24th. The US Visa Waiver Program applies to Czech nationals.
The Prime Minister and the Minister of Foreign Affairs have primary roles in setting foreign policy, although the President also has influence and represents the country abroad. Membership in the European Union and NATO is central to the Czech Republic's foreign policy. The Office for Foreign Relations and Information (ÚZSI) serves as the foreign intelligence agency responsible for espionage and foreign policy briefings, as well as the protection of the Czech Republic's embassies abroad. The Czech Republic has ties with Slovakia, Poland and Hungary as a member of the Visegrád Group, as well as with Germany, Israel, the United States and the European Union and its members. Since 2020, relations with democratic states in Asia, such as Taiwan, have been strengthened.
By contrast, the Czech Republic has long had poor relations with Russia, and since 2021 it has appeared on Russia's official list of unfriendly countries. The Czech Republic also has problematic relations with China. Czech officials have supported dissenters in Belarus, Moldova, Myanmar and Cuba.
Famous Czech diplomats of the past included Jaroslav Lev of Rožmitál, Humprecht Jan Czernin, Count Philip Kinsky of Wchinitz and Tettau, Wenzel Anton, Prince of Kaunitz-Rietberg, Prince Karl Philipp Schwarzenberg, Alois Lexa von Aehrenthal, Ottokar Czernin, Edvard Beneš, Jan Masaryk, Jiří Hájek, Jiří Dienstbier, Michael Žantovský, Petr Kolář, Alexandr Vondra, Prince Karel Schwarzenberg and Petr Pavel.
Military
The Czech armed forces consist of the Czech Land Forces, the Czech Air Force and specialized support units. The armed forces are managed by the Ministry of Defence. The President of the Czech Republic is Commander-in-chief of the armed forces. In 2004 the army transformed itself into a fully professional organization and compulsory military service was abolished. The country has been a member of NATO since 12 March 1999. Defence spending is approximately 1.28% of GDP (2021). The armed forces are charged with protecting the Czech Republic and its allies, promoting global security interests, and contributing to NATO.
Currently, as a member of NATO, the Czech military participates in the Resolute Support and KFOR operations and has soldiers in Afghanistan, Mali, Bosnia and Herzegovina, Kosovo, Egypt, Israel and Somalia. The Czech Air Force has also served in the Baltic states and Iceland. The main equipment of the Czech military includes JAS 39 Gripen multi-role fighters, Aero L-159 Alca combat aircraft, Mi-35 attack helicopters, armored vehicles (Pandur II, OT-64, OT-90, BVP-2) and tanks (T-72 and T-72M4CZ).
The most famous Czech and Czechoslovak soldiers and military leaders of the past were Ottokar II of Bohemia, John of Bohemia, Jan Žižka, Albrecht von Wallenstein, Karl Philipp, Prince of Schwarzenberg, Joseph Radetzky von Radetz, Josef Šnejdárek, Heliodor Píka, Ludvík Svoboda, Jan Kubiš, Jozef Gabčík, František Fajtl and Petr Pavel.
Human rights
Human rights in the Czech Republic are guaranteed by the Charter of Fundamental Rights and Freedoms and by international treaties on human rights. Nevertheless, there have been cases of human rights violations, such as discrimination against Roma children, for which the European Commission asked the Czech Republic to provide an explanation, or the illegal sterilization of Roma women, for which the government apologized. Prague is the seat of Radio Free Europe/Radio Liberty; at the beginning of the 1990s, Václav Havel personally invited the station to Czechoslovakia, and today it is based at Hagibor. Same-sex couples can enter into a "registered partnership" in the Czech Republic, but conducting a same-sex marriage is not legal under current Czech law.
The best-known Czech activists and supporters of human rights include Berta von Suttner, born in Prague, who won the Nobel Peace Prize for her pacifist struggle; the philosopher and first Czechoslovak president Tomáš Garrigue Masaryk; the student Jan Palach, who set himself on fire in 1969 in protest against the Soviet occupation; Karel Schwarzenberg, who was chairman of the International Helsinki Committee for Human Rights between 1984 and 1990; Václav Havel, long-time dissident and later president; the sociologist and dissident Jiřina Šiklová; and Šimon Pánek, founder and director of the People in Need organization.
Administrative divisions
Since 2000, the Czech Republic has been divided into thirteen regions (Czech: kraje, singular kraj) and the capital city of Prague. Every region has its own elected regional assembly and a regional governor. In Prague, the powers of the regional assembly and governor are exercised by the city council and the mayor.
The older seventy-six districts (okresy, singular okres), including three "statutory cities" (excluding Prague, which had a special status), lost most of their importance in 1999 in an administrative reform; they remain as territorial divisions and seats of various branches of state administration. The smallest administrative units are obce (municipalities). As of 2021, the Czech Republic is divided into 6,254 municipalities. Cities and towns are also municipalities. The capital city of Prague is a region and a municipality at the same time.
Economy
The Czech Republic has a developed, high-income, export-oriented social market economy based on services, manufacturing and innovation, which maintains a welfare state and the European social model. The Czech Republic participates in the European Single Market as a member of the European Union and is therefore a part of the economy of the European Union, but uses its own currency, the Czech koruna, instead of the euro. Its per capita GDP is 91% of the EU average and it is a member of the OECD. Monetary policy is conducted by the Czech National Bank, whose independence is guaranteed by the Constitution. The Czech Republic ranks 12th in the UN inequality-adjusted Human Development Index and 24th in the World Bank Human Capital Index. It was described by The Guardian as "one of Europe's most flourishing economies".
The country's GDP per capita is $51,329 at purchasing power parity and $29,856 at nominal value. According to Allianz A.G., in 2018 the country was an MWC (mean wealth country), ranking 26th in net financial assets. The country experienced 4.5% GDP growth in 2017. The 2016 unemployment rate was the lowest in the EU at 2.4%, and the 2016 poverty rate was the second lowest among OECD members. The Czech Republic ranks 27th in the 2021 Index of Economic Freedom, 31st in the 2023 Global Innovation Index (down from 24th in 2016), 29th in the Global Competitiveness Report, and 25th in the Global Enabling Trade Report. The Czech Republic has a diverse economy that ranks 7th in the 2016 Economic Complexity Index.
The industrial sector accounts for 37.5% of the economy, while services account for 60% and agriculture for 2.5%. The largest trading partner for both exports and imports is Germany, and the EU in general. Dividends worth CZK 270 billion were paid to the foreign owners of Czech companies in 2017, which has become a political issue. The country has been a member of the Schengen Area since 1 May 2004, and it abolished border controls, completely opening its borders with all of its neighbors, on 21 December 2007.
Industry
The largest companies by revenue in the Czech Republic have included: the automobile manufacturer Škoda Auto, the utility company ČEZ Group, the conglomerate Agrofert, the energy trading company EPH, the oil processing company Unipetrol, the electronics manufacturer Foxconn CZ and the steel producer Moravia Steel. Other Czech transportation companies include Škoda Transportation (tramways, trolleybuses, metro), Tatra (heavy trucks, the second oldest car maker in the world), Avia (medium trucks), Karosa and SOR Libchavy (buses), Aero Vodochody (military aircraft), Let Kunovice (civil aircraft), Zetor (tractors), Jawa Moto (motorcycles) and Čezeta (electric scooters). Škoda Transportation is the fourth largest tram producer in the world; nearly one third of all trams in the world come from Czech factories. The Czech Republic is also the world's largest vinyl records manufacturer, with GZ Media producing about 6 million pieces annually in Loděnice. Česká zbrojovka is among the ten largest firearms producers in the world and among the five that produce automatic weapons. In the food industry, Czech companies include Agrofert, Kofola and Hamé.
Energy
Production of Czech electricity exceeds consumption by about 10 TWh per year, and the excess is exported. Nuclear power presently provides about 30 percent of total power needs, and its share is projected to increase to 40 percent. In 2005, 65.4 percent of electricity was produced by steam and combustion power plants (mostly coal), 30 percent by nuclear plants, and 4.6 percent came from renewable sources, including hydropower. The largest Czech power source is the Temelín Nuclear Power Station; another nuclear power plant is located in Dukovany.
The Czech Republic is reducing its dependence on highly polluting low-grade brown coal as a source of energy. Natural gas is purchased from Norwegian companies and, as liquefied natural gas (LNG), from the Netherlands and Belgium. In the past, three-quarters of gas supplies came from Russia, but after the outbreak of the war in Ukraine, the government gradually stopped these supplies. Gas consumption (approx. 100 TWh in 2003–2005) is almost double electricity consumption. South Moravia has small oil and gas deposits.
Transportation infrastructure
The Czech Republic has an extensive road network, part of which consists of motorways. The speed limit is 50 km/h within towns, 90 km/h outside of towns and 130 km/h on motorways. The Czech Republic has one of the densest rail networks in the world; the network includes electrified as well as non-electrified lines and both single-track and double- or multiple-track lines. České dráhy (the Czech Railways) is the main railway operator in the country, with about 180 million passengers carried yearly. Maximum speed on the network is limited to 160 km/h.
Václav Havel Airport in Prague is the main international airport in the country. In 2019, it handled 17.8 million passengers. In total, the Czech Republic has 91 airports, six of which provide international air services. The public international airports are in Brno, Karlovy Vary, Mnichovo Hradiště, Mošnov (near Ostrava), Pardubice and Prague. The non-public international airports capable of handling airliners are in Kunovice and Vodochody. Russia (via pipelines through Ukraine) and, to a lesser extent, Norway (via pipelines through Germany) have historically supplied the Czech Republic with liquid and natural gas.
Communications and IT
The Czech Republic ranks in the top 10 countries worldwide with the fastest average internet speed.
By the beginning of 2008, there were over 800 mostly local WISPs, with about 350,000 subscribers in 2007. Plans based on either GPRS, EDGE, UMTS or CDMA2000 are being offered by all three mobile phone operators (T-Mobile, O2, Vodafone) and the internet provider U:fon. Government-owned Český Telecom slowed down broadband penetration. At the beginning of 2004, local-loop unbundling began and alternative operators started to offer ADSL and also SDSL. This, and the later privatization of Český Telecom, helped drive down prices. On 1 July 2006, Český Telecom was acquired by the Spanish Telefónica group and adopted the new name Telefónica O2 Czech Republic. VDSL and ADSL2+ are offered in several variants, with download speeds of up to 50 Mbit/s and upload speeds of up to 5 Mbit/s. Cable internet is gaining in popularity with its higher download speeds, ranging from 50 Mbit/s to 1 Gbit/s.
Two computer security companies, Avast and AVG, were founded in the Czech Republic. In 2016, Avast, led by Pavel Baudiš, bought its rival AVG for US$1.3 billion; together, at the time, these companies had a user base of about 400 million people and 40% of the consumer market outside of China. Avast is the leading provider of antivirus software, with a 20.5% market share.
Tourism
Prague is the fifth most visited city in Europe after London, Paris, Istanbul and Rome. In 2001, the total earnings from tourism reached 118 billion CZK, making up 5.5% of GNP and 9% of overall export earnings. The industry employs more than 110,000 people – over 1% of the population. Guidebooks and tourists report overcharging by taxi drivers and pickpocketing problems, mainly in Prague, though the situation has improved recently. Since 2005, Prague's mayor, Pavel Bém, has worked to improve this reputation by cracking down on petty crime and, aside from these problems, Prague is a "safe" city. The Czech Republic's crime rate is described by the United States State Department as "low".
One of the tourist attractions in the Czech Republic is Lower Vítkovice (Dolní Vítkovice) in Ostrava. The Czech Republic has 16 UNESCO World Heritage Sites, three of which are transnational, and a further 14 sites are on the tentative list. Architectural heritage is an object of interest to visitors – it includes castles and châteaux from different historical epochs, namely Karlštejn Castle, Český Krumlov and the Lednice–Valtice Cultural Landscape. There are 12 cathedrals and 15 churches elevated to the rank of basilica by the Pope, as well as many monasteries. Away from the towns, areas such as Bohemian Paradise, the Bohemian Forest and the Giant Mountains attract visitors seeking outdoor pursuits. There are a number of beer festivals. The country is also known for its various museums. Puppetry and marionette exhibitions are popular, with a number of puppet festivals held throughout the country. Aquapalace Prague in Čestlice is the largest water park in the country.
Science
The Czech lands have a long and well-documented history of scientific innovation. Today, the Czech Republic has a highly sophisticated, developed, high-performing, innovation-oriented scientific community supported by the government, industry, and leading universities. Czech scientists are embedded members of the global scientific community. They contribute annually to multiple international academic journals and collaborate with their colleagues across boundaries and fields. The Czech Republic was ranked 24th in the Global Innovation Index in 2020 and 2021, up from 26th in 2019.
Historically, the Czech lands, especially Prague, have been a seat of scientific discovery going back to early modern times, with figures including Tycho Brahe, Nicolaus Copernicus, and Johannes Kepler. In 1784 the scientific community was first formally organized under the charter of the Royal Czech Society of Sciences. Currently, this organization is known as the Czech Academy of Sciences. Similarly, the Czech lands have a well-established history of scientists, including the Nobel laureate biochemists Gerty and Carl Ferdinand Cori, the chemists Jaroslav Heyrovský and Otto Wichterle, the physicists Ernst Mach and Peter Grünberg, the physiologist Jan Evangelista Purkyně and the chemist Antonín Holý. Sigmund Freud, the founder of psychoanalysis, was born in Příbor; Gregor Mendel, the founder of genetics, was born in Hynčice and spent most of his life in Brno; and the logician and mathematician Kurt Gödel was born in Brno.
Historically, most scientific research was recorded in Latin, but from the 18th century onwards increasingly in German and later in Czech, and archived in libraries supported and managed by religious groups and other denominations, as evidenced by historical locations of international renown and heritage such as the Strahov Monastery and the Clementinum in Prague. Increasingly, Czech scientists publish their work, and studies of its history, in English.
Important current scientific institutions include the already mentioned Czech Academy of Sciences, the CEITEC institute in Brno, and the HiLASE and ELI Beamlines centres in Dolní Břežany, home to the most powerful laser in the world. Prague is the seat of the administrative centre of the GSA, the agency operating the European navigation system Galileo, now the European Union Agency for the Space Programme.
Demographics
The total fertility rate (TFR) in 2020 was estimated at 1.71 children per woman, which is below the replacement rate of 2.1. The Czech Republic's population has an average age of 43.3 years. Life expectancy in 2021 was estimated at 79.5 years (76.55 years for males, 82.61 years for females). About 77,000 people immigrate to the Czech Republic annually. Vietnamese immigrants began settling in the country during the Communist period, when they were invited as guest workers by the Czechoslovak government. In 2009, there were about 70,000 Vietnamese in the Czech Republic. Most decide to stay in the country permanently.
According to the results of the 2021 census, the majority of the inhabitants of the Czech Republic are Czechs (57.3%), followed by Moravians (3.4%), Slovaks (0.9%), Ukrainians (0.7%), Vietnamese (0.3%), Poles (0.3%), Russians (0.2%), Silesians (0.1%) and Germans (0.1%). Another 4.0% declared a combination of two nationalities (3.6% a combination of Czech and another nationality). As nationality was an optional item, a large share of people (31.6%) left this field blank. According to some estimates, there are about 250,000 Romani people in the Czech Republic. The Polish minority resides mainly in the Trans-Olza region.
There were 658,564 foreigners residing in the country in 2021, according to the Czech Statistical Office, with the largest groups being Ukrainians (22%), Slovaks (22%), Vietnamese (12%), Russians (7%) and Germans (4%). Most of the foreign population lives in Prague (37.3%) and the Central Bohemian Region (13.2%). The Jewish population of Bohemia and Moravia, 118,000 according to the 1930 census, was nearly annihilated by the Nazis during the Holocaust. There were approximately 3,900 Jews in the Czech Republic in 2021.
The former Czech prime minister Jan Fischer is of Jewish faith.
Religion
About 75% to 79% of residents of the Czech Republic do not declare any religion or faith in surveys, and the proportion of convinced atheists (30%) is the third highest in the world, behind those of China (47%) and Japan (31%). The Czech people have been historically characterized as "tolerant and even indifferent towards religion". The religious identity of the country has changed drastically since the first half of the 20th century, when more than 90% of Czechs were Christians.
Christianization in the 9th and 10th centuries introduced Catholicism. After the Bohemian Reformation, most Czechs became followers of Jan Hus, Petr Chelčický and other regional Protestant Reformers. Taborites and Utraquists were Hussite groups. Towards the end of the Hussite Wars, the Utraquists changed sides and allied with the Catholic Church. Following the joint Utraquist–Catholic victory, Utraquism was accepted as a distinct form of Christianity to be practiced in Bohemia by the Catholic Church, while all remaining Hussite groups were prohibited. After the Reformation, some Bohemians, especially Sudeten Germans, followed the teachings of Martin Luther. In the wake of the Reformation, Utraquist Hussites took a renewed, increasingly anti-Catholic stance, while some of the defeated Hussite factions were revived. After the Habsburgs regained control of Bohemia, the whole population was forcibly converted to Catholicism, even the Utraquist Hussites. Since then, Czechs have become more wary of and pessimistic about religion as such. A history of resistance to the Catholic Church followed. The Church suffered a schism with the neo-Hussite Czechoslovak Hussite Church in 1920, lost the bulk of its adherents during the Communist era and continues to lose them in the modern, ongoing secularization. Protestantism never recovered after the Counter-Reformation was introduced by the Austrian Habsburgs in 1620.
Prior to the Holocaust, the Czech Republic had a sizable Jewish community of around 100,000. There are many historically important and culturally significant synagogues in the Czech Republic, such as Europe's oldest active synagogue, the Old New Synagogue, and the second largest synagogue in Europe, the Great Synagogue in Plzeň. The Holocaust decimated Czech Jewry; as of 2021, the Jewish population numbers about 3,900.
According to the 2011 census, 34% of the population stated they had no religion, 10.3% were Catholic, 0.8% were Protestant (0.5% Czech Brethren and 0.4% Hussite), and 9% followed other forms of religion, whether denominational or not (of which 863 people answered that they are Pagan). 45% of the population did not answer the question about religion. From 1991 to 2001, and further to 2011, adherence to Catholicism decreased from 39% to 27% and then to 10%; Protestantism similarly declined from 3.7% to 2% and then to 0.8%. The Muslim population is estimated at 20,000, representing 0.2% of the population. The proportion of religious believers varies significantly across the country, from 55% in the Zlín Region to 16% in the Ústí nad Labem Region.
Education and health care
Education in the Czech Republic is compulsory for nine years and citizens have access to free-tuition university education, while the average number of years of education is 13.1. Additionally, the Czech Republic has a "relatively equal" educational system in comparison with other countries in Europe.
Founded in 1348, Charles University was the first university in Central Europe. Other major universities in the country are Masaryk University, the Czech Technical University, Palacký University, the Academy of Performing Arts and the University of Economics. The Programme for International Student Assessment, coordinated by the OECD, currently ranks the Czech education system as the 15th most successful in the world, higher than the OECD average. The UN Education Index ranks the Czech Republic 10th (positioned behind Denmark and ahead of South Korea).
Health care in the Czech Republic is similar in quality to that of other developed nations. The Czech universal health care system is based on a compulsory insurance model, with fee-for-service care funded by mandatory employment-related insurance plans. According to the 2016 Euro Health Consumer Index, a comparison of healthcare in Europe, Czech healthcare ranks 13th, behind Sweden and two positions ahead of the United Kingdom.
Culture
Art
The Venus of Dolní Věstonice is a treasure of prehistoric art. Theodoric of Prague was a painter in the Gothic era who decorated Karlštejn Castle. In the Baroque era, there were Wenceslaus Hollar, Jan Kupecký, Karel Škréta, Anton Raphael Mengs and Petr Brandl, and the sculptors Matthias Braun and Ferdinand Brokoff. In the first half of the 19th century, Josef Mánes joined the Romantic movement. In the second half of the 19th century, the so-called "National Theatre generation" had the main say: the sculptor Josef Václav Myslbek and the painters Mikoláš Aleš, Václav Brožík, Vojtěch Hynais and Julius Mařák. At the end of the century came a wave of Art Nouveau, with Alfons Mucha as its main representative. He is known for his Art Nouveau posters and his cycle of 20 large canvases named the Slav Epic, which depicts the history of the Czechs and other Slavs. The Slav Epic can be seen in the Veletržní Palace of the National Gallery in Prague, which manages the largest collection of art in the Czech Republic. Max Švabinský was another Art Nouveau painter.
The 20th century brought an avant-garde revolution, in the Czech lands mainly expressionist and cubist: Josef Čapek, Emil Filla, Bohumil Kubišta and Jan Zrzavý. Surrealism emerged particularly in the work of Toyen, Josef Šíma and Karel Teige. Internationally, however, it was mainly František Kupka, a pioneer of abstract painting, who made his mark. Josef Lada, Zdeněk Burian and Emil Orlík gained fame as illustrators and cartoonists in the first half of the 20th century. Art photography became a new field (František Drtikol, Josef Sudek, later Jan Saudek and Josef Koudelka). The Czech Republic is also known for its individually made, mouth-blown, and decorated Bohemian glass.
Architecture
The earliest preserved stone buildings in Bohemia and Moravia date back to the time of the Christianization in the 9th and 10th centuries. Since the Middle Ages, the Czech lands have used the same architectural styles as most of Western and Central Europe. The oldest still standing churches were built in the Romanesque style. During the 13th century, it was replaced by the Gothic style. In the 14th century, Emperor Charles IV invited architects from France and Germany, Matthias of Arras and Peter Parler, to his court in Prague. During the Middle Ages, fortified castles were built by the king and aristocracy, as well as some monasteries. The Renaissance style penetrated the Bohemian Crown in the late 15th century, when the older Gothic style started to be mixed with Renaissance elements.
An example of pure Renaissance architecture in Bohemia is the Queen Anne's Summer Palace, situated in the garden of Prague Castle. Evidence of the general reception of the Renaissance in Bohemia, involving an influx of Italian architects, can be found in spacious chateaus with arcade courtyards and geometrically arranged gardens. Emphasis was placed on comfort, and buildings built for entertainment purposes also appeared.
In the 17th century, the Baroque style spread throughout the Crown of Bohemia. In the 18th century, Bohemia produced an architectural peculiarity – the Baroque Gothic style, a synthesis of the Gothic and Baroque styles. The 19th century was marked by revival architectural styles: some churches were restored to their presumed medieval appearance, and buildings were constructed in the Neo-Romanesque, Neo-Gothic and Neo-Renaissance styles. At the turn of the 19th and 20th centuries, a new art style, Art Nouveau, appeared in the Czech lands. Bohemia contributed an unusual style to the world's architectural heritage when Czech architects attempted to transpose the Cubism of painting and sculpture into architecture.
Between World Wars I and II, Functionalism, with its sober, progressive forms, took over as the main architectural style. After World War II and the Communist coup in 1948, art in Czechoslovakia became Soviet-influenced. The Czechoslovak avant-garde artistic movement known as the Brussels style emerged during the political liberalization of Czechoslovakia in the 1960s. Brutalism dominated in the 1970s and 1980s. The Czech Republic has not shied away from more modern trends in international architecture; examples include the Dancing House (Tančící dům) and the Golden Angel in Prague and the Congress Centre in Zlín. Influential Czech architects include Peter Parler, Benedikt Rejt, Jan Santini Aichel, Kilian Ignaz Dientzenhofer, Josef Fanta, Josef Hlávka, Josef Gočár, Pavel Janák, Jan Kotěra, Věra Machoninová, Karel Prager, Karel Hubáček, Jan Kaplický, Eva Jiřičná and Josef Pleskot.
Literature
The literature from the area of today's Czech Republic was mostly written in Czech, but also in Latin and German, or even in Old Church Slavonic. Franz Kafka, although a competent user of Czech, wrote in his mother tongue, German; his works include The Trial and The Castle. In the second half of the 13th century, the royal court in Prague became one of the centers of German Minnesang and courtly literature. Czech German-language literature was especially prominent in the first half of the 20th century.
Bible translations played a role in the development of Czech literature. The oldest Czech translation of the Psalms originated in the late 13th century, and the first complete Czech translation of the Bible was finished around 1360. The first complete printed Czech Bible was published in 1488. The first complete Czech Bible translation from the original languages was published between 1579 and 1593. The Codex Gigas from the 12th century is the largest extant medieval manuscript in the world.
Czech-language literature can be divided into several periods: the Middle Ages; the Hussite period; Renaissance humanism; the Baroque period; the Enlightenment and Czech reawakening in the first half of the 19th century; modern literature in the second half of the 19th century; the avant-garde of the interwar period; the years under Communism; and the Czech Republic. The antiwar comedy novel The Good Soldier Švejk is the most translated Czech book in history.
The international literary award the Franz Kafka Prize is awarded in the Czech Republic. The Czech Republic has the densest network of libraries in Europe. Czech literature and culture have played a role on at least two occasions when Czechs lived under oppression and political activity was suppressed. On both of these occasions, in the early 19th century and then again in the 1960s, the Czechs used their cultural and literary effort to strive for political freedom, establishing a confident, politically aware nation.
Music
The musical tradition of the Czech lands arose from the first church hymns, the earliest evidence of which dates to the turn of the 10th and 11th centuries. Among the oldest pieces of Czech music are two chorales, which in their time performed the function of anthems: "Lord, Have Mercy on Us" and the hymn "Saint Wenceslaus" or "Saint Wenceslaus Chorale". The authorship of the anthem "Lord, Have Mercy on Us" is ascribed by some historians to Saint Adalbert of Prague (sv. Vojtěch), bishop of Prague, who lived between 956 and 997.
The wealth of musical culture lies in the classical music tradition of all historical periods, especially the Baroque, Classical, Romantic and modern classical eras, and in the traditional folk music of Bohemia, Moravia and Silesia. Since the early era of art music, Czech musicians and composers have been influenced by the folk music and dance of the region. Czech music has been influential in both the European and the worldwide context, having several times co-determined or even determined a newly arriving era in musical art, above all in the Classical era, as well as through original contributions in Baroque, Romantic and modern classical music. Some notable Czech musical works are The Bartered Bride, the New World Symphony, Sinfonietta and Jenůfa.
A major music festival in the country is the Prague Spring International Music Festival of classical music, a permanent showcase for performing artists, symphony orchestras and chamber music ensembles of the world.
Theatre
The roots of Czech theatre can be found in the Middle Ages, especially in the cultural life of the Gothic period. In the 19th century, the theatre played a role in the national awakening movement and later, in the 20th century, it became a part of modern European theatre art. An original Czech cultural phenomenon came into being at the end of the 1950s: the project Laterna magika, whose productions combined theatre, dance and film in a poetic manner and which is considered the first multimedia art project in an international context. A notable drama is Karel Čapek's play R.U.R., which introduced the word "robot". The country has a tradition of puppet theatre. In 2016, Czech and Slovak puppetry was included on the UNESCO Intangible Cultural Heritage Lists.
Film
The tradition of Czech cinematography started in the second half of the 1890s. Peaks of production in the era of silent movies include the historical drama The Builder of the Temple and the social and erotic drama Erotikon, directed by Gustav Machatý. The early Czech sound film era was productive, above all in mainstream genres, with the comedies of Martin Frič and Karel Lamač; dramatic films were also sought after internationally. Hermína Týrlová was a prominent Czech animator, screenwriter, and film director, often called the mother of Czech animation. Over the course of her career, she produced over 60 animated children's short films using puppets and the technique of stop motion animation.
Before the German occupation, in 1933, the filmmaker and animator Irena Dodalová established the first Czech animation studio, "IRE Film", with her husband Karel Dodal.
After the period of Nazi occupation and the early communist official dramaturgy of socialist realism in film at the turn of the 1940s and 1950s, with few exceptions such as Krakatit or Men Without Wings (internationally awarded in 1946), a new era of Czech film began with animated works, released in anglophone countries from 1958 under the name The Fabulous World of Jules Verne, which combined acted drama with animation, and with Jiří Trnka, the founder of the modern puppet film. This began a tradition of animated films (the Mole, etc.).
In the 1960s, the hallmarks of the Czechoslovak New Wave films were improvised dialogue, black and absurd humour and the casting of non-actors. Directors tried to preserve a natural atmosphere without refinement or artificial arrangement of scenes. A distinctive personality of the 1960s and the beginning of the 1970s, with an original style and psychological depth, was František Vláčil. Another internationally known author is Jan Švankmajer, a filmmaker and artist whose work spans several media. He is a self-labeled surrealist known for his animations and features.
The Barrandov Studios in Prague are the largest film studios with film locations in the country. Filmmakers have come to Prague to shoot scenery no longer found in Berlin, Paris and Vienna. The city of Karlovy Vary was used as a location for the 2006 James Bond film Casino Royale. The Czech Lion is the highest Czech award for film achievement. The Karlovy Vary International Film Festival is one of the film festivals that have been given competitive status by the FIAPF. Other film festivals held in the country include Febiofest, the Jihlava International Documentary Film Festival, the One World Film Festival, the Zlín Film Festival and the Fresh Film Festival.
Media
Czech journalists and media enjoy a degree of freedom. There are restrictions against writing in support of Nazism, racism or violating Czech law. The Czech press was ranked as the 40th most free press in the World Press Freedom Index by Reporters Without Borders in 2021. Radio Free Europe/Radio Liberty has its headquarters in Prague.
The national public television service is Czech Television, which operates the 24-hour news channel ČT24 and the news website ct24.cz. As of 2020, Czech Television is the most watched broadcaster, followed by the private channels TV Nova and Prima TV. However, TV Nova has the most watched main news program and prime time program. Other public services include Czech Radio and the Czech News Agency. The best-selling daily national newspapers in 2020/21 were Blesk (an average of 703,000 daily readers), Mladá fronta DNES (461,000), Právo (182,000), Lidové noviny (163,000) and Hospodářské noviny (162,000). Most Czechs (87%) read their news online, with Seznam.cz, iDNES.cz, Novinky.cz, iPrima.cz and Seznam Zprávy.cz being the most visited sites as of 2021.
Cuisine
Czech cuisine is marked by an emphasis on meat dishes with pork, beef, and chicken. Goose, duck, rabbit, and venison are also served. Fish is less common, with the occasional exception of fresh trout and carp, which is served at Christmas. There is a variety of local sausages, wurst, pâtés, and smoked and cured meats.
Czech desserts include a variety of whipped cream, chocolate, and fruit pastries and tarts, crêpes, creme desserts and cheese, poppy-seed-filled and other types of traditional cakes such as buchty, koláče and štrúdl.
Czech beer has a history extending more than a millennium; the earliest known brewery existed in 993. Today the Czech Republic has the highest beer consumption per capita in the world. The pilsner style of beer (pils) originated in Plzeň, where the world's first blond lager, Pilsner Urquell, is still produced. It has served as the inspiration for more than two-thirds of the beer produced in the world today. The city of České Budějovice has similarly lent its name to its beer, known as Budweiser Budvar. The South Moravian region has been producing wine since the Middle Ages; about 94% of vineyards in the Czech Republic are Moravian. Aside from beer, slivovitz and wine, the Czech Republic also produces two liquors, Fernet Stock and Becherovka. Kofola is a non-alcoholic domestic cola soft drink which competes with Coca-Cola and Pepsi.
Sport
The two leading sports in the Czech Republic are football and ice hockey. The most watched sporting events are the Olympic tournament and the World Championships of ice hockey. Other popular sports include tennis, volleyball, floorball, golf, ball hockey, athletics, basketball and skiing. The country has won 15 gold medals in the Summer Olympics and nine in the Winter Games. The Czech ice hockey team won the gold medal at the 1998 Winter Olympics and has won twelve gold medals at the World Championships, including three straight from 1999 to 2001.
Škoda Motorsport has been engaged in competition racing since 1901 and has gained a number of titles with various vehicles around the world. The MTX automobile company manufactured racing and formula cars from 1969.
Hiking is a popular sport. The word for 'tourist' in Czech, turista, also means 'trekker' or 'hiker'. Thanks to a more than 120-year-old tradition, hikers can rely on the Czech Hiking Markers System of trail blazing, which has been adopted by countries worldwide. There is a network of around 40,000 km of marked short- and long-distance trails crossing the whole country and all the Czech mountains.
5322
https://en.wikipedia.org/wiki/Czechoslovakia
Czechoslovakia
Czechoslovakia (Czech and Slovak: Československo, Česko-Slovensko) was a landlocked state in Central Europe, created in 1918, when it declared its independence from Austria-Hungary. In 1938, after the Munich Agreement, the Sudetenland became part of Nazi Germany, while the country lost further territories to Hungary and Poland (Carpathian Ruthenia to Hungary and Zaolzie to Poland). Between 1939 and 1945, the state ceased to exist, as Slovakia proclaimed its independence and the remaining territories in the east became part of Hungary, while in the remainder of the Czech Lands the German Protectorate of Bohemia and Moravia was proclaimed. In 1939, after the outbreak of World War II, former Czechoslovak president Edvard Beneš formed a government-in-exile and sought recognition from the Allies.
After World War II, Czechoslovakia was reestablished under its pre-1938 borders, with the exception of Carpathian Ruthenia, which became part of the Ukrainian SSR (a republic of the Soviet Union). The Communist Party seized power in a coup in 1948. From 1948 to 1989, Czechoslovakia was part of the Eastern Bloc with a planned economy. Its economic status was formalized in membership of Comecon from 1949, and its defense status in the Warsaw Pact of 1955. A period of political liberalization in 1968, the Prague Spring, ended violently when the Soviet Union, assisted by other Warsaw Pact countries, invaded Czechoslovakia. In 1989, as Marxist–Leninist governments and communism were ending all over Central and Eastern Europe, Czechoslovaks peacefully deposed their communist government during the Velvet Revolution, which began on 17 November 1989 and ended 11 days later, on 28 November, when all of the top Communist leaders and the Communist Party itself resigned. On 31 December 1992, Czechoslovakia peacefully split into the two sovereign states of the Czech Republic and Slovakia.
Characteristics
Form of state
1918–1937: A democratic republic championed by Tomáš Masaryk.
1938–1939: After the annexation of the Sudetenland by Nazi Germany in 1938, the region gradually turned into a state with loosened connections among the Czech, Slovak, and Ruthenian parts. A strip of southern Slovakia and Carpathian Ruthenia was annexed by Hungary, and the Trans-Olza region was annexed by Poland.
1939–1945: The remainder of the state was dismembered and split into the Protectorate of Bohemia and Moravia and the Slovak Republic, while the rest of Carpathian Ruthenia was occupied and annexed by Hungary. A government-in-exile continued to exist in London, supported by the United Kingdom, the United States and their Allies; after the German invasion of the Soviet Union, it was also recognized by the Soviet Union. Czechoslovakia adhered to the Declaration by United Nations and was a founding member of the United Nations.
1946–1948: The country was governed by a coalition government with communist ministers, including the prime minister and the minister of interior. Carpathian Ruthenia was ceded to the Soviet Union.
1948–1989: The country became a Marxist-Leninist state under Soviet domination with a command economy. In 1960, the country officially became a socialist republic, the Czechoslovak Socialist Republic. It was a satellite state of the Soviet Union.
1969–1990: Czechoslovakia formally became a federal republic comprising the Czech Socialist Republic and the Slovak Socialist Republic. In late 1989, communist rule came to an end during the Velvet Revolution, followed by the re-establishment of a democratic parliamentary republic.
1990–1992: Shortly after the Velvet Revolution, the state was renamed the Czech and Slovak Federative Republic, consisting of the Czech Republic and the Slovak Republic (Slovakia), until the peaceful dissolution on 31 December 1992. Neighbors Austria 1918–1938, 1945–1992 Germany (both predecessors, West Germany and East Germany, were neighbors between 1949 and 1990) Hungary Poland Romania 1918–1938 Soviet Union 1945–1991 Ukraine 1991–1992 (Soviet Union member until 1991) Topography The country had generally irregular terrain. The western area was part of the north-central European uplands. The eastern region was composed of the northern reaches of the Carpathian Mountains and lands of the Danube River basin. Climate The climate featured mild winters and mild summers, influenced by the Atlantic Ocean from the west, the Baltic Sea from the north, and the Mediterranean Sea from the south, with little markedly continental weather. Names 1918–1938: Czechoslovak Republic (abbreviated ČSR), or Czechoslovakia, before the formalization of the name in 1920, also known as Czecho-Slovakia or the Czecho-Slovak state 1938–1939: Czecho-Slovak Republic, or Czecho-Slovakia 1945–1960: Czechoslovak Republic (ČSR), or Czechoslovakia 1960–1990: Czechoslovak Socialist Republic (ČSSR), or Czechoslovakia 1990: Czechoslovak Federative Republic (ČSFR) 1990–1992: Czech and Slovak Federative Republic (ČSFR), or Czechoslovakia History Origins The area was part of the Austro-Hungarian Empire until it collapsed at the end of World War I. The new state was founded by Tomáš Garrigue Masaryk, who served as its first president from 14 November 1918 to 14 December 1935. He was succeeded by his close ally Edvard Beneš (1884–1948). The roots of Czech nationalism go back to the 19th century, when philologists and educators, influenced by Romanticism, promoted the Czech language and pride in the Czech people. Nationalism became a mass movement in the second half of the 19th century. Taking advantage of the limited opportunities for participation in political life under Austrian rule, Czech leaders such as historian František Palacký (1798–1876) founded various patriotic, self-help organizations which provided a chance for many of their compatriots to participate in communal life before independence. Palacký supported Austro-Slavism and worked for a reorganized federal Austrian Empire, which would protect the Slavic-speaking peoples of Central Europe against Russian and German threats. An advocate of democratic reform and Czech autonomy within Austria-Hungary, Masaryk was elected twice to the Reichsrat (Austrian Parliament), from 1891 to 1893 for the Young Czech Party, and from 1907 to 1914 for the Czech Realist Party, which he had founded in 1889 with Karel Kramář and Josef Kaizl. During World War I a number of Czechs and Slovaks, the Czechoslovak Legions, fought with the Allies in France and Italy, while large numbers deserted to Russia in exchange for its support for the independence of Czechoslovakia from the Austrian Empire. With the outbreak of World War I, Masaryk began working for Czech independence in a union with Slovakia. With Edvard Beneš and Milan Rastislav Štefánik, Masaryk visited several Western countries and won support from influential publicists. The Czechoslovak National Council was the main organization that advanced the claims for a Czechoslovak state. First Czechoslovak Republic Formation The Bohemian Kingdom ceased to exist in 1918 when it was incorporated into Czechoslovakia. 
Czechoslovakia was founded in October 1918, as one of the successor states of the Austro-Hungarian Empire at the end of World War I and as part of the Treaty of Saint-Germain-en-Laye. It consisted of the present-day territories of Bohemia, Moravia, Slovakia and Carpathian Ruthenia, corresponding to modern-day Czechia, Slovakia, and a region of Ukraine. Its territory included some of the most industrialized regions of the former Austria-Hungary. Ethnicity The new country was a multi-ethnic state, with Czechs and Slovaks as constituent peoples. The population consisted of Czechs (51%), Slovaks (16%), Germans (22%), Hungarians (5%) and Rusyns (4%). Many of the Germans, Hungarians, Ruthenians and Poles, and some Slovaks, felt oppressed because the political elite did not generally allow political autonomy for minority ethnic groups. This policy led to unrest among the non-Czech population, particularly in German-speaking Sudetenland, which initially had proclaimed itself part of the Republic of German-Austria in accordance with the self-determination principle. The state proclaimed the official ideology that there were no separate Czech and Slovak nations, but only one nation of Czechoslovaks (see Czechoslovakism), to the disagreement of Slovaks and other ethnic groups. Once a unified Czechoslovakia was restored after World War II (after the country had been divided during the war), the conflict between the Czechs and the Slovaks surfaced again. The governments of Czechoslovakia and other Central European nations deported ethnic Germans, reducing the presence of minorities in the nation. Most of the Jews had been killed during the war by the Nazis. In censuses, many Jews identified themselves as Germans or Hungarians (being counted as Jews by religion rather than ethnicity), so such figures could sum to more than 100%. Interwar period During the period between the two world wars Czechoslovakia was a democratic state. The population was generally literate, and contained fewer alienated groups. The influence of these conditions was augmented by the political values of Czechoslovakia's leaders and the policies they adopted. Under Tomáš Masaryk, Czech and Slovak politicians promoted progressive social and economic conditions that served to defuse discontent. Foreign minister Beneš became the prime architect of the Czechoslovak-Romanian-Yugoslav alliance (the "Little Entente", 1921–38) directed against Hungarian attempts to reclaim lost areas. Beneš worked closely with France. Far more dangerous was the German element, which after 1933 became allied with the Nazis in Germany. Czech-Slovak relations came to be a central issue in Czechoslovak politics during the 1930s. The increasing feeling of inferiority among the Slovaks, who were hostile to the more numerous Czechs, weakened the country in the late 1930s. Slovakia became autonomous in the fall of 1938, and by mid-1939, Slovakia had become independent, with the First Slovak Republic set up as a satellite state of Nazi Germany and the far-right Slovak People's Party in power. After 1933, Czechoslovakia remained the only democracy in central and eastern Europe. Munich Agreement and Two-Step German Occupation In September 1938, Adolf Hitler demanded control of the Sudetenland. On 29 September 1938, Britain and France ceded the region to Germany at the Munich Conference as part of their policy of appeasement; France ignored the military alliance it had with Czechoslovakia. During October 1938, Nazi Germany occupied the Sudetenland border region, effectively crippling Czechoslovak defences. 
The First Vienna Award assigned a strip of southern Slovakia and Carpathian Ruthenia to Hungary. Poland occupied Zaolzie, an area whose population was majority Polish, in October 1938. On 14 March 1939, the remainder ("rump") of Czechoslovakia was dismembered by the proclamation of the Slovak State; the next day the rest of Carpathian Ruthenia was occupied and annexed by Hungary, while the following day the German Protectorate of Bohemia and Moravia was proclaimed. The eventual goal of the German state under Nazi leadership was to eradicate Czech nationality through assimilation, deportation, and extermination of the Czech intelligentsia; the intellectual elites and middle class made up a considerable number of the 200,000 people who passed through concentration camps and the 250,000 who died during German occupation. Under Generalplan Ost, it was assumed that around 50% of Czechs would be fit for Germanization. The Czech intellectual elites were to be removed not only from Czech territories but from Europe completely. The authors of Generalplan Ost believed it would be best if they emigrated overseas, as even in Siberia they were considered a threat to German rule. Just like Jews, Poles, Serbs, and several other nations, Czechs were considered to be Untermenschen by the Nazi state. In 1940, in a secret Nazi plan for the Germanization of the Protectorate of Bohemia and Moravia, it was declared that those considered to be of racially Mongoloid origin and the Czech intelligentsia were not to be Germanized. The deportation of Jews to concentration camps was organized under the direction of Reinhard Heydrich, and the fortress town of Terezín was made into a ghetto way station for Jewish families. On 4 June 1942 Heydrich died after being wounded by an assassin in Operation Anthropoid. Heydrich's successor, Colonel General Kurt Daluege, ordered mass arrests and executions and the destruction of the villages of Lidice and Ležáky. In 1943 the German war effort was accelerated. Under the authority of Karl Hermann Frank, German minister of state for Bohemia and Moravia, some 350,000 Czech laborers were dispatched to the Reich. Within the protectorate, all non-war-related industry was prohibited. Most of the Czech population obeyed quiescently up until the final months preceding the end of the war, while thousands were involved in the resistance movement. For the Czechs of the Protectorate of Bohemia and Moravia, German occupation was a period of brutal oppression. Czech losses resulting from political persecution and deaths in concentration camps totaled between 36,000 and 55,000. The Jewish populations of Bohemia and Moravia (118,000 according to the 1930 census) were virtually annihilated. Many Jews emigrated after 1939; more than 70,000 were killed; 8,000 survived at Terezín. Several thousand Jews managed to live in freedom or in hiding throughout the occupation. Despite the estimated 136,000 deaths at the hands of the Nazi regime, the population in the Reichsprotektorat saw a net increase during the war years of approximately 250,000, in line with an increased birth rate. On 6 May 1945, the Third US Army under General Patton entered Pilsen from the southwest. On 9 May 1945, Soviet Red Army troops entered Prague. Communist Czechoslovakia After World War II, prewar Czechoslovakia was reestablished, with the exception of Subcarpathian Ruthenia, which was annexed by the Soviet Union and incorporated into the Ukrainian Soviet Socialist Republic. 
The Beneš decrees were promulgated concerning ethnic Germans (see Potsdam Agreement) and ethnic Hungarians. Under the decrees, citizenship was abrogated for people of German and Hungarian ethnic origin who had accepted German or Hungarian citizenship during the occupations. In 1948, this provision was cancelled for the Hungarians, but only partially for the Germans. The government then confiscated the property of the Germans and expelled about 90% of the ethnic German population, over 2 million people. Those who remained were collectively accused of supporting the Nazis after the Munich Agreement, as 97.32% of Sudeten Germans had voted for the NSDAP in the December 1938 elections. Almost every decree explicitly stated that the sanctions did not apply to antifascists. Some 250,000 Germans, many married to Czechs, some antifascists, and also those required for the post-war reconstruction of the country, remained in Czechoslovakia. The Beneš Decrees still cause controversy among nationalist groups in the Czech Republic, Germany, Austria and Hungary. Following the expulsion of the ethnic German population from Czechoslovakia, parts of the former Sudetenland, especially around Krnov and the surrounding villages of the Jesenik mountain region in northeastern Czechoslovakia, were settled in 1949 by Communist refugees from Northern Greece who had left their homeland as a result of the Greek Civil War. These Greeks made up a large proportion of the town and region's population until the late 1980s/early 1990s. Although defined as "Greeks", the Greek Communist community of Krnov and the Jeseniky region actually consisted of an ethnically diverse population, including Greek Macedonians, Macedonians, Vlachs, Pontic Greeks and Turkish speaking Urums or Caucasus Greeks. Carpathian Ruthenia (Podkarpatská Rus) was occupied by (and in June 1945 formally ceded to) the Soviet Union. In the 1946 parliamentary election, the Communist Party of Czechoslovakia was the winner in the Czech lands, and the Democratic Party won in Slovakia. In February 1948 the Communists seized power. Although they would maintain the fiction of political pluralism through the existence of the National Front, except for a short period in the late 1960s (the Prague Spring) the country had no liberal democracy. Since citizens lacked significant electoral methods of registering protest against government policies, periodically there were street protests that became violent. For example, there were riots in the town of Plzeň in 1953, reflecting economic discontent. Police and army units put down the rebellion, and hundreds were injured but no one was killed. While its economy remained more advanced than those of its neighbors in Eastern Europe, Czechoslovakia grew increasingly economically weak relative to Western Europe. The currency reform of 1953 caused dissatisfaction among Czechoslovak laborers. To equalize the wage rate, Czechoslovaks had to turn in their old money for new at a decreased value. The banks also confiscated savings and bank deposits to control the amount of money in circulation. In the 1950s, Czechoslovakia experienced high economic growth (averaging 7% per year), which allowed for a substantial increase in wages and living standards, thus promoting the stability of the regime. In 1968, when the reformer Alexander Dubček was appointed to the key post of First Secretary of the Czechoslovak Communist Party, there was a brief period of liberalization known as the Prague Spring. 
In response, after failing to persuade the Czechoslovak leaders to change course, five other members of the Warsaw Pact invaded. Soviet tanks rolled into Czechoslovakia on the night of 20–21 August 1968. Soviet Communist Party General Secretary Leonid Brezhnev viewed this intervention as vital for the preservation of the Soviet socialist system and vowed to intervene in any state that sought to replace Marxism-Leninism with capitalism. In the week after the invasion there was a spontaneous campaign of civil resistance against the occupation. This resistance involved a wide range of acts of non-cooperation and defiance. It was followed by a period in which the Czechoslovak Communist Party leadership, having been forced in Moscow to make concessions to the Soviet Union, gradually put the brakes on their earlier liberal policies. Meanwhile, one plank of the reform program had been carried out: in 1968–69, Czechoslovakia was turned into a federation of the Czech Socialist Republic and Slovak Socialist Republic. The theory was that under the federation, social and economic inequities between the Czech and Slovak halves of the state would be largely eliminated. A number of ministries, such as education, now became two formally equal bodies in the two formally equal republics. However, the centralized political control by the Czechoslovak Communist Party severely limited the effects of federalization. The 1970s saw the rise of the dissident movement in Czechoslovakia, represented among others by Václav Havel. The movement sought greater political participation and expression in the face of official disapproval, manifested in limitations on work activities, which went as far as a ban on professional employment, the refusal of higher education for the dissidents' children, police harassment and prison. During the 1980s, Czechoslovakia became one of the most tightly controlled Communist regimes in the Warsaw Pact, even resisting the loosening of controls introduced by Soviet leader Mikhail Gorbachev. After 1989 In 1989, the Velvet Revolution restored democracy. This occurred around the same time as the fall of communism in Romania, Bulgaria, Hungary, East Germany and Poland. The word "socialist" was removed from the country's full name on 29 March 1990 and replaced by "federal". Pope John Paul II visited Czechoslovakia on 21 April 1990, hailing the visit as a symbolic step toward reviving Christianity in the newly post-communist state. Czechoslovakia participated in the Gulf War with a small force of 200 troops under the command of the U.S.-led coalition. In 1992, because of growing nationalist tensions in the government, Czechoslovakia was peacefully dissolved by parliament. On 31 December 1992 it formally separated into two independent countries, the Czech Republic and the Slovak Republic. Government and politics After World War II, a political monopoly was held by the Communist Party of Czechoslovakia (KSČ). The leader of the KSČ was de facto the most powerful person in the country during this period. Gustáv Husák was elected first secretary of the KSČ in 1969 (changed to general secretary in 1971) and president of Czechoslovakia in 1975. Other parties and organizations existed but functioned in subordinate roles to the KSČ. All political parties, as well as numerous mass organizations, were grouped under the umbrella of the National Front. Human rights activists and religious activists were severely repressed. 
Constitutional development Czechoslovakia had the following constitutions during its history (1918–1992): Temporary constitution of 14 November 1918 (democratic): see History of Czechoslovakia (1918–1938) The 1920 constitution (The Constitutional Document of the Czechoslovak Republic), democratic, in force until 1948, with several amendments The Communist 1948 Ninth-of-May Constitution The Communist 1960 Constitution of the Czechoslovak Socialist Republic with major amendments in 1968 (Constitutional Law of Federation), 1971, 1975, 1978, and 1989 (at which point the leading role of the Communist Party was abolished). It was amended several more times during 1990–1992 (for example, the 1990 name change to Czecho-Slovakia and the 1991 incorporation of the human rights charter) Heads of state and government List of presidents of Czechoslovakia List of prime ministers of Czechoslovakia Foreign policy International agreements and membership In the 1930s, the nation formed a military alliance with France, which collapsed in the Munich Agreement of 1938. After World War II, Czechoslovakia was an active participant in the Council for Mutual Economic Assistance (Comecon), the Warsaw Pact, and the United Nations and its specialized agencies, and was a signatory of the Conference on Security and Cooperation in Europe. Administrative divisions 1918–1923: Different systems in former Austrian territory (Bohemia, Moravia, a small part of Silesia) compared to former Hungarian territory (Slovakia and Ruthenia): three lands (země) (also called district units (kraje)): Bohemia, Moravia, Silesia, plus 21 counties (župy) in today's Slovakia and three counties in today's Ruthenia; both lands and counties were divided into districts (okresy). 1923–1927: As above, except that the Slovak and Ruthenian counties were replaced by six (grand) counties ((veľ)župy) in Slovakia and one (grand) county in Ruthenia, and the numbers and boundaries of the okresy were changed in those two territories. 1928–1938: Four lands (Czech: země, Slovak: krajiny): Bohemia, Moravia-Silesia, Slovakia and Sub-Carpathian Ruthenia, divided into districts (okresy). Late 1938 – March 1939: As above, but Slovakia and Ruthenia gained the status of "autonomous lands". Slovakia was called Slovenský štát, with its own currency and government. 1945–1948: As in 1928–1938, except that Ruthenia became part of the Soviet Union. 1949–1960: 19 regions (kraje) divided into 270 okresy. 1960–1992: 10 kraje, Prague, and (from 1970) Bratislava (capital of Slovakia); these were divided into 109–114 okresy; the kraje were abolished temporarily in Slovakia in 1969–1970 and for many purposes from 1991 in Czechoslovakia; in addition, the Czech Socialist Republic and the Slovak Socialist Republic were established in 1969 (without the word Socialist from 1990). Population and ethnic groups Economy Before World War II, the economy ranked about fourth among the industrial countries of Europe. The state was based on a strong economy, manufacturing cars (Škoda, Tatra), trams, aircraft (Aero, Avia), ships, ship engines (Škoda), cannons, shoes (Baťa), turbines and guns (Zbrojovka Brno). The Czech lands had been the industrial workshop of the Austro-Hungarian Empire. The Slovak lands relied more heavily on agriculture than the Czech lands. After World War II, the economy was centrally planned, with command links controlled by the Communist Party, similarly to the Soviet Union. The large metallurgical industry was dependent on imports of iron and non-ferrous ores. 
Industry: Extractive industry and manufacturing dominated the sector, including machinery, chemicals, food processing, metallurgy, and textiles. The sector was wasteful in its use of energy, materials, and labor and was slow to upgrade technology, but the country was a major supplier of high-quality machinery, instruments, electronics, aircraft, airplane engines and arms to other socialist countries. Agriculture: Agriculture was a minor sector, but collectivized farms of large acreage and a relatively efficient mode of production enabled the country to be relatively self-sufficient in the food supply. The country depended on imports of grains (mainly for livestock feed) in years of adverse weather. Meat production was constrained by a shortage of feed, but the country still recorded high per capita consumption of meat. Foreign trade: Exports were estimated at US$17.8 billion in 1985, consisting of machinery (55%), fuel and materials (14%), and manufactured consumer goods (16%). Imports stood at an estimated US$17.9 billion in 1985, including fuel and materials (41%), machinery (33%), and agricultural and forestry products (12%). In 1986, about 80% of foreign trade was with other socialist countries. Exchange rate: The official, or commercial, rate was 5.4 crowns (Kčs) per US$1 in 1987; the tourist, or non-commercial, rate was Kčs 10.5 per US$1. Neither rate reflected purchasing power. The exchange rate on the black market was around Kčs 30 per US$1, which became the official rate once the currency became convertible in the early 1990s. Fiscal year: Calendar year. Fiscal policy: The state was the exclusive owner of the means of production in most cases. Revenue from state enterprises was the primary source of income, followed by the turnover tax. The government spent heavily on social programs, subsidies, and investment. The budget was usually balanced or left a small surplus. Resource base After World War II, the country was short of energy, relying on imported crude oil and natural gas from the Soviet Union, domestic brown coal, and nuclear and hydroelectric energy. Energy constraints were a major factor in the 1980s. Transport and communications In the years immediately after the foundation of Czechoslovakia in 1918, there was a lack of essential infrastructure in many areas – paved roads, railways, bridges, etc. Massive improvement in the following years enabled Czechoslovakia to develop its industry. Prague's civil airport in Ruzyně became one of the most modern terminals in the world when it was finished in 1937. Tomáš Baťa, a Czech entrepreneur and visionary, outlined his ideas in the publication "Budujme stát pro 40 milionů lidí", where he described the future motorway system. Construction of the first motorways in Czechoslovakia began in 1939 but was halted after the German occupation during World War II. Society Education Education was free at all levels and compulsory from ages 6 to 15. The vast majority of the population was literate. A highly developed system of apprenticeship training and vocational schools supplemented general secondary schools and institutions of higher education. Religion In 1991, 46% of the population were Roman Catholics, 5.3% were Evangelical Lutheran, and 30% were atheist, with other religions making up 17% of the country, but there were huge differences in religious practices between the two constituent republics; see Czech Republic and Slovakia. Health, social welfare and housing After World War II, free health care was available to all citizens. 
National health planning emphasized preventive medicine; factory and local health care centres supplemented hospitals and other inpatient institutions. There was a substantial improvement in rural health care during the 1960s and 1970s. Mass media During the era between the World Wars, Czechoslovak democracy and liberalism facilitated conditions for free publication. The most significant daily newspapers in these times were Lidové noviny, Národní listy, Český deník and Československá Republika. During Communist rule, the mass media in Czechoslovakia were controlled by the Communist Party. Private ownership of any publication or agency of the mass media was generally forbidden, although churches and other organizations published small periodicals and newspapers. Even with this information monopoly in the hands of organizations under KSČ control, all publications were reviewed by the government's Office for Press and Information. Sports The Czechoslovakia national football team was a consistent performer on the international scene, with eight appearances in the FIFA World Cup Finals, finishing in second place in 1934 and 1962. The team also won the European Football Championship in 1976, came in third in 1980 and won the Olympic gold in 1980. Well-known football players such as Pavel Nedvěd, Antonín Panenka, Milan Baroš, Tomáš Rosický, Vladimír Šmicer or Petr Čech were all born in Czechoslovakia. The International Olympic Committee code for Czechoslovakia is TCH, which is still used in historical listings of results. The Czechoslovak national ice hockey team won many medals from the world championships and Olympic Games. Peter Šťastný, Jaromír Jágr, Dominik Hašek, Peter Bondra, Petr Klíma, Marián Gáborík, Marián Hossa, Miroslav Šatan and Pavol Demitra all come from Czechoslovakia. Emil Zátopek, winner of four Olympic gold medals in athletics, is considered one of the top athletes in Czechoslovak history. Věra Čáslavská was an Olympic gold medallist in gymnastics, winning seven gold medals and four silver medals. She represented Czechoslovakia in three consecutive Olympics. Several accomplished professional tennis players including Jaroslav Drobný, Ivan Lendl, Jan Kodeš, Miloslav Mečíř, Hana Mandlíková, Martina Hingis, Martina Navratilova, Jana Novotna, Petra Kvitová and Daniela Hantuchová were born in Czechoslovakia. Culture Czech RepublicSlovakia List of CzechsList of Slovaks MDŽ (International Women's Day) Jazz in dissident Czechoslovakia Postage stamps Postage stamps and postal history of Czechoslovakia Czechoslovakia stamp reused by Slovak Republic after 18 January 1939 by overprinting country and value See also Effects on the environment in Czechoslovakia from Soviet influence during the Cold War Former countries in Europe after 1815 List of former sovereign states Notes References Sources Further reading Heimann, Mary. Czechoslovakia: The State That Failed (2009). Hermann, A. H. A History of the Czechs (1975). Kalvoda, Josef. The Genesis of Czechoslovakia (1986). Leff, Carol Skalnick. National Conflict in Czechoslovakia: The Making and Remaking of a State, 1918–87 (1988). Mantey, Victor. A History of the Czechoslovak Republic (1973). Myant, Martin. The Czechoslovak Economy, 1948–88 (1989). Naimark, Norman, and Leonid Gibianskii, eds. The Establishment of Communist Regimes in Eastern Europe, 1944–1949 (1997) online edition Orzoff, Andrea. Battle for the Castle: The Myth of Czechoslovakia in Europe 1914–1948 (Oxford University Press, 2009); online review online Paul, David. 
Czechoslovakia: Profile of a Socialist Republic at the Crossroads of Europe (1990). Renner, Hans. A History of Czechoslovakia since 1945 (1989). Seton-Watson, R. W. A History of the Czechs and Slovaks (1943). Stone, Norman, and E. Strouhal, eds. Czechoslovakia: Crossroads and Crises, 1918–88 (1989). Wheaton, Bernard; Zdenek Kavav. "The Velvet Revolution: Czechoslovakia, 1988–1991" (1992). Williams, Kieran, "Civil Resistance in Czechoslovakia: From Soviet Invasion to "Velvet Revolution", 1968–89", in Adam Roberts and Timothy Garton Ash (eds.), Civil Resistance and Power Politics: The Experience of Non-violent Action from Gandhi to the Present (Oxford University Press, 2009). Windsor, Philip, and Adam Roberts, Czechoslovakia 1968: Reform, Repression and Resistance (1969). Wolchik, Sharon L. Czechoslovakia: Politics, Society, and Economics (1990). External links Online books and articles U.S. Library of Congress Country Studies, "Czechoslovakia" English/Czech: Orders and Medals of Czechoslovakia including Order of the White Lion Czechoslovakia by Encyclopædia Britannica Katrin Boeckh: Crumbling of Empires and Emerging States: Czechoslovakia and Yugoslavia as (Multi)national Countries, in: 1914-1918-online. International Encyclopedia of the First World War. Maps with Hungarian-language rubrics: Border changes after the creation of Czechoslovakia Interwar Czechoslovakia Czechoslovakia after Munich Agreement
5323
https://en.wikipedia.org/wiki/Computer%20science
Computer science
Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). Though more often considered an academic discipline, computer science is closely related to computer programming. Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data. The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science. History The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, because of various reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. 
In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the 2nd of the only two designs for mechanical analytical engines in history. In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics, and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine, on which commands could be typed and the results printed automatically. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights. Etymology Although first proposed in 1956, the term "computer science" appears in a 1959 article in Communications of the ACM, in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921. Louis justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline. 
His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases. In the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain." A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic. Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra. 
The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines. The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research. Philosophy Epistemology of computer science Despite the word "science" in its name, there is debate over whether or not computer science is a discipline of science, mathematics, or engineering. Allen Newell and Herbert A. Simon argued in 1975 that computer science is an empirical discipline, in which each new machine or program that is built serves as an experiment. It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science. Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering. They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist, and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena. Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities and that programs can be reasoned about deductively through mathematical formal methods. Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems. Paradigms of computer science A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence). Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems. 
Fields As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science. Theoretical computer science Theoretical Computer Science is mathematical and abstract in spirit, but it derives its motivation from the practical and everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies. Theory of computation According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems. The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation. Information and coding theory Information theory, closely related to probability and statistics, is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods. Data structures and algorithms Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency. Programming language theory and formal methods Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals. Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. 
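To make the idea of checking software against a mathematical specification slightly more concrete, the following is a minimal, illustrative sketch in Python; it is not drawn from any particular formal-methods tool, the names insertion_sort and satisfies_spec are invented for this example, and the executable assertions only test a handful of inputs, whereas genuine formal verification would prove the property for all inputs.

# A minimal sketch of specification checking with executable assertions.
# The specification says: the output must be ordered and must be a
# permutation of the input.

def insertion_sort(xs):
    """Return a sorted copy of xs (a simple insertion sort)."""
    result = []
    for x in xs:
        i = len(result)
        while i > 0 and result[i - 1] > x:
            i -= 1
        result.insert(i, x)
    return result

def satisfies_spec(inp, out):
    """Postcondition: out is ordered and is a permutation of inp."""
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

if __name__ == "__main__":
    for case in ([3, 1, 2], [], [5, 5, 1], [-2, 7, 0, 7]):
        assert satisfies_spec(case, insertion_sort(case))
    print("all sample cases satisfy the specification")

Writing the specification separately from the implementation, as above, is the point of the exercise: the same predicate could be used as a test oracle, or turned into a statement to be proved by a verification tool.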
The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types, to problems in software and hardware specification and verification. Applied computer science Computer graphics and visualization Computer graphics is the study of digital visual content and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games. Image and sound processing Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information processing algorithms independently of the type of information carrier – whether it is electrical, mechanical or biological. This field plays an important role in information theory, telecommunications and information engineering, and has applications in medical image computing and speech synthesis, among others. The lower bound on the complexity of fast Fourier transform algorithms is one of the unsolved problems in theoretical computer science. Computational science, finance and engineering Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design is SPICE, as well as software for the physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits. Social computing and human–computer interaction Social computing is an area that is concerned with the intersection of social behavior and computational systems. Human–computer interaction research develops theories, principles, and guidelines for user interface designers. Software engineering Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. 
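One everyday practice supporting these quality goals is automated unit testing. The sketch below uses Python's standard unittest module and is purely illustrative; the function under test, word_count, is a hypothetical example invented for this sketch rather than anything drawn from a real project.

# A small, illustrative sketch of automated unit testing, one practice used
# in software engineering to keep software correct as it is modified.
import unittest

def word_count(text):
    """Count how many whitespace-separated words appear in text."""
    return len(text.split())

class WordCountTests(unittest.TestCase):
    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

    def test_words_are_separated_by_any_whitespace(self):
        self.assertEqual(word_count("to be\tor not\nto be"), 6)

    def test_extra_spaces_do_not_create_words(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

if __name__ == "__main__":
    unittest.main()

Running the module executes the three tests, so that later modifications which break the expected behavior are reported immediately rather than discovered by users.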
Software engineering is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it does not just deal with the creation or manufacture of new software, but also with its internal arrangement and maintenance. Examples of its concerns include software testing, systems engineering, technical debt and software development processes. Artificial intelligence Artificial intelligence (AI) aims to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data. Computer systems Computer architecture and organization Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. Computer engineers study computational logic and the design of computer hardware, from individual processor components and microcontrollers to personal computers, supercomputers and embedded systems. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959. Concurrent, parallel and distributed computing Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi and the Parallel Random Access Machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals. Computer networks This branch of computer science aims to manage networks between computers worldwide. Computer security and cryptography Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. 
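One concrete building block in this area is the cryptographic hash function, one of the technologies enumerated next. As a small, hedged sketch using Python's standard hashlib module (the messages are made-up examples), a digest can be recomputed and compared with a stored value to detect tampering:

# A minimal sketch of a cryptographic hash function in use, via Python's
# standard hashlib module. Even a one-character change produces a completely
# different digest, and the digest does not reveal the original message.
import hashlib

def sha256_hex(message: str) -> str:
    """Return the SHA-256 digest of message as a hexadecimal string."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    original = "pay Alice 10 crowns"
    tampered = "pay Alice 90 crowns"
    print(sha256_hex(original))
    print(sha256_hex(tampered))
    # Verifying integrity: recompute the digest and compare to a stored value.
    stored_digest = sha256_hex(original)
    print(sha256_hex(original) == stored_digest)   # True: message unchanged
    print(sha256_hex(tampered) == stored_digest)   # False: message was altered

Because SHA-256 is designed so that finding two messages with the same digest is computationally infeasible, a matching digest gives strong evidence that the message has not been altered.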
Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits. Databases and data mining A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets. Discoveries The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science: Gottfried Wilhelm Leibniz's, George Boole's, Alan Turing's, Claude Shannon's, and Samuel Morse's insight: there are only two objects that a computer has to deal with in order to represent "anything". All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.). Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything". Every algorithm can be expressed in a language for a computer consisting of only five basic instructions: move left one location; move right one location; read symbol at current location; print 0 at current location; print 1 at current location. Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything". Only three rules are needed to combine any set of basic instructions into more complex ones: sequence: first do this, then do that; selection: IF such-and-such is the case, THEN do this, ELSE do that; repetition: WHILE such-and-such is the case, DO this. The three rules of Boehm's and Jacopini's insight can be further simplified with the use of goto (which means it is more elementary than structured programming). Programming paradigms Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include: Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements. Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates. Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus object-oriented computer programs are made out of objects that interact with one another. 
Service-oriented programming, a programming paradigm that uses "services" as the unit of computer work in order to design and implement integrated business applications and mission-critical software programs.

Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities; a short sketch contrasting these styles appears at the end of this article.

Research
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science the prestige of conference papers is greater than that of journal publications. One proposed explanation is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.

Education
Computer Science, known by its near synonyms Computing and Computer Studies, has been taught in UK schools since the days of batch processing, mark-sense cards, and paper tape, but usually only to a select few students. In 1981, the BBC produced a micro-computer and classroom network, and Computer Studies became common for GCE O level students (11–16-year-olds), with Computer Science offered to A level students. Its importance was recognised, and it became a compulsory part of the National Curriculum for Key Stages 3 and 4. In September 2014 it became an entitlement for all pupils over the age of 4. In the US, with 14,000 school districts deciding the curriculum, provision was fractured. According to a 2010 report by the Association for Computing Machinery (ACM) and the Computer Science Teachers Association (CSTA), only 14 out of 50 states have adopted significant education standards for high school computer science. According to a 2021 report, only 51% of high schools in the US offer computer science. Israel, New Zealand, and South Korea have included computer science in their national secondary education curricula, and several others are following.

See also
Glossary of computer science
List of computer scientists
List of computer science awards
List of pioneers in computer science
Outline of computer science

Notes

References

Further reading
Peter J. Denning. Is computer science science?, Communications of the ACM, April 2005.
Peter J. Denning, Great principles in computing curricula, Technical Symposium on Computer Science Education, 2004.

External links
DBLP Computer Science Bibliography
Association for Computing Machinery
Institute of Electrical and Electronics Engineers
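As the sketch promised above, here is a hedged illustration (an addition for this edition, not part of the original article) of the point that many languages support several paradigms: the same small task, summing the squares of a list, written in imperative, functional, and object-oriented styles in Python. The names imperative_sum_of_squares, functional_sum_of_squares, and SquareSummer are invented for the example.

```python
from functools import reduce

data = [1, 2, 3, 4]

# Imperative style: statements that change program state step by step.
def imperative_sum_of_squares(values):
    total = 0
    for v in values:
        total += v * v
    return total

# Functional style: computation as the evaluation of expressions,
# with no mutable state.
def functional_sum_of_squares(values):
    return reduce(lambda acc, v: acc + v * v, values, 0)

# Object-oriented style: data (fields) and behaviour (methods) bundled
# together in objects that interact through method calls.
class SquareSummer:
    def __init__(self, values):
        self.values = list(values)

    def total(self):
        return sum(v * v for v in self.values)

# All three styles compute the same result.
assert (imperative_sum_of_squares(data)
        == functional_sum_of_squares(data)
        == SquareSummer(data).total()
        == 30)
```

All three produce the same answer; the choice between them is largely one of style, as noted above.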
5324
https://en.wikipedia.org/wiki/Catalan
Catalan
Catalan may refer to:

Catalonia
From, or related to, Catalonia:
Catalan language, a Romance language
Catalans, an ethnic group formed by the people from, or with origins in, northern or southern Catalonia

Places
13178 Catalan, asteroid #13178, named "Catalan"
Catalán (crater), a lunar crater named for Miguel Ángel Catalán
Çatalan, İvrindi, a village in Balıkesir Province, Turkey
Çatalan, Karaisalı, a village in Adana Province, Turkey
Catalan Bay, Gibraltar
Catalan Sea, more commonly known as the Balearic Sea
Catalan Mediterranean System, the Catalan Mountains

Facilities and structures
Çatalan Bridge, Adana, Turkey
Çatalan Dam, Adana, Turkey
Catalan Batteries, Gibraltar

People
Catalan, Lord of Monaco (1415–1457), Lord of Monaco from 1454 until 1457
Alfredo Catalán (born 1968), Venezuelan politician
Alex Catalán (born 1968), Spanish filmmaker
Arnaut Catalan (1219–1253), troubadour
Diego Catalán (1928–2008), Spanish philologist
Emilio Arenales Catalán (1922–1969), Guatemalan politician
Eugène Charles Catalan (1814–1894), French and Belgian mathematician
Miguel A. Catalán (1894–1957), Spanish spectroscopist
Moses Chayyim Catalan (died 1661), Italian rabbi
Sergio Catalán (born 1991), Chilean soccer player

Mathematics
Mathematical concepts named after mathematician Eugène Catalan:
Catalan numbers, a sequence of natural numbers that occur in various counting problems
Catalan solids, a family of polyhedra
Catalan's constant, a number that occurs in estimates in combinatorics
Catalan's conjecture

Wine
Catalan (grape), another name for the wine grape Mourvèdre
Catalan wine, an alternative name used in France for wine made from the Carignan grape
Carignan, a wine grape that is also known as Catalan

Sports and games
Catalan Opening, in chess
Catalan Open, a golf tournament
Catalans Dragons, a rugby league team often known simply as Catalan
XIII Catalan, a rugby league team from Perpignan, France

Other uses
Battle of Catalán (1817), in Uruguay
Catalan Sheepdog
Catalan Company, a medieval mercenary company
Catalan vault, an architectural design element
The Catalans, a 1953 novel by Patrick O'Brian

See also
Catalonia (disambiguation)
Catalunya (disambiguation)
Catalan exonyms
Anti-Catalanism

Language and nationality disambiguation pages
Ethnonymic surnames
5326
https://en.wikipedia.org/wiki/Creationism
Creationism
Creationism is the religious belief that nature, and aspects such as the universe, Earth, life, and humans, originated with supernatural acts of divine creation. In its broadest sense, creationism includes a continuum of religious views, which vary in their acceptance or rejection of scientific explanations such as evolution that describe the origin and development of natural phenomena. The term creationism most often refers to belief in special creation; the claim that the universe and lifeforms were created as they exist today by divine action, and that the only true explanations are those which are compatible with a Christian fundamentalist literal interpretation of the creation myth found in the Bible's Genesis creation narrative. Since the 1970s, the most common form of this has been Young Earth creationism which posits special creation of the universe and lifeforms within the last 10,000 years on the basis of flood geology, and promotes pseudoscientific creation science. From the 18th century onward, Old Earth creationism accepted geological time harmonized with Genesis through gap or day-age theory, while supporting anti-evolution. Modern old-Earth creationists support progressive creationism and continue to reject evolutionary explanations. Following political controversy, creation science was reformulated as intelligent design and neo-creationism. Mainline Protestants and the Catholic Church reconcile modern science with their faith in Creation through forms of theistic evolution which hold that God purposefully created through the laws of nature, and accept evolution. Some groups call their belief evolutionary creationism. Less prominently, there are also members of the Islamic and Hindu faiths who are creationists. Use of the term "creationist" in this context dates back to Charles Darwin's unpublished 1842 sketch draft for what became On the Origin of Species, and he used the term later in letters to colleagues. In 1873, Asa Gray published an article in The Nation saying a "special creationist" who held that species "were supernaturally originated just as they are, by the very terms of his doctrine places them out of the reach of scientific explanation." Biblical basis The basis for many creationists' beliefs is a literal or quasi-literal interpretation of the Book of Genesis. The Genesis creation narratives (Genesis 1–2) describe how God brings the Universe into being in a series of creative acts over six days and places the first man and woman (Adam and Eve) in the Garden of Eden. This story is the basis of creationist cosmology and biology. The Genesis flood narrative (Genesis 6–9) tells how God destroys the world and all life through a great flood, saving representatives of each form of life by means of Noah's Ark. This forms the basis of creationist geology, better known as flood geology. Recent decades have seen attempts to de-link creationism from the Bible and recast it as science; these include creation science and intelligent design. Types To counter the common misunderstanding that the creation–evolution controversy was a simple dichotomy of views, with "creationists" set against "evolutionists", Eugenie Scott of the National Center for Science Education produced a diagram and description of a continuum of religious views as a spectrum ranging from extreme literal biblical creationism to materialist evolution, grouped under main headings. This was used in public presentations, then published in 1999 in Reports of the NCSE. 
Other versions of a taxonomy of creationists were produced, and comparisons were made between the different groupings. In 2009 Scott produced a revised continuum taking account of these issues, emphasizing that intelligent design creationism overlaps other types, and each type is a grouping of various beliefs and positions. The revised diagram is labelled to show a spectrum relating to positions on the age of the Earth, and the part played by special creation as against evolution. This was published in the book Evolution vs. Creationism: An Introduction, and the NCSE website was rewritten on the basis of the book version. The main general types are listed below.

Young Earth creationism
Young Earth creationists such as Ken Ham and Doug Phillips believe that God created the Earth within the last ten thousand years, with a literalist interpretation of the Genesis creation narrative, within the approximate time-frame of biblical genealogies. Most young Earth creationists believe that the universe is of a similar age to the Earth. A few assign a much older age to the universe than to Earth. Young Earth creationism gives the universe an age consistent with the Ussher chronology and other young Earth time frames. Other young Earth creationists believe that the Earth and the universe were created with the appearance of age, so that the world appears to be much older than it is, and that this appearance is what gives the geological findings and other methods of dating the Earth and the universe their much longer timelines. The Christian organizations Answers in Genesis (AiG), the Institute for Creation Research (ICR) and the Creation Research Society (CRS) promote young Earth creationism in the United States. Carl Baugh's Creation Evidence Museum in Texas and AiG's Creation Museum and Ark Encounter in Kentucky were opened in the United States to promote young Earth creationism. Creation Ministries International promotes young Earth views in Australia, Canada, South Africa, New Zealand, the United States, and the United Kingdom. Among Roman Catholics, the Kolbe Center for the Study of Creation promotes similar ideas.

Old Earth creationism
Old Earth creationism holds that the physical universe was created by God, but that the creation event described in the Book of Genesis is to be taken figuratively. This group generally believes that the age of the universe and the age of the Earth are as described by astronomers and geologists, but that details of modern evolutionary theory are questionable. Old Earth creationism itself comes in at least three types:

Gap creationism
Gap creationism (also known as ruin-restoration creationism, restoration creationism, or the Gap Theory) is a form of old Earth creationism that posits that the six-yom creation period, as described in the Book of Genesis, involved six literal 24-hour days, but that there was a gap of time between two distinct creations in the first and the second verses of Genesis, which the theory states explains many scientific observations, including the age of the Earth. Thus, the six days of creation (verse 3 onwards) start sometime after the Earth was "without form and void." This allows an indefinite gap of time to be inserted after the original creation of the universe, but prior to the Genesis creation narrative (when present biological species and humanity were created). Gap theorists can therefore agree with the scientific consensus regarding the age of the Earth and universe, while maintaining a literal interpretation of the biblical text.
Some gap creationists expand the basic version of creationism by proposing a "primordial creation" of biological life within the "gap" of time. This is thought to be "the world that then was" mentioned in 2 Peter 3:3–6. Discoveries of fossils and archaeological ruins older than 10,000 years are generally ascribed to this "world that then was," which may also be associated with Lucifer's rebellion. Day-age creationism Day-age creationism, a type of old Earth creationism, is a metaphorical interpretation of the creation accounts in Genesis. It holds that the six days referred to in the Genesis account of creation are not ordinary 24-hour days, but are much longer periods (from thousands to billions of years). The Genesis account is then reconciled with the age of the Earth. Proponents of the day-age theory can be found among both theistic evolutionists, who accept the scientific consensus on evolution, and progressive creationists, who reject it. The theories are said to be built on the understanding that the Hebrew word yom is also used to refer to a time period, with a beginning and an end and not necessarily that of a 24-hour day. The day-age theory attempts to reconcile the Genesis creation narrative and modern science by asserting that the creation "days" were not ordinary 24-hour days, but actually lasted for long periods of time (as day-age implies, the "days" each lasted an age). According to this view, the sequence and duration of the creation "days" may be paralleled to the scientific consensus for the age of the earth and the universe. Progressive creationism Progressive creationism is the religious belief that God created new forms of life gradually over a period of hundreds of millions of years. As a form of old Earth creationism, it accepts mainstream geological and cosmological estimates for the age of the Earth, some tenets of biology such as microevolution as well as archaeology to make its case. In this view creation occurred in rapid bursts in which all "kinds" of plants and animals appear in stages lasting millions of years. The bursts are followed by periods of stasis or equilibrium to accommodate new arrivals. These bursts represent instances of God creating new types of organisms by divine intervention. As viewed from the archaeological record, progressive creationism holds that "species do not gradually appear by the steady transformation of its ancestors; [but] appear all at once and "fully formed." The view rejects macroevolution, claiming it is biologically untenable and not supported by the fossil record, as well as rejects the concept of common descent from a last universal common ancestor. Thus the evidence for macroevolution is claimed to be false, but microevolution is accepted as a genetic parameter designed by the Creator into the fabric of genetics to allow for environmental adaptations and survival. Generally, it is viewed by proponents as a middle ground between literal creationism and evolution. Organizations such as Reasons To Believe, founded by Hugh Ross, promote this version of creationism. Progressive creationism can be held in conjunction with hermeneutic approaches to the Genesis creation narrative such as the day-age creationism or framework/metaphoric/poetic views. 
Philosophic and scientific creationism Creation science Creation science, or initially scientific creationism, is a pseudoscience that emerged in the 1960s with proponents aiming to have young Earth creationist beliefs taught in school science classes as a counter to teaching of evolution. Common features of creation science argument include: creationist cosmologies which accommodate a universe on the order of thousands of years old, criticism of radiometric dating through a technical argument about radiohalos, explanations for the fossil record as a record of the Genesis flood narrative (see flood geology), and explanations for the present diversity as a result of pre-designed genetic variability and partially due to the rapid degradation of the perfect genomes God placed in "created kinds" or "baramins" due to mutations. Neo-creationism Neo-creationism is a pseudoscientific movement which aims to restate creationism in terms more likely to be well received by the public, by policy makers, by educators and by the scientific community. It aims to re-frame the debate over the origins of life in non-religious terms and without appeals to scripture. This comes in response to the 1987 ruling by the United States Supreme Court in Edwards v. Aguillard that creationism is an inherently religious concept and that advocating it as correct or accurate in public-school curricula violates the Establishment Clause of the First Amendment. One of the principal claims of neo-creationism propounds that ostensibly objective orthodox science, with a foundation in naturalism, is actually a dogmatically atheistic religion. Its proponents argue that the scientific method excludes certain explanations of phenomena, particularly where they point towards supernatural elements, thus effectively excluding religious insight from contributing to understanding the universe. This leads to an open and often hostile opposition to what neo-creationists term "Darwinism", which they generally mean to refer to evolution, but which they may extend to include such concepts as abiogenesis, stellar evolution and the Big Bang theory. Unlike their philosophical forebears, neo-creationists largely do not believe in many of the traditional cornerstones of creationism such as a young Earth, or in a dogmatically literal interpretation of the Bible. Intelligent design Intelligent design (ID) is the pseudoscientific view that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." All of its leading proponents are associated with the Discovery Institute, a think tank whose wedge strategy aims to replace the scientific method with "a science consonant with Christian and theistic convictions" which accepts supernatural explanations. It is widely accepted in the scientific and academic communities that intelligent design is a form of creationism, and is sometimes referred to as "intelligent design creationism." ID originated as a re-branding of creation science in an attempt to avoid a series of court decisions ruling out the teaching of creationism in American public schools, and the Discovery Institute has run a series of campaigns to change school curricula. 
In Australia, where curricula are under the control of state governments rather than local school boards, there was a public outcry when the notion of ID being taught in science classes was raised by the Federal Education Minister Brendan Nelson; the minister quickly conceded that the correct forum for ID, if it were to be taught, is in religious or philosophy classes. In the US, teaching of intelligent design in public schools has been decisively ruled by a federal district court to be in violation of the Establishment Clause of the First Amendment to the United States Constitution. In Kitzmiller v. Dover, the court found that intelligent design is not science and "cannot uncouple itself from its creationist, and thus religious, antecedents," and hence cannot be taught as an alternative to evolution in public school science classrooms under the jurisdiction of that court. This sets a persuasive precedent, based on previous US Supreme Court decisions in Edwards v. Aguillard and Epperson v. Arkansas (1968), and by the application of the Lemon test, that creates a legal hurdle to teaching intelligent design in public school districts in other federal court jurisdictions. Geocentrism In astronomy, the geocentric model (also known as geocentrism, or the Ptolemaic system), is a description of the cosmos where Earth is at the orbital center of all celestial bodies. This model served as the predominant cosmological system in many ancient civilizations such as ancient Greece. As such, they assumed that the Sun, Moon, stars, and naked eye planets circled Earth, including the noteworthy systems of Aristotle (see Aristotelian physics) and Ptolemy. Articles arguing that geocentrism was the biblical perspective appeared in some early creation science newsletters associated with the Creation Research Society pointing to some passages in the Bible, which, when taken literally, indicate that the daily apparent motions of the Sun and the Moon are due to their actual motions around the Earth rather than due to the rotation of the Earth about its axis. For example, where the Sun and Moon are said to stop in the sky, and where the world is described as immobile. Contemporary advocates for such religious beliefs include Robert Sungenis, co-author of the self-published Galileo Was Wrong: The Church Was Right (2006). These people subscribe to the view that a plain reading of the Bible contains an accurate account of the manner in which the universe was created and requires a geocentric worldview. Most contemporary creationist organizations reject such perspectives. Omphalos hypothesis The Omphalos hypothesis is one attempt to reconcile the scientific evidence that the universe is billions of years old with a literal interpretation of the Genesis creation narrative, which implies that the Earth is only a few thousand years old. It is based on the religious belief that the universe was created by a divine being, within the past six to ten thousand years (in keeping with flood geology), and that the presence of objective, verifiable evidence that the universe is older than approximately ten millennia is due to the creator introducing false evidence that makes the universe appear significantly older. 
The idea was named after the title of an 1857 book, Omphalos by Philip Henry Gosse, in which Gosse argued that in order for the world to be functional God must have created the Earth with mountains and canyons, trees with growth rings, Adam and Eve with fully grown hair, fingernails, and navels (ὀμφαλός omphalos is Greek for "navel"), and all living creatures with fully formed evolutionary features, etc..., and that, therefore, no empirical evidence about the age of the Earth or universe can be taken as reliable. Various supporters of Young Earth creationism have given different explanations for their belief that the universe is filled with false evidence of the universe's age, including a belief that some things needed to be created at a certain age for the ecosystems to function, or their belief that the creator was deliberately planting deceptive evidence. The idea has seen some revival in the 20th century by some modern creationists, who have extended the argument to address the "starlight problem". The idea has been criticised as Last Thursdayism, and on the grounds that it requires a deliberately deceptive creator. Theistic evolution Theistic evolution, or evolutionary creation, is a belief that "the personal God of the Bible created the universe and life through evolutionary processes." According to the American Scientific Affiliation: Through the 19th century the term creationism most commonly referred to direct creation of individual souls, in contrast to traducianism. Following the publication of Vestiges of the Natural History of Creation, there was interest in ideas of Creation by divine law. In particular, the liberal theologian Baden Powell argued that this illustrated the Creator's power better than the idea of miraculous creation, which he thought ridiculous. When On the Origin of Species was published, the cleric Charles Kingsley wrote of evolution as "just as noble a conception of Deity." Darwin's view at the time was of God creating life through the laws of nature, and the book makes several references to "creation," though he later regretted using the term rather than calling it an unknown process. In America, Asa Gray argued that evolution is the secondary effect, or modus operandi, of the first cause, design, and published a pamphlet defending the book in theistic terms, Natural Selection not inconsistent with Natural Theology. Theistic evolution, also called, evolutionary creation, became a popular compromise, and St. George Jackson Mivart was among those accepting evolution but attacking Darwin's naturalistic mechanism. Eventually it was realised that supernatural intervention could not be a scientific explanation, and naturalistic mechanisms such as neo-Lamarckism were favoured as being more compatible with purpose than natural selection. Some theists took the general view that, instead of faith being in opposition to biological evolution, some or all classical religious teachings about Christian God and creation are compatible with some or all of modern scientific theory, including specifically evolution; it is also known as "evolutionary creation." In Evolution versus Creationism, Eugenie Scott and Niles Eldredge state that it is in fact a type of evolution. It generally views evolution as a tool used by God, who is both the first cause and immanent sustainer/upholder of the universe; it is therefore well accepted by people of strong theistic (as opposed to deistic) convictions. 
Theistic evolution can synthesize with the day-age creationist interpretation of the Genesis creation narrative; however most adherents consider that the first chapters of the Book of Genesis should not be interpreted as a "literal" description, but rather as a literary framework or allegory. From a theistic viewpoint, the underlying laws of nature were designed by God for a purpose, and are so self-sufficient that the complexity of the entire physical universe evolved from fundamental particles in processes such as stellar evolution, life forms developed in biological evolution, and in the same way the origin of life by natural causes has resulted from these laws. In one form or another, theistic evolution is the view of creation taught at the majority of mainline Protestant seminaries. For Roman Catholics, human evolution is not a matter of religious teaching, and must stand or fall on its own scientific merits. Evolution and the Roman Catholic Church are not in conflict. The Catechism of the Catholic Church comments positively on the theory of evolution, which is neither precluded nor required by the sources of faith, stating that scientific studies "have splendidly enriched our knowledge of the age and dimensions of the cosmos, the development of life-forms and the appearance of man." Roman Catholic schools teach evolution without controversy on the basis that scientific knowledge does not extend beyond the physical, and scientific truth and religious truth cannot be in conflict. Theistic evolution can be described as "creationism" in holding that divine intervention brought about the origin of life or that divine laws govern formation of species, though many creationists (in the strict sense) would deny that the position is creationism at all. In the creation–evolution controversy, its proponents generally take the "evolutionist" side. This sentiment was expressed by Fr. George Coyne, (the Vatican's chief astronomer between 1978 and 2006):...in America, creationism has come to mean some fundamentalistic, literal, scientific interpretation of Genesis. Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in a belief that everything depends upon God, or better, all is a gift from God. While supporting the methodological naturalism inherent in modern science, the proponents of theistic evolution reject the implication taken by some atheists that this gives credence to ontological materialism. In fact, many modern philosophers of science, including atheists, refer to the long-standing convention in the scientific method that observable events in nature should be explained by natural causes, with the distinction that it does not assume the actual existence or non-existence of the supernatural. Religious views There are also non-Christian forms of creationism, notably Islamic creationism and Hindu creationism. Bahá'í Faith In the creation myth taught by Bahá'u'lláh, the Bahá'í Faith founder, the universe has "neither beginning nor ending," and that the component elements of the material world have always existed and will always exist. With regard to evolution and the origin of human beings, 'Abdu'l-Bahá gave extensive comments on the subject when he addressed western audiences in the beginning of the 20th century. Transcripts of these comments can be found in Some Answered Questions, Paris Talks and The Promulgation of Universal Peace. 
'Abdu'l-Bahá described the human species as having evolved from a primitive form to modern man, but that the capacity to form human intelligence was always in existence. Buddhism Buddhism denies a creator deity and posits that mundane deities such as Mahabrahma are sometimes misperceived to be a creator. While Buddhism includes belief in divine beings called devas, it holds that they are mortal, limited in their power, and that none of them are creators of the universe. In the Saṃyutta Nikāya, the Buddha also states that the cycle of rebirths stretches back hundreds of thousands of eons, without discernible beginning. Major Buddhist Indian philosophers such as Nagarjuna, Vasubandhu, Dharmakirti and Buddhaghosa, consistently critiqued Creator God views put forth by Hindu thinkers. Christianity , most Christians around the world accepted evolution as the most likely explanation for the origins of species, and did not take a literal view of the Genesis creation narrative. The United States is an exception where belief in religious fundamentalism is much more likely to affect attitudes towards evolution than it is for believers elsewhere. Political partisanship affecting religious belief may be a factor because political partisanship in the US is highly correlated with fundamentalist thinking, unlike in Europe. Most contemporary Christian leaders and scholars from mainstream churches, such as Anglicans and Lutherans, consider that there is no conflict between the spiritual meaning of creation and the science of evolution. According to the former archbishop of Canterbury, Rowan Williams, "for most of the history of Christianity, and I think this is fair enough, most of the history of the Christianity there's been an awareness that a belief that everything depends on the creative act of God, is quite compatible with a degree of uncertainty or latitude about how precisely that unfolds in creative time." Leaders of the Anglican and Roman Catholic churches have made statements in favor of evolutionary theory, as have scholars such as the physicist John Polkinghorne, who argues that evolution is one of the principles through which God created living beings. Earlier supporters of evolutionary theory include Frederick Temple, Asa Gray and Charles Kingsley who were enthusiastic supporters of Darwin's theories upon their publication, and the French Jesuit priest and geologist Pierre Teilhard de Chardin saw evolution as confirmation of his Christian beliefs, despite condemnation from Church authorities for his more speculative theories. Another example is that of Liberal theology, not providing any creation models, but instead focusing on the symbolism in beliefs of the time of authoring Genesis and the cultural environment. Many Christians and Jews had been considering the idea of the creation history as an allegory (instead of historical) long before the development of Darwin's theory of evolution. For example, Philo, whose works were taken up by early Church writers, wrote that it would be a mistake to think that creation happened in six days, or in any set amount of time. Augustine of the late fourth century who was also a former neoplatonist argued that everything in the universe was created by God at the same moment in time (and not in six days as a literal reading of the Book of Genesis would seem to require); It appears that both Philo and Augustine felt uncomfortable with the idea of a seven-day creation because it detracted from the notion of God's omnipotence. 
In 1950, Pope Pius XII stated limited support for the idea in his encyclical. In 1996, Pope John Paul II stated that "new knowledge has led to the recognition of the theory of evolution as more than a hypothesis," but, referring to previous papal writings, he concluded that "if the human body takes its origin from pre-existent living matter, the spiritual soul is immediately created by God."

In the US, Evangelical Christians have continued to believe in a literal Genesis. Members of evangelical Protestant (70%), Mormon (76%) and Jehovah's Witnesses (90%) denominations were the most likely to reject the evolutionary interpretation of the origins of life. Jehovah's Witnesses adhere to a combination of gap creationism and day-age creationism, asserting that scientific evidence about the age of the universe is compatible with the Bible, but that the 'days' after Genesis 1:1 were each thousands of years in length. The historic Christian literal interpretation of creation requires the harmonization of the two creation stories, Genesis 1:1–2:3 and Genesis 2:4–25, for there to be a consistent interpretation. They sometimes seek to ensure that their belief is taught in science classes, mainly in American schools. Opponents reject the claim that the literalistic biblical view meets the criteria required to be considered scientific. Many religious groups teach that God created the Cosmos. From the days of the early Christian Church Fathers there were allegorical interpretations of the Book of Genesis as well as literal aspects.

Christian Science, a system of thought and practice derived from the writings of Mary Baker Eddy, interprets the Book of Genesis figuratively rather than literally. It holds that the material world is an illusion, and consequently not created by God: the only real creation is the spiritual realm, of which the material world is a distorted version. Christian Scientists regard the story of the creation in the Book of Genesis as having symbolic rather than literal meaning. According to Christian Science, both creationism and evolution are false from an absolute or "spiritual" point of view, as they both proceed from a (false) belief in the reality of a material universe. However, Christian Scientists do not oppose the teaching of evolution in schools, nor do they demand that alternative accounts be taught: they believe that both material science and literalist theology are concerned with the illusory, mortal and material, rather than the real, immortal and spiritual. With regard to material theories of creation, Eddy showed a preference for Darwin's theory of evolution over others.

Hinduism
Hindu creationists claim that species of plants and animals are material forms adopted by pure consciousness which live an endless cycle of births and rebirths. Ronald Numbers says that: "Hindu Creationists have insisted on the antiquity of humans, who they believe appeared fully formed as long, perhaps, as trillions of years ago." Hindu creationism is a form of old Earth creationism; according to Hindu creationists, the universe may be even older than billions of years. These views are based on the Vedas, the creation myths of which depict an extreme antiquity of the universe and the history of the Earth. In Hindu cosmology, time cyclically repeats general events of creation and destruction, with many "first men", each known as a Manu, the progenitor of mankind.
Each Manu successively reigns over a period of 306.72 million years; each such period ends with the destruction of mankind and is followed by a period of non-activity before the next begins. According to calculations on Hindu units of time, 120.53 million years have elapsed in the current period (that of the current mankind). The universe is cyclically created at the start, and destroyed at the end, of a day of Brahma, lasting for 4.32 billion years, which is followed by a period of dissolution of equal length. 1.97 billion years have elapsed in the current cycle (the current universe). The universal elements or building blocks (unmanifest matter) exist for a period lasting 311.04 trillion years, which is followed by a period of great dissolution of equal length. 155.52 trillion years have elapsed in the current such period.

Islam
Islamic creationism is the belief that the universe (including humanity) was directly created by God as explained in the Quran. It usually views the Book of Genesis as a corrupted version of God's message. The creation myths in the Quran are vaguer and allow for a wider range of interpretations similar to those in other Abrahamic religions. Islam also has its own school of theistic evolutionism, which holds that mainstream scientific analysis of the origin of the universe is supported by the Quran. Some Muslims believe in evolutionary creation, especially among liberal movements within Islam. Writing for The Boston Globe, Drake Bennett noted: "Without a Book of Genesis to account for[...] Muslim creationists have little interest in proving that the age of the Earth is measured in the thousands rather than the billions of years, nor do they show much interest in the problem of the dinosaurs. And the idea that animals might evolve into other animals also tends to be less controversial, in part because there are passages of the Koran that seem to support it. But the issue of whether human beings are the product of evolution is just as fraught among Muslims." Khalid Anees, president of the Islamic Society of Britain, states that Muslims do not agree that one species can develop from another. Since the 1980s, Turkey has been a site of strong advocacy for creationism, supported by American adherents. There are several verses in the Qur'an which some modern writers have interpreted as being compatible with the expansion of the universe and the Big Bang and Big Crunch theories.

Ahmadiyya
The Ahmadiyya movement actively promotes evolutionary theory. Ahmadis interpret scripture from the Qur'an to support the concept of macroevolution and give precedence to scientific theories. Furthermore, unlike orthodox Muslims, Ahmadis believe that humans have gradually evolved from different species. Ahmadis regard Adam as being the first Prophet of God, as opposed to him being the first man on Earth. Rather than wholly adopting the theory of natural selection, Ahmadis promote the idea of a "guided evolution," viewing each stage of the evolutionary process as having been selectively woven by God. Mirza Tahir Ahmad, Fourth Caliph of the Ahmadiyya Muslim Community, stated in his magnum opus Revelation, Rationality, Knowledge & Truth (1998) that evolution did occur, but only through God being the One who brings it about; it does not occur by itself, according to the Ahmadiyya Muslim Community.

Judaism
For Orthodox Jews who seek to reconcile discrepancies between science and the creation myths in the Bible, the notion that science and the Bible should even be reconciled through traditional scientific means is questioned.
To these groups, science is as true as the Torah, and if there seems to be a problem, epistemological limits are to blame for apparently irreconcilable points. They point to discrepancies between what is expected and what actually is to demonstrate that things are not always as they appear. They note that even the Hebrew root word for 'world' means 'hidden'. Just as they know from the Torah that God created man and trees and the light on its way from the stars in their observed state, so too can they know that the world was created over the six days of Creation in a state that reflects progression to its currently-observed state, with the understanding that physical ways to verify this may eventually be identified. This knowledge has been advanced by Rabbi Dovid Gottlieb, former philosophy professor at Johns Hopkins University. Relatively old Kabbalistic sources, dating from well before the scientifically apparent age of the universe was first determined, are also in close concord with modern scientific estimates of the age of the universe, according to Rabbi Aryeh Kaplan; this view is based on Sefer Temunah, an early kabbalistic work attributed to the first-century Tanna Nehunya ben HaKanah. Many kabbalists accepted the teachings of the Sefer HaTemunah, including the medieval Jewish scholar Nahmanides, his close student Isaac ben Samuel of Acre, and David ben Solomon ibn Abi Zimra. Other parallels are derived, among other sources, from Nahmanides, who expounds that there was a Neanderthal-like species with which Adam mated (he wrote this long before Neanderthals had been discovered scientifically).

Reform Judaism does not take the Torah as a literal text, but rather as a symbolic or open-ended work. Some contemporary writers, such as Rabbi Gedalyah Nadel, have sought to reconcile the discrepancy between the account in the Torah and scientific findings by arguing that each day referred to in the Bible was not 24 hours, but billions of years long. Others claim that the Earth was created a few thousand years ago, but was deliberately made to look as if it was five billion years old, e.g. by being created with ready-made fossils. The best-known exponent of this approach was Rabbi Menachem Mendel Schneerson. Others state that although the world was physically created in six 24-hour days, the Torah accounts can be interpreted to mean that there was a period of billions of years before the six days of creation.

Prevalence
Most vocal literalist creationists are from the US, and strict creationist views are much less common in other developed countries. According to a study published in Science, a survey of the US, Turkey, Japan and Europe showed that public acceptance of evolution is most prevalent in Iceland, Denmark and Sweden, at 80% of the population. There seems to be no significant correlation between believing in evolution and understanding evolutionary science.

Australia
A 2009 Nielsen poll showed that 23% of Australians believe "the biblical account of human origins," 42% believe in a "wholly scientific" explanation for the origins of life, while 32% believe in an evolutionary process "guided by God". A 2013 survey conducted by Auspoll and the Australian Academy of Science found that 80% of Australians believe in evolution (70% believe it is currently occurring, 10% believe in evolution but do not think it is currently occurring), 12% were not sure and 9% stated they do not believe in evolution.
Brazil
A 2011 Ipsos survey found that 47% of responders in Brazil identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes". In 2004, IBOPE conducted a poll in Brazil that asked questions about creationism and the teaching of creationism in schools. When asked if creationism should be taught in schools, 89% of people said that it should; when asked if the teaching of creationism should replace the teaching of evolution in schools, 75% said that it should.

Canada
A 2012 survey by Angus Reid Public Opinion revealed that 61 percent of Canadians believe in evolution. The poll asked "Where did human beings come from – did we start as singular cells millions of years ago and evolve into our present form, or did God create us in his image 10,000 years ago?" In 2019, a Research Co. poll asked people in Canada if creationism "should be part of the school curriculum in their province". 38% of Canadians said that creationism should be part of the school curriculum, 39% said that it should not, and 23% were undecided. In 2023, a Research Co. poll found that 21% of Canadians "believe God created human beings in their present form within the last 10,000 years". The poll also found that "More than two-in-five Canadians (43%) think creationism should be part of the school curriculum in their province."

Europe
In Europe, literalist creationism is more widely rejected, though regular opinion polls are not available. Most people accept that evolution is the most widely accepted scientific theory as taught in most schools. In countries with a Roman Catholic majority, papal acceptance of evolutionary creationism as worthy of study has essentially ended debate on the matter for many people. In the UK, a 2006 poll on the "origin and development of life" asked participants to choose between three different perspectives on the origin of life: 22% chose creationism, 17% opted for intelligent design, 48% selected evolutionary theory, and the rest did not know. A subsequent 2010 YouGov poll on the correct explanation for the origin of humans found that 9% opted for creationism, 12% intelligent design, 65% evolutionary theory, and 13% did not know. The former Archbishop of Canterbury Rowan Williams, head of the worldwide Anglican Communion, views the idea of teaching creationism in schools as a mistake. In 2009, an Ipsos Mori survey in the United Kingdom found that 54% of Britons agreed with the view: "Evolutionary theories should be taught in science lessons in schools together with other possible perspectives, such as intelligent design and creationism." In Italy, Education Minister Letizia Moratti wanted to remove evolution from the secondary school level; after one week of massive protests, she reversed her opinion. There continue to be scattered and possibly mounting efforts on the part of religious groups throughout Europe to introduce creationism into public education. In response, the Parliamentary Assembly of the Council of Europe released a draft report, The dangers of creationism in education, on June 8, 2007, reinforced by a further proposal to ban it in schools dated October 4, 2007.
Serbia suspended the teaching of evolution for one week in September 2004, under education minister Ljiljana Čolić, only allowing schools to reintroduce evolution into the curriculum if they also taught creationism. "After a deluge of protest from scientists, teachers and opposition parties" says the BBC report, Čolić's deputy made the statement, "I have come here to confirm Charles Darwin is still alive" and announced that the decision was reversed. Čolić resigned after the government said that she had caused "problems that had started to reflect on the work of the entire government." Poland saw a major controversy over creationism in 2006, when the Deputy Education Minister, Mirosław Orzechowski, denounced evolution as "one of many lies" taught in Polish schools. His superior, Minister of Education Roman Giertych, has stated that the theory of evolution would continue to be taught in Polish schools, "as long as most scientists in our country say that it is the right theory." Giertych's father, Member of the European Parliament Maciej Giertych, has opposed the teaching of evolution and has claimed that dinosaurs and humans co-existed. A June 2015 - July 2016 Pew poll of Eastern European countries found that 56% of people from Armenia say that humans and other living things have "Existed in present state since the beginning of time". Armenia is followed by 52% from Bosnia, 42% from Moldova, 37% from Lithuania, 34% from Georgia and Ukraine, 33% from Croatia and Romania, 31% from Bulgaria, 29% from Greece and Serbia, 26% from Russia, 25% from Latvia, 23% from Belarus and Poland, 21% from Estonia and Hungary, and 16% from the Czech Republic. South Africa A 2011 Ipsos survey found that 56% of responders in South Africa identified themselves as "creationists and believe that human beings were in fact created by a spiritual force such as the God they believe in and do not believe that the origin of man came from evolving from other species such as apes". South Korea In 2009, an EBS survey in South Korea found that 63% of people believed that creation and evolution should both be taught in schools simultaneously. United States A 2017 poll by Pew Research found that 62% of Americans believe humans have evolved over time and 34% of Americans believe humans and other living things have existed in their present form since the beginning of time. A 2019 Gallup creationism survey found that 40% of adults in the United States inclined to the view that "God created humans in their present form at one time within the last 10,000 years" when asked for their views on the origin and development of human beings. According to a 2014 Gallup poll, about 42% of Americans believe that "God created human beings pretty much in their present form at one time within the last 10,000 years or so." Another 31% believe that "human beings have developed over millions of years from less advanced forms of life, but God guided this process,"and 19% believe that "human beings have developed over millions of years from less advanced forms of life, but God had no part in this process." Belief in creationism is inversely correlated to education; of those with postgraduate degrees, 74% accept evolution. In 1987, Newsweek reported: "By one count there are some 700 scientists with respectable academic credentials (out of a total of 480,000 U.S. 
earth and life scientists) who give credence to creation-science, the general theory that complex life forms did not evolve but appeared 'abruptly.'" A 2000 poll for People for the American Way found 70% of the US public felt that evolution was compatible with a belief in God. According to a study published in Science, between 1985 and 2005 the number of adult North Americans who accept evolution declined from 45% to 40%, the number of adults who reject evolution declined from 48% to 39% and the number of people who were unsure increased from 7% to 21%. Besides the US the study also compared data from 32 European countries, Turkey, and Japan. The only country where acceptance of evolution was lower than in the US was Turkey (25%). According to a 2011 Fox News poll, 45% of Americans believe in creationism, down from 50% in a similar poll in 1999. 21% believe in 'the theory of evolution as outlined by Darwin and other scientists' (up from 15% in 1999), and 27% answered that both are true (up from 26% in 1999). In September 2012, educator and television personality Bill Nye spoke with the Associated Press and aired his fears about acceptance of creationism, believing that teaching children that creationism is the only true answer without letting them understand the way science works will prevent any future innovation in the world of science. In February 2014, Nye defended evolution in the classroom in a debate with creationist Ken Ham on the topic of whether creation is a viable model of origins in today's modern, scientific era. Education controversies In the US, creationism has become centered in the political controversy over creation and evolution in public education, and whether teaching creationism in science classes conflicts with the separation of church and state. Currently, the controversy comes in the form of whether advocates of the intelligent design movement who wish to "Teach the Controversy" in science classes have conflated science with religion. People for the American Way polled 1500 North Americans about the teaching of evolution and creationism in November and December 1999. They found that most North Americans were not familiar with creationism, and most North Americans had heard of evolution, but many did not fully understand the basics of the theory. The main findings were: In such political contexts, creationists argue that their particular religiously based origin belief is superior to those of other belief systems, in particular those made through secular or scientific rationale. Political creationists are opposed by many individuals and organizations who have made detailed critiques and given testimony in various court cases that the alternatives to scientific reasoning offered by creationists are opposed by the consensus of the scientific community. Criticism Christian criticism Most Christians disagree with the teaching of creationism as an alternative to evolution in schools. Several religious organizations, among them the Catholic Church, hold that their faith does not conflict with the scientific consensus regarding evolution. The Clergy Letter Project, which has collected more than 13,000 signatures, is an "endeavor designed to demonstrate that religion and science can be compatible." In his 2002 article "Intelligent Design as a Theological Problem," George Murphy argues against the view that life on Earth, in all its forms, is direct evidence of God's act of creation (Murphy quotes Phillip E. 
Johnson's claim that he is speaking "of a God who acted openly and left his fingerprints on all the evidence."). Murphy argues that this view of God is incompatible with the Christian understanding of God as "the one revealed in the cross and resurrection of Christ." The basis of this theology is Isaiah 45:15, "Verily thou art a God that hidest thyself, O God of Israel, the Saviour." Murphy observes that the execution of a Jewish carpenter by Roman authorities is in and of itself an ordinary event and did not require divine action. On the contrary, for the crucifixion to occur, God had to limit or "empty" himself. It was for this reason that Paul the Apostle wrote, in Philippians 2:5–8: "Let this mind be in you, which was also in Christ Jesus: Who, being in the form of God, thought it not robbery to be equal with God: But made himself of no reputation, and took upon him the form of a servant, and was made in the likeness of men: And being found in fashion as a man, he humbled himself, and became obedient unto death, even the death of the cross."

Murphy concludes that, "Just as the Son of God limited himself by taking human form and dying on a cross, God limits divine action in the world to be in accord with rational laws which God has chosen. This enables us to understand the world on its own terms, but it also means that natural processes hide God from scientific observation." For Murphy, a theology of the cross requires that Christians accept a methodological naturalism, meaning that one cannot invoke God to explain natural phenomena, while recognizing that such acceptance does not require one to accept a metaphysical naturalism, which proposes that nature is all that there is.

The Jesuit priest George Coyne has stated that it is "unfortunate that, especially here in America, creationism has come to mean...some literal interpretation of Genesis." He argues that "...Judaic-Christian faith is radically creationist, but in a totally different sense. It is rooted in belief that everything depends on God, or better, all is a gift from God."

Teaching of creationism
Other Christians have expressed qualms about teaching creationism. In March 2006, then Archbishop of Canterbury Rowan Williams, the leader of the world's Anglicans, stated his discomfort about teaching creationism, saying that creationism was "a kind of category mistake, as if the Bible were a theory like other theories." He also said: "My worry is creationism can end up reducing the doctrine of creation rather than enhancing it." The views of the Episcopal Church, a major American-based branch of the Anglican Communion, on teaching creationism resemble those of Williams.

The National Science Teachers Association is opposed to teaching creationism as a science, as is the Association for Science Teacher Education, the National Association of Biology Teachers, the American Anthropological Association, the American Geosciences Institute, the Geological Society of America, the American Geophysical Union, and numerous other professional teaching and scientific societies.
In April 2010, the American Academy of Religion issued Guidelines for Teaching About Religion in K‐12 Public Schools in the United States, which included guidance that creation science or intelligent design should not be taught in science classes, as "Creation science and intelligent design represent worldviews that fall outside of the realm of science that is defined as (and limited to) a method of inquiry based on gathering observable and measurable evidence subject to specific principles of reasoning." However, they, as well as other "worldviews that focus on speculation regarding the origins of life represent another important and relevant form of human inquiry that is appropriately studied in literature or social sciences courses. Such study, however, must include a diversity of worldviews representing a variety of religious and philosophical perspectives and must avoid privileging one view as more legitimate than others." Randy Moore and Sehoya Cotner, from the biology program at the University of Minnesota, reflect on the relevance of teaching creationism in the article "The Creationist Down the Hall: Does It Matter When Teachers Teach Creationism?", in which they write: "Despite decades of science education reform, numerous legal decisions declaring the teaching of creationism in public-school science classes to be unconstitutional, overwhelming evidence supporting evolution, and the many denunciations of creationism as nonscientific by professional scientific societies, creationism remains popular throughout the United States." Scientific criticism Science is a system of knowledge based on observation, empirical evidence, and the development of theories that yield testable explanations and predictions of natural phenomena. By contrast, creationism is often based on literal interpretations of the narratives of particular religious texts. Creationist beliefs involve purported forces that lie outside of nature, such as supernatural intervention, and often do not allow predictions at all. Therefore, these can neither be confirmed nor disproved by scientists. However, many creationist beliefs can be framed as testable predictions about phenomena such as the age of the Earth, its geological history and the origins, distributions and relationships of living organisms found on it. Early science incorporated elements of these beliefs, but as science developed these beliefs were gradually falsified and were replaced with understandings based on accumulated and reproducible evidence that often allows the accurate prediction of future results. Some scientists, such as Stephen Jay Gould, consider science and religion to be two compatible and complementary fields, with authorities in distinct areas of human experience, so-called non-overlapping magisteria. This view is also held by many theologians, who believe that ultimate origins and meaning are addressed by religion, but favor verifiable scientific explanations of natural phenomena over those of creationist beliefs. Other scientists, such as Richard Dawkins, reject the non-overlapping magisteria and argue that, in disproving literal interpretations of creationists, the scientific method also undermines religious texts as a source of truth. Irrespective of this diversity in viewpoints, since creationist beliefs are not supported by empirical evidence, the scientific consensus is that any attempt to teach creationism as science should be rejected. 
See also Biblical inerrancy Biogenesis Evolution of complexity Flying Spaghetti Monster History of creationism Religious cosmology External links "Creationism" at the Stanford Encyclopedia of Philosophy by Michael Ruse "How Creationism Works" at HowStuffWorks by Julia Layton "TIMELINE: Evolution, Creationism and Intelligent Design" Focuses on major historical and recent events in the scientific and political debate by Warren D. Allmon, Director of the Museum of the Earth "What is creationism?" at talk.origins by Mark Isaak "The Creation/Evolution Continuum" by Eugenie Scott "15 Answers to Creationist Nonsense" by John Rennie, editor in chief of Scientific American magazine "Race, Evolution and the Science of Human Origins" by Allison Hopper, Scientific American (July 5, 2021). Human Timeline (Interactive) Smithsonian, National Museum of Natural History (August 2016)
5329
https://en.wikipedia.org/wiki/History%20of%20Chad
History of Chad
Chad, officially the Republic of Chad, is a landlocked country in Central Africa. It borders Libya to the north, Sudan to the east, the Central African Republic to the south, Cameroon and Nigeria to the southwest, and Niger to the west. Due to its distance from the sea and its largely desert climate, the country is sometimes referred to as the "Dead Heart of Africa". Prehistory The territory now known as Chad possesses some of the richest archaeological sites in Africa. Michel Brunet found a hominid skull there that is more than 7 million years old, the oldest discovered anywhere in the world; it has been given the name Sahelanthropus tchadensis. In 1996, Brunet had unearthed a hominid jaw which he named Australopithecus bahrelghazali and unofficially dubbed Abel. Using beryllium-based radiometric dating, it was dated to circa 3.6 million years ago. During the 7th millennium BC, the northern half of Chad was part of a broad expanse of land, stretching from the Indus River in the east to the Atlantic Ocean in the west, in which ecological conditions favored early human settlement. Rock art of the "Round Head" style, found in the Ennedi region, has been dated to before the 7th millennium BC and, because of the tools with which the rocks were carved and the scenes they depict, may represent the oldest evidence in the Sahara of Neolithic industries. Many of the pottery-making and Neolithic activities in Ennedi date back further than any of those of the Nile Valley to the east. In the prehistoric period, Chad was much wetter than it is today, as evidenced by large game animals depicted in rock paintings in the Tibesti and Borkou regions. Recent linguistic research suggests that all of Africa's major language groupings south of the Sahara Desert (except Khoisan, which is not considered a valid genetic grouping anyway), i.e. the Afro-Asiatic, Nilo-Saharan and Niger–Congo phyla, originated in prehistoric times in a narrow band between Lake Chad and the Nile Valley. The origins of Chad's peoples, however, remain unclear. Several of the proven archaeological sites have been only partially studied, and other sites of great potential have yet to be mapped. Era of Empires (AD 900–1900) At the end of the 1st millennium AD, the formation of states began across central Chad in the sahelian zone between the desert and the savanna. For almost the next 1,000 years, these states, their relations with each other, and their effects on the peoples who lived in stateless societies along their peripheries dominated Chad's political history. Recent research suggests that indigenous Africans founded these states, not migrating Arabic-speaking groups, as was believed previously. Nonetheless, immigrants, Arabic-speaking or otherwise, played a significant role, along with Islam, in the formation and early evolution of these states. Most states began as kingdoms, in which the king was considered divine and endowed with temporal and spiritual powers. All states were militaristic (or they did not survive long), but none was able to expand far into southern Chad, where forests and the tsetse fly complicated the use of cavalry. Control over the trans-Saharan trade routes that passed through the region formed the economic basis of these kingdoms. Although many states rose and fell, the most important and durable of the empires were Kanem–Bornu, Baguirmi, and Ouaddai, according to most written sources (mainly court chronicles and writings of Arab traders and travelers). 
Kanem–Bornu The Kanem Empire originated in the 9th century AD to the northeast of Lake Chad. Historians agree that the leaders of the new state were ancestors of the Kanembu people. Toward the end of the 11th century the Sayfawa king (or mai, the title of the Sayfawa rulers) Hummay converted to Islam. In the following century the Sayfawa rulers expanded southward into Kanem, where their first capital, Njimi, was to rise. Kanem's expansion peaked during the long and energetic reign of Mai Dunama Dabbalemi (c. 1221–1259). By the end of the 14th century, internal struggles and external attacks had torn Kanem apart. Finally, around 1396 the Bulala invaders forced Mai Umar Idrismi to abandon Njimi and move the Kanembu people to Bornu on the western edge of Lake Chad. Over time, the intermarriage of the Kanembu and Bornu peoples created a new people and language, the Kanuri, and a new capital, Ngazargamu, was founded. Kanem–Bornu peaked during the reign of the outstanding statesman Mai Idris Aluma (c. 1571–1603). Aluma is remembered for his military skills, administrative reforms, and Islamic piety. The administrative reforms and military brilliance of Aluma sustained the empire until the mid-17th century, when its power began to fade. By the early 19th century, Kanem–Bornu was clearly an empire in decline, and in 1808 Fulani warriors conquered Ngazargamu. Bornu survived, but the Sayfawa dynasty ended in 1846 and the Empire itself fell in 1893. Baguirmi and Ouaddai The Kingdom of Baguirmi, located southeast of Kanem–Bornu, was founded in the late 15th or early 16th century, and adopted Islam in the reign of Abdullah IV (1568-98). Baguirmi was tributary to Kanem–Bornu at various points in the 17th and 18th centuries, and then to Ouaddai in the 19th century. In 1893, Baguirmi sultan Abd ar Rahman Gwaranga surrendered the territory to France, and it became a French protectorate. The Ouaddai Kingdom, east of Kanem–Bornu, was established in the early 16th century by Tunjur rulers. In the 1630s, Abd al Karim invaded and established an Islamic sultanate. Among its most influential rulers over the next three centuries were Muhammad Sabun, who controlled a new trade route to the north and established a currency during the early 19th century, and Muhammad Sharif, whose military campaigns in the mid-19th century fended off an assimilation attempt from Darfur, conquered Baguirmi, and successfully resisted French colonization. However, Ouaddai lost its independence to France after a war from 1909 to 1912. Colonialism (1900–1940) The French first invaded Chad in 1891, establishing their authority through military expeditions primarily against the Muslim kingdoms. The decisive colonial battle for Chad was fought on April 22, 1900 at the Battle of Kousséri between forces of French Major Amédée-François Lamy and forces of the Sudanese warlord Rabih az-Zubayr. Both leaders were killed in the battle. In 1905, administrative responsibility for Chad was placed under a governor-general stationed at Brazzaville, capital of French Equatorial Africa (FEA). Chad did not have a separate colonial status until 1920, when it was placed under a lieutenant-governor stationed in Fort-Lamy (today N'Djamena). Two fundamental themes dominated Chad's colonial experience with the French: an absence of policies designed to unify the territory and an exceptionally slow pace of modernization. 
In the French scale of priorities, the colony of Chad ranked near the bottom, and the French came to perceive Chad primarily as a source of raw cotton and untrained labour to be used in the more productive colonies to the south. Throughout the colonial period, large areas of Chad were never governed effectively: in the huge BET Prefecture, the handful of French military administrators usually left the people alone, and in central Chad, French rule was only slightly more substantive. In practice, France managed to govern effectively only the south. Decolonization (1940–1960) During World War II, Chad was the first French colony to rejoin the Allies (August 26, 1940) after the defeat of France by Germany. Under the administration of Félix Éboué, France's first black colonial governor, a military column, commanded by Colonel Philippe Leclerc de Hauteclocque, and including two battalions of Sara troops, moved north from N'Djamena (then Fort Lamy) to engage Axis forces in Libya, where, in partnership with the British Army's Long Range Desert Group, they captured Kufra. On 21 January 1942, N'Djamena was bombed by a German aircraft. After the war ended, local parties started to develop in Chad. The first to be born was the radical Chadian Progressive Party (PPT) in February 1947, initially headed by Panamanian-born Gabriel Lisette, but from 1959 headed by François Tombalbaye. The more conservative Chadian Democratic Union (UDT) was founded in November 1947 and represented French commercial interests and a bloc of traditional leaders composed primarily of Muslim and Ouaddaïan nobility. The confrontation between the PPT and UDT was more than simply ideological; it represented different regional identities, with the PPT representing the Christian and animist south and the UDT the Islamic north. The PPT won the May 1957 pre-independence elections thanks to a greatly expanded franchise, and Lisette led the government of the Territorial Assembly until he lost a confidence vote on 11 February 1959. After a referendum on territorial autonomy on 28 September 1958, French Equatorial Africa was dissolved, and its four constituent states – Gabon, Congo (Brazzaville), the Central African Republic, and Chad – became autonomous members of the French Community from 28 November 1958. Following Lisette's fall in February 1959, the opposition leaders Gontchome Sahoulba and Ahmed Koulamallah could not form a stable government, so the PPT was again asked to form an administration, which it did under the leadership of François Tombalbaye on 26 March 1959. On 12 July 1960 France agreed to Chad becoming fully independent. On 11 August 1960, Chad became an independent country and François Tombalbaye became its first president. The Tombalbaye era (1960–1975) One of the most prominent aspects of Tombalbaye's rule was his authoritarianism and distrust of democracy. As early as January 1962 he banned all political parties except his own PPT and immediately began concentrating all power in his own hands. His treatment of opponents, real or imagined, was extremely harsh, filling the prisons with thousands of political prisoners. Even more damaging was his constant discrimination against the central and northern regions of Chad, where the southern Chadian administrators came to be perceived as arrogant and incompetent. This resentment finally exploded in a tax revolt on September 2, 1965, in Guéra Prefecture, causing 500 deaths. 
The following year saw the birth in Sudan of the National Liberation Front of Chad (FROLINAT), created to oust Tombalbaye and end southern dominance by force. It was the start of a bloody civil war. Tombalbaye resorted to calling in French troops; while moderately successful, they were not fully able to quell the insurgency. More successful was his decision to break with the French and seek friendly ties with the Libyan leader Muammar Gaddafi, which deprived the rebels of their principal source of supplies. But even as he reported some success against the rebels, Tombalbaye began behaving more and more irrationally and brutally, steadily eroding his support among the southern elites, who dominated all key positions in the army, the civil service and the ruling party. As a consequence, on April 13, 1975, several units of N'Djamena's gendarmerie killed Tombalbaye during a coup. Military rule (1975–1978) The coup d'état that terminated Tombalbaye's government received an enthusiastic response in N'Djamena. The southerner General Félix Malloum emerged early as the chairman of the new junta. The new military leaders were unable to retain for long the popularity that they had gained through their overthrow of Tombalbaye. Malloum proved unable to cope with FROLINAT and ultimately decided that his only chance lay in co-opting some of the rebels: in 1978 he allied himself with the insurgent leader Hissène Habré, who entered the government as prime minister. Civil war (1979–1982) Internal dissent within the government led Prime Minister Habré to send his forces against Malloum's national army in the capital in February 1979. Malloum was ousted from the presidency, but the resulting civil war amongst the 11 emergent factions was so widespread that it rendered the central government largely irrelevant. At that point, other African governments decided to intervene. A series of four international conferences held first under Nigerian and then Organization of African Unity (OAU) sponsorship attempted to bring the Chadian factions together. At the fourth conference, held in Lagos, Nigeria, in August 1979, the Lagos Accord was signed. This accord established a transitional government pending national elections. In November 1979, the Transitional Government of National Unity (GUNT) was created with a mandate to govern for 18 months. Goukouni Oueddei, a northerner, was named president; Colonel Kamougué, a southerner, Vice President; and Habré, Minister of Defense. This coalition proved fragile; in January 1980, fighting broke out again between Goukouni's and Habré's forces. With assistance from Libya, Goukouni regained control of the capital and other urban centers by year's end. However, Goukouni's January 1981 statement that Chad and Libya had agreed to work for the realization of complete unity between the two countries generated intense international pressure, and Goukouni subsequently called for the complete withdrawal of external forces. The Habré era (1982–1990) Libya's partial withdrawal to the Aouzou Strip in northern Chad cleared the way for Habré's forces to enter N'Djamena in June 1982. French troops and an OAU peacekeeping force of 3,500 Nigerian, Senegalese, and Zairian troops (partially funded by the United States) remained neutral during the conflict. Habré continued to face armed opposition on various fronts, and was brutal in his repression of suspected opponents, massacring and torturing many during his rule. 
In the summer of 1983, GUNT forces launched an offensive against government positions in northern and eastern Chad with heavy Libyan support. In response to Libya's direct intervention, French and Zairian forces intervened to defend Habré, pushing Libyan and rebel forces north of the 16th parallel. In September 1984, the French and the Libyan governments announced an agreement for the mutual withdrawal of their forces from Chad. By the end of the year, all French and Zairian troops were withdrawn. Libya did not honor the withdrawal accord, and its forces continued to occupy the northern third of Chad. Rebel commando groups (Codos) in southern Chad were broken up by government massacres in 1984. In 1985 Habré briefly reconciled with some of his opponents, including the Democratic Front of Chad (FDT) and the Coordinating Action Committee of the Democratic Revolutionary Council. Goukouni also began to rally toward Habré, and with his support Habré successfully expelled Libyan forces from most of Chadian territory. A cease-fire between Chad and Libya held from 1987 to 1988, and negotiations over the next several years led to the 1994 International Court of Justice decision granting Chad sovereignty over the Aouzou Strip, effectively ending Libyan occupation. The Idriss Déby era (1990–2021) Rise to power Rivalry between Hadjerai, Zaghawa and Gorane groups within the government grew in the late 1980s. In April 1989, Idriss Déby, one of Habré's leading generals and a Zaghawa, defected and fled to Darfur in Sudan, from which he mounted a Zaghawa-supported series of attacks on Habré (a Gorane). In December 1990, with Libyan assistance and no opposition from French troops stationed in Chad, Déby's forces successfully marched on N'Djamena. After three months of provisional government, Déby's Patriotic Salvation Movement (MPS) approved a national charter on February 28, 1991, with Déby as president. During the next two years, Déby faced at least two coup attempts. Government forces clashed violently with rebel forces, including the Movement for Democracy and Development (MDD), the National Revival Committee for Peace and Democracy (CSNPD), the Chadian National Front (FNT) and the Western Armed Forces (FAO), near Lake Chad and in southern regions of the country. Earlier French demands for the country to hold a National Conference resulted in the gathering of 750 delegates representing political parties (which were legalized in 1992), the government, trade unions and the army to discuss the creation of a pluralist democratic regime. However, unrest continued, sparked in part by large-scale killings of civilians in southern Chad. The CSNPD, led by Kette Moise, and other southern groups entered into a peace agreement with government forces in 1994, which later broke down. Two new groups, the Armed Forces for a Federal Republic (FARF) led by former Kette ally Laokein Barde and the Democratic Front for Renewal (FDR), and a reformulated MDD clashed with government forces from 1994 to 1995. Multiparty elections Talks with political opponents in early 1996 did not go well, but Déby announced his intent to hold presidential elections in June. Déby won the country's first multi-party presidential elections with support in the second round from opposition leader Kebzabo, defeating General Kamougué (leader of the 1975 coup against Tombalbaye). Déby's MPS party won 63 of 125 seats in the January 1997 legislative elections. 
International observers noted numerous serious irregularities in presidential and legislative election proceedings. By mid-1997 the government signed peace deals with FARF and the MDD leadership and succeeded in cutting off the groups from their rear bases in the Central African Republic and Cameroon. Agreements also were struck with rebels from the National Front of Chad (FNT) and the Movement for Social Justice and Democracy in October 1997. However, peace was short-lived, as FARF rebels clashed with government soldiers, finally surrendering to government forces in May 1998. Barde was killed in the fighting, as were hundreds of other southerners, most of them civilians. From October 1998, Chadian Movement for Justice and Democracy (MDJT) rebels, led by Youssuf Togoimi until his death in September 2002, skirmished with government troops in the Tibesti region, resulting in hundreds of civilian, government, and rebel casualties, but little ground won or lost. No active armed opposition emerged in other parts of Chad, although Kette Moise, following senior postings at the Ministry of Interior, mounted a small-scale local operation near Moundou which was quickly and violently suppressed by government forces in late 2000. Déby, in the mid-1990s, gradually restored basic functions of government and entered into agreements with the World Bank and IMF to carry out substantial economic reforms. Oil exploitation in the southern Doba region began in June 2000, with World Bank Board approval to finance a small portion of a project, the Chad-Cameroon Petroleum Development Project, aimed at transporting Chadian crude through a 1,000-km buried pipeline through Cameroon to the Gulf of Guinea. The project established unique mechanisms for World Bank, private sector, government, and civil society collaboration to guarantee that future oil revenues benefit local populations and result in poverty alleviation. Success of the project depended on multiple monitoring efforts to ensure that all parties kept their commitments. These "unique" mechanisms for monitoring and revenue management have faced intense criticism from the beginning. Debt relief was accorded to Chad in May 2001. Déby won a flawed 63% first-round victory in the May 2001 presidential elections after legislative elections were postponed until spring 2002. After they accused the government of fraud, six opposition leaders were arrested (twice) and one opposition party activist was killed following the announcement of the election results. However, despite claims of government corruption, favoritism toward Zaghawas, and abuses by the security forces, opposition party and labor union calls for general strikes and more active demonstrations against the government have been unsuccessful. Despite movement toward democratic reform, power remains in the hands of a northern ethnic oligarchy. In 2003, Chad began receiving refugees from the Darfur region of western Sudan. More than 200,000 refugees fled the fighting between two rebel groups and government-supported militias known as Janjaweed. A number of border incidents led to the Chadian-Sudanese War. Oil production and military improvement Chad became an oil producer in 2003. To avoid the resource curse and corruption, elaborate plans sponsored by the World Bank were drawn up. The plan provided for transparency in payments and required that 80% of the money from oil exports be spent on five priority development sectors, the two most important being education and healthcare. 
However, money began to be diverted to the military even before the civil war broke out. In 2006, when the civil war escalated, Chad abandoned the earlier World Bank-sponsored economic plans and added "national security" as a priority development sector; money from this sector was used to strengthen the military. During the civil war, more than 600 million dollars were used to buy fighter jets, attack helicopters, and armored personnel carriers. Chad earned between 10 and 11 billion dollars from oil production, and an estimated 4 billion dollars was invested in the army. War in the East The war started on December 23, 2005, when the government of Chad declared a state of war with Sudan and called for the citizens of Chad to mobilize themselves against the "common enemy," which the Chadian government identified as militants of the Rally for Democracy and Liberty (RDL), Chadian rebels backed by the Sudanese government, and Sudanese militiamen. Militants attacked villages and towns in eastern Chad, stealing cattle, murdering citizens, and burning houses. Over 200,000 refugees from the Darfur region of western Sudan claimed asylum in eastern Chad. Chadian president Idriss Déby accused Sudanese President Omar Hasan Ahmad al-Bashir of trying to "destabilize our country, to drive our people into misery, to create disorder and export the war from Darfur to Chad." An attack on the Chadian town of Adre near the Sudanese border led to the deaths of either one hundred rebels, as reported by every news source other than CNN, or three hundred. The Sudanese government was blamed for the attack, which was the second in the region in three days, but Sudanese foreign ministry spokesman Jamal Mohammed Ibrahim denied any Sudanese involvement: "We are not for any escalation with Chad. We technically deny involvement in Chadian internal affairs." This attack was the final straw that led to the declaration of war by Chad and the alleged deployment of the Chadian air force into Sudanese airspace, which the Chadian government denies. An attack on N'Djamena was defeated on April 13, 2006 in the Battle of N'Djamena. The president stated on national radio that the situation was under control, but residents, diplomats and journalists reportedly heard gunfire. On November 25, 2006, rebels captured the eastern town of Abeche, capital of the Ouaddaï Region and center for humanitarian aid to the Darfur region in Sudan. On the same day, a separate rebel group, the Rally of Democratic Forces, captured Biltine. On November 26, 2006, the Chadian government claimed to have recaptured both towns, although rebels still claimed control of Biltine. Government buildings and humanitarian aid offices in Abeche were said to have been looted. The Chadian government denied a warning issued by the French Embassy in N'Djamena that a group of rebels was making its way through the Batha Prefecture in central Chad. Chad insists that both rebel groups are supported by the Sudanese government. International orphanage scandal Nearly 100 children at the center of an international scandal that left them stranded at an orphanage in remote eastern Chad returned home on March 14, 2008, after nearly five months. The 97 children were taken from their homes in October 2007 by a then-obscure French charity, Zoé's Ark, which claimed they were orphans from Sudan's war-torn Darfur region. 
Rebel attack on N'Djamena On Friday, February 1, 2008, rebels of an opposition alliance led by Mahamat Nouri, a former defense minister, and Timane Erdimi, a nephew of Idriss Déby who had been his chief of staff, attacked the Chadian capital of N'Djamena, even surrounding the Presidential Palace. Idriss Déby and government troops fought back. French forces flew in ammunition for Chadian government troops but took no active part in the fighting. The UN said that up to 20,000 people left the region, taking refuge in nearby Cameroon and Nigeria. Hundreds of people were killed, mostly civilians. The rebels accused Déby of corruption and of embezzling millions in oil revenue. While many Chadians may share that assessment, the uprising appears to be a power struggle within the elite that has long controlled Chad. The French government believes that the opposition has regrouped east of the capital. Déby has blamed Sudan for the unrest in Chad. Regional interventionism During the Déby era, Chad intervened in conflicts in Mali, the Central African Republic, Niger and Nigeria. In 2013, Chad sent 2,000 troops to help France in Operation Serval during the Mali War. Later the same year, Chad sent 850 troops to the Central African Republic to support the MISCA peacekeeping operation; those troops withdrew in April 2014 after allegations of human rights violations. During the Boko Haram insurgency, Chad repeatedly sent troops to assist the fight against Boko Haram in Niger and Nigeria. In August 2018, rebel fighters of the Military Command Council for the Salvation of the Republic (CCMSR) attacked government forces in northern Chad. Chad experienced threats from jihadists fleeing the Libyan conflict. Chad had been an ally of the West in the fight against Islamist militants in West Africa. In January 2019, after 47 years, Chad restored diplomatic relations with Israel. The restoration was announced during a visit to N'Djamena by Israeli Prime Minister Benjamin Netanyahu. After Idriss Déby (2021–present) In April 2021, Chad's army announced that President Idriss Déby had died of his injuries following clashes with rebels in the north of the country. Idriss Déby had ruled the country for more than 30 years, since 1990. It was also announced that a military council led by Déby's son Mahamat Idriss Déby, a 37-year-old four-star general, would govern for the next 18 months. See also 2010 Sahel famine History of Africa List of heads of government of Chad List of heads of state of Chad List of human evolution fossils Politics of Chad Neolithic Subpluvial Further reading Gibbons, Ann. The First Human: The Race to Discover our Earliest Ancestor. Anchor Books (2007). References External links The Library of Congress - A Country Study: Chad
5330
https://en.wikipedia.org/wiki/Geography%20of%20Chad
Geography of Chad
Chad is one of the 47 landlocked countries in the world and is located in North Central Africa, measuring 1,284,000 square kilometers, nearly twice the size of France and slightly more than three times the size of California. Most of its ethnically and linguistically diverse population lives in the south, with densities ranging from 54 persons per square kilometer in the Logone River basin to 0.1 persons in the northern B.E.T. (Borkou-Ennedi-Tibesti) desert region, which itself is larger than France. The capital city of N'Djaména, situated at the confluence of the Chari and Logone Rivers, is cosmopolitan in nature, with a current population in excess of 700,000 people. Chad has four climatic zones. The northernmost Saharan zone averages less than of rainfall annually. The sparse human population is largely nomadic, with some livestock, mostly small ruminants and camels. The central Sahelian zone receives between rainfall and has vegetation ranging from grass/shrub steppe to thorny, open savanna. The southern zone, often referred to as the Sudan zone, receives between , with woodland savanna and deciduous forests for vegetation. Rainfall in the Guinea zone, located in Chad's southwestern tip, ranges between . The country's topography is generally flat, with the elevation gradually rising as one moves north and east away from Lake Chad. The highest point in Chad is Emi Koussi, a mountain that rises in the northern Tibesti Mountains. The Ennedi Plateau and the Ouaddaï highlands in the east complete the image of a gradually sloping basin, which descends towards Lake Chad. There are also central highlands in the Guera region rising to . Lake Chad is the second largest lake in west Africa and is one of the most important wetlands on the continent. Home to 120 species of fish and at least that many species of birds, the lake has shrunk dramatically in the last four decades due to increased water usage from an expanding population and low rainfall. Bordered by Chad, Niger, Nigeria, and Cameroon, Lake Chad currently covers only 1,350 square kilometers, down from 25,000 square kilometers in 1963. The Chari and Logone Rivers, both of which originate in the Central African Republic and flow northward, provide most of the surface water entering Lake Chad. Geographical placement Located in north-central Africa, Chad stretches for about 1,800 kilometers from its northernmost point to its southern boundary. Except in the far northwest and south, where its borders converge, Chad's average width is about 800 kilometers. Its area of 1,284,000 square kilometers is roughly equal to the combined areas of Idaho, Wyoming, Utah, Nevada, and Arizona. Chad's neighbors include Libya to the north, Niger and Nigeria to the west, Sudan to the east, the Central African Republic to the south, and Cameroon to the southwest. Chad exhibits two striking geographical characteristics. First, the country is landlocked. N'Djamena, the capital, is located more than 1,100 kilometers northeast of the Atlantic Ocean; Abéché, a major city in the east, lies 2,650 kilometers from the Red Sea; and Faya-Largeau, a much smaller but strategically important center in the north, is in the middle of the Sahara Desert, 1,550 kilometers from the Mediterranean Sea. These vast distances from the sea have had a profound impact on Chad's historical and contemporary development. 
The second noteworthy characteristic is that the country borders on very different parts of the African continent: North Africa, with its Islamic culture and economic orientation toward the Mediterranean Basin; and West Africa, with its diverse religions and cultures and its history of highly developed states and regional economies. Chad also borders Northeast Africa, oriented toward the Nile Valley and the Red Sea region - and Central or Equatorial Africa, some of whose people have retained classical African religions while others have adopted Christianity, and whose economies were part of the great Congo River system. Although much of Chad's distinctiveness comes from this diversity of influences, since independence the diversity has also been an obstacle to the creation of a national identity. Land Although Chadian society is economically, socially, and culturally fragmented, the country's geography is unified by the Lake Chad Basin. Once a huge inland sea (the Pale-Chadian Sea) whose only remnant is shallow Lake Chad, this vast depression extends west into Nigeria and Niger. The larger, northern portion of the basin is bounded within Chad by the Tibesti Mountains in the northwest, the Ennedi Plateau in the northeast, the Ouaddaï Highlands in the east along the border with Sudan, the Guéra Massif in central Chad, and the Mandara Mountains along Chad's southwestern border with Cameroon. The smaller, southern part of the basin falls almost exclusively in Chad. It is delimited in the north by the Guéra Massif, in the south by highlands 250 kilometers south of the border with Central African Republic, and in the southwest by the Mandara Mountains. Lake Chad, located in the southwestern part of the basin at an altitude of 282 meters, surprisingly does not mark the basin's lowest point; instead, this is found in the Bodele and Djourab regions in the north-central and northeastern parts of the country, respectively. This oddity arises because the great stationary dunes (ergs) of the Kanem region create a dam, preventing lake waters from flowing to the basin's lowest point. At various times in the past, and as late as the 1870s, the Bahr el Ghazal Depression, which extends from the northeastern part of the lake to the Djourab, acted as an overflow canal; since independence, climatic conditions have made overflows impossible. North and northeast of Lake Chad, the basin extends for more than 800 kilometers, passing through regions characterized by great rolling dunes separated by very deep depressions. Although vegetation holds the dunes in place in the Kanem region, farther north they are bare and have a fluid, rippling character. From its low point in the Djourab, the basin then rises to the plateaus and peaks of the Tibesti Mountains in the north. The summit of this formation—as well as the highest point in the Sahara Desert—is Emi Koussi, a dormant volcano that reaches 3,414 meters above sea level. The basin's northeastern limit is the Ennedi Plateau, whose limestone bed rises in steps etched by erosion. East of the lake, the basin rises gradually to the Ouaddaï Highlands, which mark Chad's eastern border and also divide the Chad and Nile watersheds. These highland areas are part of the East Saharan montane xeric woodlands ecoregion. Southeast of Lake Chad, the regular contours of the terrain are broken by the Guéra Massif, which divides the basin into its northern and southern parts. 
South of the lake lie the floodplains of the Chari and Logone rivers, much of which are inundated during the rainy season. Farther south, the basin floor slopes upward, forming a series of low sand and clay plateaus, called koros, which eventually climb to 615 meters above sea level. South of the Chadian border, the koros divide the Lake Chad Basin from the Ubangi-Zaire river system. Water systems Permanent streams do not exist in northern or central Chad. Following infrequent rains in the Ennedi Plateau and Ouaddaï Highlands, water may flow through depressions called enneris and wadis. Often the result of flash floods, such streams usually dry out within a few days as the remaining puddles seep into the sandy clay soil. The most important of these streams is the Batha, which in the rainy season carries water west from the Ouaddaï Highlands and the Guéra Massif to Lake Fitri. Chad's major rivers are the Chari and the Logone and their tributaries, which flow from the southeast into Lake Chad. Both river systems rise in the highlands of Central African Republic and Cameroon, regions that receive more than 1,250 millimeters of rainfall annually. Fed by rivers of Central African Republic, as well as by the Bahr Salamat, Bahr Aouk, and Bahr Sara rivers of southeastern Chad, the Chari River is about 1,200 kilometers long. From its origins near the city of Sarh, the middle course of the Chari makes its way through swampy terrain; the lower Chari is joined by the Logone River near N'Djamena. The Chari's volume varies greatly, from 17 cubic meters per second during the dry season to 340 cubic meters per second during the wettest part of the year. The Logone River is formed by tributaries flowing from Cameroon and Central African Republic. Both shorter and smaller in volume than the Chari, it flows northeast for 960 kilometers; its volume ranges from five to eighty-five cubic meters per second. At N'Djamena the Logone empties into the Chari, and the combined rivers flow together for thirty kilometers through a large delta and into Lake Chad. At the end of the rainy season in the fall, the river overflows its banks and creates a huge floodplain in the delta. The seventh largest lake in the world (and the fourth largest in Africa), Lake Chad is located in the sahelian zone, a region just south of the Sahara Desert. The Chari River contributes 95 percent of Lake Chad's water, an average annual volume of 40 billion cubic meters, 95% of which is lost to evaporation. The size of the lake is determined by rains in the southern highlands bordering the basin and by temperatures in the Sahel. Fluctuations in both cause the lake to change dramatically in size, from 9,800 square kilometers in the dry season to 25,500 at the end of the rainy season. Lake Chad also changes greatly in size from one year to another. In 1870 its maximum area was 28,000 square kilometers. The measurement dropped to 12,700 in 1908. In the 1940s and 1950s, the lake remained small, but it grew again to 26,000 square kilometers in 1963. The droughts of the late 1960s, early 1970s, and mid-1980s caused Lake Chad to shrink once again, however. The only other lakes of importance in Chad are Lake Fitri, in Batha Prefecture, and Lake Iro, in the marshy southeast. Climate The Lake Chad Basin embraces a great range of tropical climates from north to south, although most of these climates tend to be dry. Apart from the far north, most regions are characterized by a cycle of alternating rainy and dry seasons. 
In any given year, the duration of each season is determined largely by the positions of two great air masses—a maritime mass over the Atlantic Ocean to the southwest and a much drier continental mass. During the rainy season, winds from the southwest push the moister maritime system north over the African continent where it meets and slips under the continental mass along a front called the "intertropical convergence zone". At the height of the rainy season, the front may reach as far as Kanem Prefecture. By the middle of the dry season, the intertropical convergence zone moves south of Chad, taking the rain with it. This weather system contributes to the formation of three major regions of climate and vegetation. Saharan region The Saharan region covers roughly the northern half of the country, including Borkou-Ennedi-Tibesti Prefecture along with the northern parts of Kanem, Batha, and Biltine prefectures. Much of this area receives only traces of rain during the entire year; at Faya-Largeau, for example, annual rainfall averages less than , and there are nearly 3800 hours of sunshine. Scattered small oases and occasional wells provide water for a few date palms or small plots of millet and garden crops. In much of the north, the average daily maximum temperature is about during January, the coolest month of the year, and about during May, the hottest month. On occasion, strong winds from the northeast produce violent sandstorms. In northern Biltine Prefecture, a region called the Mortcha plays a major role in animal husbandry. Dry for eight months of the year, it receives or more of rain, mostly during July and August. A carpet of green springs from the desert during this brief wet season, attracting herders from throughout the region who come to pasture their cattle and camels. Because very few wells and springs have water throughout the year, the herders leave with the end of the rains, turning over the land to the antelopes, gazelles, and ostriches that can survive with little groundwater. Northern Chad averages over 3500 hours of sunlight per year, the south somewhat less. Sahelian region The semiarid sahelian zone, or Sahel, forms a belt about wide that runs from Lac and Chari-Baguirmi prefectures eastward through Guéra, Ouaddaï, and northern Salamat prefectures to the Sudanese frontier. The climate in this transition zone between the desert and the southern sudanian zone is divided into a rainy season (from June to September) and a dry period (from October to May). In the northern Sahel, thorny shrubs and acacia trees grow wild, while date palms, cereals, and garden crops are raised in scattered oases. Outside these settlements, nomads tend their flocks during the rainy season, moving southward as forage and surface water disappear with the onset of the dry part of the year. The central Sahel is characterized by drought-resistant grasses and small woods. Rainfall is more abundant there than in the Saharan region. For example, N'Djamena records a maximum annual average rainfall of , while Ouaddaï Prefecture receives just a bit less. During the hot season, in April and May, maximum temperatures frequently rise above . In the southern part of the Sahel, rainfall is sufficient to permit crop production on unirrigated land, and millet and sorghum are grown. Agriculture is also common in the marshlands east of Lake Chad and near swamps or wells. Many farmers in the region combine subsistence agriculture with the raising of cattle, sheep, goats, and poultry. 
Sudanian region The humid sudanian zone lies south of the Sahel and includes the southern prefectures of Mayo-Kebbi, Tandjilé, Logone Occidental, Logone Oriental, Moyen-Chari, and southern Salamat. Between April and October, the rainy season brings between of precipitation. Temperatures are high throughout the year. Daytime readings in Moundou, the major city in the southwest, range from in the middle of the cool season in January to about in the hot months of March, April, and May. The sudanian region is predominantly East Sudanian savanna, or plains covered with a mixture of tropical or subtropical grasses and woodlands. The growth is lush during the rainy season but turns brown and dormant during the five-month dry season between November and March. Over a large part of the region, however, natural vegetation has yielded to agriculture. 2010 drought On 22 June, the temperature reached in Faya, breaking a record set in 1961 at the same location. Similar temperature rises were also reported in Niger, which began to enter a famine situation. On 26 July the heat reached near-record levels over Chad and Niger. Area Area: total: 1.284 million km2 land: 1,259,200 km2 water: 24,800 km2 Area - comparative: Canada: smaller than the Northwest Territories US: slightly more than three times the size of California Boundaries Land boundaries: total: 6,406 km border countries: Cameroon 1,116 km, Central African Republic 1,556 km, Libya 1,050 km, Niger 1,196 km, Nigeria 85 km, Sudan 1,403 km Coastline: 0 km (landlocked) Maritime claims: none (landlocked) Elevation extremes: lowest point: Bodélé Depression 160 m highest point: Emi Koussi 3,415 m Land use and resources Natural resources: petroleum, uranium, natron, kaolin, fish (Chari River, Logone River), gold, limestone, sand and gravel, salt Land use: arable land: 3.89% permanent crops: 0.03% other: 96.08% (2012) Irrigated land: 302.7 km2 (2003) Total renewable water resources: 43 km3 (2011) Freshwater withdrawal (domestic/industrial/agricultural): total: 0.88 km3/yr (12%/12%/76%) per capita: 84.81 m3/yr (2005) Environmental issues Natural hazards: hot, dry, dusty harmattan winds occur in the north; periodic droughts; locust plagues Environment - current issues: inadequate supplies of potable water; improper waste disposal in rural areas contributes to soil and water pollution; desertification See also 2010 Sahel famine Extreme points This is a list of the extreme points of Chad, the points that are farther north, south, east or west than any other location. Northernmost point - an unnamed location on the border with Libya, Borkou-Ennedi-Tibesti region Easternmost point - the northern section of the Chad-Sudan border, Borkou-Ennedi-Tibesti region* Southernmost point - unnamed location on the border with the Central African Republic at a confluence in the Lébé river, Logone Oriental region Westernmost point - unnamed location west of the town of Kanirom and immediately north of Lake Chad, Lac Region *Note: technically Chad does not have an easternmost point, the easternmost section of the border being formed by the 24th meridian of east longitude References Sources External links Geo-links for Geography of Chad. Detailed map of Chad from www.izf.net
5331
https://en.wikipedia.org/wiki/Demographics%20of%20Chad
Demographics of Chad
The people of Chad speak more than 100 languages and divide themselves into many ethnic groups. However, language and ethnicity are not the same. Moreover, neither element can be tied to a particular physical type. Although the possession of a common language shows that its speakers have lived together and have a common history, peoples also change languages. This is particularly so in Chad, where the openness of the terrain, marginal rainfall, frequent drought and famine, and low population densities have encouraged physical and linguistic mobility. Slave raids among non-Muslim peoples, internal slave trade, and exports of captives northward from the ninth to the twentieth centuries also have resulted in language changes. Anthropologists view ethnicity as being more than genetics. Like language, ethnicity implies a shared heritage, partly economic, where people of the same ethnic group may share a livelihood, and partly social, taking the form of shared ways of doing things and organizing relations among individuals and groups. Ethnicity also involves a cultural component made up of shared values and a common worldview. Like language, ethnicity is not immutable. Shared ways of doing things change over time and alter a group's perception of its own identity. Not only do the social aspects of ethnic identity change but the biological composition (or gene pool) also may change over time. Although most ethnic groups emphasize intermarriage, people are often proscribed from seeking partners among close relatives—a prohibition that promotes biological variation. In all groups, the departure of some individuals or groups and the integration of others also changes the biological component. The Chadian government has avoided official recognition of ethnicity. With the exception of a few surveys conducted shortly after independence, little data were available on this important aspect of Chadian society. Nonetheless, ethnic identity was a significant component of life in Chad. The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa. Chad's languages fall into ten major groups, each of which belongs to either the Nilo-Saharan, Afro-Asiatic, or Niger–Congo language family. These represent three of the four major language families in Africa; only the Khoisan languages of southern Africa are not represented. The presence of such different languages suggests that the Lake Chad Basin may have been an important point of dispersal in ancient times. Population According to the total population was in , compared to only 2,429,000 in 1950. The proportion of children below the age of 15 in 2010 was 45.4%, 51.7% was between 15 and 65 years of age, while 2.9% was 65 years or older. The country is projected to have a population of 34 million people in 2050 and 61 million in 2100. Population by Sex and Age Group (Census 20.V.2009): Vital statistics Registration of vital events in Chad is not complete. The Population Department of the United Nations prepared the following estimates. Source: UN DESA, World Population Prospects, 2022 Fertility and births Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR): Fertility data as of 2014-2015 (DHS Program): Religions The separation of religion from social structure in Chad represents a false dichotomy, for they are perceived as two sides of the same coin. Three religious traditions coexist in Chad: classical African religions, Islam, and Christianity. None is monolithic. 
The first tradition includes a variety of ancestor and/or place-oriented religions whose expression is highly specific. Islam, although characterized by an orthodox set of beliefs and observances, also is expressed in diverse ways. Christianity arrived in Chad much more recently with the arrival of Europeans. Its followers are divided into Roman Catholics and Protestants (including several denominations); as with Chadian Islam, Chadian Christianity retains aspects of pre-Christian religious belief. The number of followers of each tradition in Chad is unknown. Estimates made in 1962 suggested that 35 percent of Chadians practiced classical African religions, 55 percent were Muslims, and 10 percent were Christians. In the 1970s and 1980s, this distribution undoubtedly changed. Observers report that Islam has spread among the Hajerai and among other non-Muslim populations of the Saharan and sahelian zones. However, the proportion of Muslims may have fallen because the birthrate among the followers of traditional religions and Christians in southern Chad is thought to be higher than that among Muslims. In addition, the upheavals since the mid-1970s have resulted in the departure of some missionaries; whether or not Chadian Christians have been numerous enough and organized enough to have attracted more converts since that time is unknown. Other demographic statistics Demographic statistics according to the World Population Review in 2022. One birth every 45 seconds One death every 3 minutes One net migrant every 1440 minutes Net gain of one person every 1 minutes The following demographic statistics are from the CIA World Factbook. Population 17,963,211 (2022 est.) 15,833,116 (July 2018 est.) 12,075,985 (2017 est.) Religions Muslim 52.1%, Protestant 23.9%, Roman Catholic 20%, animist 0.3%, other Christian 0.2%, none 2.8%, unspecified 0.7% (2014-15 est.) Age structure 0-14 years: 47.43% (male 4,050,505/female 3,954,413) 15-24 years: 19.77% (male 1,676,495/female 1,660,417) 25-54 years: 27.14% (male 2,208,181/female 2,371,490) 55-64 years: 3.24% (male 239,634/female 306,477) 65 years and over: 2.43% (2020 est.) (male 176,658/female 233,087) 0-14 years: 48.12% (male 3,856,001 /female 3,763,622) 15-24 years: 19.27% (male 1,532,687 /female 1,518,940) 25-54 years: 26.95% (male 2,044,795 /female 2,222,751) 55-64 years: 3.25% (male 228,930 /female 286,379) 65 years and over: 2.39% (male 164,257 /female 214,754) (2018 est.) Median age total: 16.1 years. Country comparison to the world: 223rd male: 15.6 years female: 16.5 years (2020 est.) total: 15.8 years. Country comparison to the world: 226th male: 15.3 years female: 16.3 years (2018 est.) Total: 17.8 years Male: 16.8 years Female: 18.8 years (2017 est.) Population growth rate 3.09% (2022 est.) Country comparison to the world: 10th 3.23% (2018 est.) Country comparison to the world: 5th Birth rate 40.45 births/1,000 population (2022 est.) Country comparison to the world: 6th 43 births/1,000 population (2018 est.) Country comparison to the world: 4th Death rate 9.45 deaths/1,000 population (2022 est.) Country comparison to the world: 49th 10.5 deaths/1,000 population (2018 est.) Country comparison to the world: 26th Net migration rate -0.13 migrant(s)/1,000 population (2022 est.) Country comparison to the world: 105th -3.2 migrant(s)/1,000 population (2017 est.) Country comparison to the world: 176th Total fertility rate 5.46 children born/woman (2022 est.) Country comparison to the world: 5th 5.9 children born/woman (2018 est.) 
Country comparison to the world: 4th Mother's mean age at first birth 18.1 years (2014/15 est.) note: median age at first birth among women 25-49 Dependency ratios total dependency ratio: 100.2 (2015 est.) youth dependency ratio: 95.2 (2015 est.) elderly dependency ratio: 4.9 (2015 est.) potential support ratio: 20.3 (2015 est.) Contraceptive prevalence rate 8.1% (2019) 5.7% (2014/15) Urbanization urban population: 24.1% of total population (2022) rate of urbanization: 4.1% annual rate of change (2020-25 est.) urban population: 23.1% of total population (2018) rate of urbanization: 3.88% annual rate of change (2015-20 est.) Sex ratio At birth: 1.04 male(s)/female Under 15 years: 1.01 male(s)/female 15–64 years: 0.92 male(s)/female 65 years and over: 0.66 male(s)/female Total population: 0.96 male(s)/female (2006 est.) Life expectancy at birth total population: 59.15 years. Country comparison to the world: 222nd male: 57.32 years female: 61.06 years (2022 est.) total population: 57.5 years (2018 est.) Country comparison to the world: 214th male: 55.7 years (2018 est.) female: 59.3 years (2018 est.) Total population: 50.6 years Male: 49.4 years Female: 51.9 years (2017 est.) HIV/AIDS Adult prevalence rate: 1.3% (2017 est.) People living with HIV/AIDS: 110,000(2017 est.) Deaths: 3,100 (2017 est.) Children under the age of 5 years underweight 28.8% (2015) Major infectious diseases degree of risk: very high (2020) food or waterborne diseases: bacterial and protozoal diarrhea, hepatitis A and E, and typhoid fever vectorborne diseases: malaria and dengue fever water contact diseases: schistosomiasis animal contact diseases: rabies respiratory diseases: meningococcal meningitis note: on 21 March 2022, the US Centers for Disease Control and Prevention (CDC) issued a Travel Alert for polio in Africa; Chad is currently considered a high risk to travelers for circulating vaccine-derived polioviruses (cVDPV); vaccine-derived poliovirus (VDPV) is a strain of the weakened poliovirus that was initially included in oral polio vaccine (OPV) and that has changed over time and behaves more like the wild or naturally occurring virus; this means it can be spread more easily to people who are unvaccinated against polio and who come in contact with the stool or respiratory secretions, such as from a sneeze, of an “infected” person who received oral polio vaccine; the CDC recommends that before any international travel, anyone unvaccinated, incompletely vaccinated, or with an unknown polio vaccination status should complete the routine polio vaccine series; before travel to any high-risk destination, CDC recommends that adults who previously completed the full, routine polio vaccine series receive a single, lifetime booster dose of polio vaccine Child marriage women married by age 15: 24.2% (2019) women married by age 18: 60.6% (2019) men married by age 18: 8.1% (2019 est.) Nationality Noun: Chadian(s) Adjective: Chadian Ethnic groups The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa. 200 distinct groups In the north and center: Arabs, Tubu (Daza, Teda), Zaghawa, Kanembu, Wadai, Baguirmi, Hadjarai, Fulani, Kotoko, Hausa, Bulala, and Maba, most of whom are Muslim In the south: Sara (Ngambaye, Mbaye, Goulaye), Mundang, Mussei, Massa, most of whom are Christian or animist About 5,000 French citizens live in Chad. 
Religions Islam 51.8% Roman Catholic 20.3% Protestant 23.5% Animist 0.6% Other Christians 0.3% Unknown 0.6% None 2.9% Languages Arabic (official), French (official), Sara (in south), more than 120 languages and dialects Literacy Definition: age 15 and over can read and write French or Arabic total population: 22.3% (2016 est.) male: 31.3% (2016 est.) female: 14% (2016 est.) School life expectancy (primary to tertiary education) total: 7 years male: 9 years female: 6 years (2015) total: 8 years (2014) male: 9 years (2014) female: 6 years (2014) Notes References
5332
https://en.wikipedia.org/wiki/Politics%20of%20Chad
Politics of Chad
The Politics of Chad take place in the framework of a presidential republic, whereby the President of Chad is both head of state and head of government. Executive power is exercised by the government. Legislative power is vested in both the government and parliament. Chad is one of the most corrupt countries in the world. In May 2013, security forces in Chad foiled a coup against President Idriss Déby that had been in preparation for several months. In April 2021, President Déby was injured by the rebel group Front Pour l'Alternance et La Concorde au Tchad (FACT). He succumbed to his injuries on April 20, 2021, and the presidency was assumed by his son Mahamat Déby that same month. This resulted in both the National Assembly and the Chadian Government being dissolved and replaced with a Transitional Military Council. The National Transitional Council will oversee the transition to democracy. Executive branch President: Mahamat Déby (Patriotic Salvation Movement), since 20 April 2021. Chad's executive branch is headed by the President and dominates the Chadian political system. Following the military overthrow of Hissène Habré in December 1990, Idriss Déby won the presidential elections in 1996 and 2001. The constitutional basis for the government is the 1996 constitution, under which the president was limited to two terms of office until Déby had that provision repealed in 2005. The president has the power to appoint the Council of State (or cabinet), and exercises considerable influence over appointments of judges, generals, provincial officials and heads of Chad's parastatal firms. In cases of grave and immediate threat, the president, in consultation with the National Assembly President and Council of State, may declare a state of emergency. Most of the key advisors for former president Déby were members of the Zaghawa clan, although some southern and opposition personalities were represented in his government. 
The constitution recognizes customary and traditional law in locales where it is recognized and to the extent it does not interfere with public order or constitutional guarantees of equality for all citizens. Political parties and elections Presidential elections Parliamentary elections International organization participation ACCT, ACP, AfDB, AU, BDEAC, CEMAC, FAO, FZ, G-77, IBRD, ICAO, ICCt, ICFTU, ICRM, IDA, IDB, IFAD, IFC, IFRCS, ILO, IMF, Interpol, IOC, ITU, MIGA, NAM, OIC, ONUB, OPCW, UN, UNCTAD, UNESCO, UNIDO, UNOCI, UPU, WCL, WHO, WIPO, WMO, WToO, WTrO 2021 government shakeup On 20 April 2021, following the death of longtime Chadian President Idriss Déby, the Military of Chad released a statement confirming that both the Government of Chad and the nation's National Assembly had been dissolved and that a Transitional Military Council led by Déby's son Mahamat would lead the nation for at least 18 months. Following protests on 14 May 2022, the authorities in Chad detained several members of civil society organizations. The protests were organized in N'Djamena and other cities across the country by Chadian civil society organizations united under the coalition Wakit Tamma. References
5333
https://en.wikipedia.org/wiki/Economy%20of%20Chad
Economy of Chad
The economy of Chad suffers from the landlocked country's geographic remoteness, drought, lack of infrastructure, and political turmoil. About 85% of the population depends on agriculture, including the herding of livestock. Of Africa's Francophone countries, Chad benefited least from the 50% devaluation of their currencies in January 1994. Financial aid from the World Bank, the African Development Bank, and other sources is directed largely at the improvement of agriculture, especially livestock production. Because of a lack of financing, the development of oil fields near Doba, originally due to finish in 2000, was delayed until 2003. The fields were finally developed and are now operated by ExxonMobil. In terms of gross domestic product, Chad ranks 147th globally, with a GDP of $11.051 billion as of 2018. Agriculture Chad produced in 2018: 969 thousand tons of sorghum; 893 thousand tons of peanuts; 756 thousand tons of millet; 484 thousand tons of yam (8th largest producer in the world); 475 thousand tons of sugarcane; 437 thousand tons of maize; 284 thousand tons of cassava; 259 thousand tons of rice; 255 thousand tons of sweet potato; 172 thousand tons of sesame seed; 151 thousand tons of beans; 120 thousand tons of cotton; in addition to smaller quantities of other agricultural products. Macro-economic trend The following table shows the main economic indicators in 1980–2017. Other statistics GDP: purchasing power parity – $28.62 billion (2017 est.) GDP – real growth rate: -3.1% (2017 est.) GDP – per capita: $2,300 (2017 est.) Gross national saving: 15.5% of GDP (2017 est.) GDP – composition by sector: agriculture: 52.3% (2017 est.) industry: 14.7% (2017 est.) services: 33.1% (2017 est.) Population below poverty line: 46.7% (2011 est.) Distribution of family income – Gini index: 43.3 (2011 est.) Inflation rate (consumer prices): -0.9% (2017 est.) Labor force: 5.654 million (2017 est.) Labor force – by occupation: agriculture 80%, industry and services 20% (2006 est.) Budget: revenues: 1.337 billion (2017 est.) expenditures: 1.481 billion (2017 est.) Budget surplus (+) or deficit (-): -1.5% (of GDP) (2017 est.) Public debt: 52.5% of GDP (2017 est.) Industries: oil, cotton textiles, brewing, natron (sodium carbonate), soap, cigarettes, construction materials Industrial production growth rate: -4% (2017 est.) electrification: total population: 4% (2013) electrification: urban areas: 14% (2013) electrification: rural areas: 1% (2013) Electricity – production: 224.3 million kWh (2016 est.) Electricity – production by source: fossil fuel: 98% hydro: 0% nuclear: 0% other renewable: 3% (2017) Electricity – consumption: 208.6 million kWh (2016 est.) Electricity – exports: 0 kWh (2016 est.) Electricity – imports: 0 kWh (2016 est.) Agriculture – products: cotton, sorghum, millet, peanuts, sesame, corn, rice, potatoes, onions, cassava (manioc, tapioca), cattle, sheep, goats, camels Exports: $2.464 billion (2017 est.) Exports – commodities: oil, livestock, cotton, sesame, gum arabic, shea butter Exports – partners: US 38.7%, China 16.6%, Netherlands 15.7%, UAE 12.2%, India 6.3% (2017) Imports: $2.16 billion (2017 est.) Imports – commodities: machinery and transportation equipment, industrial goods, foodstuffs, textiles Imports – partners: China 19.9%, Cameroon 17.2%, France 17%, US 5.4%, India 4.9%, Senegal 4.5% (2017) Debt – external: $1.724 billion (31 December 2017 est.) Reserves of foreign exchange and gold: $22.9 million (31 December 2017 est.)
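Some of the figures listed above can be cross-checked against one another. The short Python sketch below derives the population implied by the GDP (purchasing power parity) and per-capita figures, and the absolute budget balance from the listed revenues and expenditures; it uses only numbers quoted in this article and is an illustrative check, not an official calculation.

# Consistency checks using only figures quoted above (2017 estimates)
gdp_ppp = 28.62e9          # GDP, purchasing power parity, USD
gdp_per_capita = 2300      # GDP per capita (PPP), USD
revenues = 1.337e9         # budget revenues, USD
expenditures = 1.481e9     # budget expenditures, USD

implied_population = gdp_ppp / gdp_per_capita
balance = revenues - expenditures

print(f"implied population: {implied_population / 1e6:.1f} million")
print(f"budget balance: {balance / 1e9:+.3f} billion USD")

The roughly $144 million shortfall matches the listed deficit of about 1.5% of GDP only when measured against Chad's nominal (official exchange rate) GDP, which is roughly a third of the PPP figure; that nominal GDP is an assumption not stated in this article.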
See also Chad Economy of Africa Petroleum industry in Chad United Nations Economic Commission for Africa References General External links Chad latest trade data on ITC Trade Map World Bank – Chad-Cameroon Pipeline Project Chad Chad
5334
https://en.wikipedia.org/wiki/Telecommunications%20in%20Chad
Telecommunications in Chad
Telecommunications in Chad include radio, television, fixed and mobile telephones, and the Internet. Radio and television Radio stations: state-owned radio network, Radiodiffusion Nationale Tchadienne (RNT), operates national and regional stations; about 10 private radio stations; some stations rebroadcast programs from international broadcasters (2007); 2 AM, 4 FM, and 5 shortwave stations (2001). Radios: 1.7 million (1997). Television stations: 1 state-owned TV station, Tele Tchad (2007); 1 station (2001). Television sets: 10,000 (1997). Radio is the most important medium of mass communication. State-run Radiodiffusion Nationale Tchadienne operates national and regional radio stations. Around a dozen private radio stations are on the air, despite high licensing fees, some run by religious or other non-profit groups. The BBC World Service (FM 90.6) and Radio France Internationale (RFI) broadcast in the capital, N'Djamena. The only television station, Tele Tchad, is state-owned. State control of many broadcasting outlets allows few dissenting views. Journalists are harassed and attacked. On rare occasions journalists are warned in writing by the High Council for Communication to produce more "responsible" journalism or face fines. Some journalists and publishers practice self-censorship. On 10 October 2012, the High Council on Communications issued a formal warning to La Voix du Paysan, claiming that the station's live broadcast on 30 September incited the public to "insurrection against the government." The station had broadcast a sermon by a bishop who criticized the government for allegedly failing to use oil wealth to benefit the region. Telephones Calling code: +235 International call prefix: 00 Main lines: 29,900 lines in use, 176th in the world (2012); 13,000 lines in use, 201st in the world (2004). Mobile cellular: 4.2 million lines, 119th in the world (2012); 210,000 lines, 155th in the world (2005). Telephone system: inadequate system of radiotelephone communication stations with high costs and low telephone density; fixed-line connections for less than 1 per 100 persons coupled with mobile-cellular subscribership base of only about 35 per 100 persons (2011). Satellite earth stations: 1 Intelsat (Atlantic Ocean) (2011). Internet Top-level domain: .td Internet users: 230,489 users, 149th in the world; 2.1% of the population, 200th in the world (2012); 168,100 users, 145th in the world (2009);   35,000 users, 167th in the world (2005). Fixed broadband: 18,000 subscriptions, 132nd in the world; 0.2% of the population, 161st in the world (2012). Wireless broadband: Unknown (2012). Internet hosts: 6 hosts, 229th in the world (2012); 9 hosts, 217th in the world (2006). IPv4: 4,096 addresses allocated, less than 0.05% of the world total, 0.4 addresses per 1000 people (2012). Internet censorship and surveillance There are no government restrictions on access to the Internet or credible reports that the government monitors e-mail or Internet chat rooms. The constitution provides for freedom of opinion, expression, and press, but the government does not always respect these rights. Private individuals are generally free to criticize the government without reprisal, but reporters and publishers risk harassment from authorities when publishing critical articles. 
The 2010 media law abolished prison sentences for defamation and insult, but prohibits "inciting racial, ethnic, or religious hatred," which is punishable by one to two years in prison and a fine of one to three million CFA francs ($2,000 to $6,000). See also Radiodiffusion Nationale Tchadienne, state-operated national radio broadcaster. Télé Tchad, state-operated national TV broadcaster. Societe des Telecommunications Internationales du Tchad (SotelTchad), telecommunications parastatal providing landline telephone and Internet services. List of terrestrial fibre optic cable projects in Africa Media of Chad Economy of Chad Chad References External links "Chad still on pace for ICT policy goals", oAfrica, 20 November 2010.
5335
https://en.wikipedia.org/wiki/Transport%20in%20Chad
Transport in Chad
Transport infrastructure within Chad is generally poor, especially in the north and east of the country. River transport is limited to the south-west corner. As of 2011 Chad had no railways, though two lines were planned from the capital to the Sudanese and Cameroonian borders. Many roads become impassable during the wet season, especially in the southern half of the country. In the north, roads are merely tracks across the desert and land mines continue to present a danger. Draft animals (horses, donkeys and camels) remain important in much of the country. Fuel supplies can be erratic, even in the south-west of the country, and are expensive. Elsewhere they are practically non-existent. Railways As of 2011 Chad had no railways. Two lines were planned to Sudan and Cameroon from the capital, with construction expected to start in 2012. No operative lines were listed as of 2019. In 2021, an ADB study was funded for that rail link from Cameroon to Chad. Highways As of 2018 Chad had a total of 44,000 km of roads, of which approximately 260 km are paved. Some, but not all, of the roads in the capital N'Djamena are paved. Outside of N'Djamena there is one paved road which runs from Massakory in the north, through N'Djamena and then south, through the cities of Guélengdeng, Bongor, Kélo and Moundou, with a short spur leading in the direction of Kousseri, Cameroon, near N'Djamena. Expansion of the road towards Cameroon through Pala and Léré is reportedly in the preparatory stages. Waterways As of 2012, the Chari and Logone Rivers were navigable only in the wet season. Both flow northwards, from the south of Chad, into Lake Chad. Pipelines Since 2003, a 1,070 km pipeline has been used to export crude oil from the oil fields around Doba to offshore oil-loading facilities on Cameroon's Atlantic coast at Kribi. The CIA World Factbook, however, cites only 582 km of pipeline in Chad itself as of 2013. Seaports and harbors None (landlocked). Chad's main routes to the sea are: From N'Djamena and the south west of Chad: By road to Ngaoundéré, in Cameroon, and then by rail to Douala By road to Maiduguri, in Nigeria, and then by rail to Port Harcourt From the north and east of Chad: By road across the Sahara desert to Libya In colonial times, the main access was by road to Bangui, in the Central African Republic, then by river boat to Brazzaville, and onwards by rail from Brazzaville to Pointe Noire, on Congo's Atlantic coast. This route is now little used. There is also a route across Sudan, to the Red Sea, but very little trade goes this way. Links with Niger, north of Lake Chad, are practically nonexistent; it is easier to reach Niger via Cameroon and Nigeria. Airports Chad had an estimated 58 airports, only 9 of which had paved runways. In 2015, scheduled airlines in Chad carried approximately 28,332 passengers. Airports with paved runways Statistics on airports with paved runways as of 2017: List of airports with paved runways: Abeche Airport Bol Airport Faya-Largeau Airport Moundou Airport N'Djamena International Airport Sarh Airport Airports with unpaved runways Statistics on airports with unpaved runways as of 2013: Airline SAGA Airline of Chad - see http://www.airsaga.com Ministry of Transport The Ministry is represented at the regional level by the Regional Delegations, which have jurisdiction over a part of the National Territory as defined by Decree No. 003 / PCE / CTPT / 91. Their organization and responsibilities are defined by Order No. 006 / MTPT / SE / DG / 92.
The Regional Delegations are: The Regional Delegation of the Center covering the regions of Batha, Guéra and Salamat, with headquarters in Mongo; The Regional Delegation of the Center-West covering the regions of Chari Baguirmi and Hadjer Lamis, with headquarters in Massakory; The North-West Regional Delegation covering the Kanem and Lake regions, with headquarters in Mao; The Western Regional Delegation covering the regions of Mayo-Kebbi East, Mayo-Kebbi West and Tandjile, with headquarters in Bongor; The Eastern Regional Delegation covering the regions of Wadi Fira and Ouaddai, with headquarters in Abéché; The South-East Regional Delegation covering the Mandoul and Moyen Chari regions, with headquarters in Sarh; The Southwest Regional Delegation covering the regions of Logone Occidental and Logone Oriental, with headquarters in Moundou; The Northern Regional Delegation covering the BET region, with headquarters in Faya. Each Regional Delegation is organized into regional services, namely the Regional Roads Service, the Regional Transport Service and the Civilian Buildings Regional Service; as needed, other regional services may be established in one or more Delegations. See also Chad Economy of Chad References External links Maps UN Map UNHCR Atlas Map
5336
https://en.wikipedia.org/wiki/Chad%20National%20Army
Chad National Army
The Chad National Army (French: Armée nationale tchadienne, ANT) consists of the five Defence and Security Forces listed in Article 185 of the Chadian Constitution that came into effect on 4 May 2018. These are the National Army (including the Ground Forces and the Air Force), the National Gendarmerie, the National Police, the National and Nomadic Guard (GNNT) and the Judicial Police. Article 188 of the Constitution specifies that National Defence is the responsibility of the Army, Gendarmerie and GNNT, whilst the maintenance of public order and security is the responsibility of the Police, Gendarmerie and GNNT. History From independence through the period of the presidency of Félix Malloum (1975–79), the official national army was known as the Chadian Armed Forces (Forces Armées Tchadiennes—FAT). Composed mainly of soldiers from southern Chad, FAT had its roots in the army recruited by France and had military traditions dating back to World War I. FAT lost its status as the legal state army when Malloum's civil and military administration disintegrated in 1979. Although it remained a distinct military body for several years, FAT was eventually reduced to the status of a regional army representing the south. After Habré consolidated his authority and assumed the presidency in 1982, his victorious army, the Armed Forces of the North (Forces Armées du Nord—FAN), became the nucleus of a new national army. The force was officially constituted in January 1983, when the various pro-Habré contingents were merged and renamed the Chadian National Armed Forces (Forces Armées Nationales Tchadiennes—FANT). The military of Chad was dominated by members of the Toubou, Zaghawa, Kanembou, Hadjerai, and Massa ethnic groups during the presidency of Hissène Habré. In 1989, later Chadian president Idriss Déby revolted and fled to Sudan, taking many Zaghawa and Hadjerai soldiers with him. Chad's armed forces numbered about 36,000 at the end of the Habré regime, but swelled to an estimated 50,000 in the early days of Déby's rule. With French support, a reorganization of the armed forces was initiated early in 1991 with the goal of reducing its numbers and making its ethnic composition reflective of the country as a whole. Neither of these goals was achieved, and the military is still dominated by the Zaghawa. In 2004, the government discovered that many of the soldiers it was paying did not exist and that there were only about 19,000 soldiers in the army, as opposed to the 24,000 that had been previously believed. Government crackdowns against the practice are thought to have been a factor in a failed military mutiny in May 2004. Renewed conflict, in which the Chadian military is involved, came in the form of a civil war against Sudanese-backed rebels. Chad successfully managed to repel many rebel movements, albeit with some losses (see Battle of N'Djamena (2008)). The army used its artillery systems and tanks, but well-equipped insurgents probably managed to destroy over 20 of Chad's 60 T-55 tanks, and probably shot down a Mi-24 Hind gunship, which had bombed enemy positions near the border with Sudan. In November 2006 Libya supplied Chad with four Aermacchi SF.260W light attack planes. They were used by the Chadian Air Force to strike enemy positions, but one was shot down by rebels. During the 2008 battle of N'Djamena, gunships and tanks were put to good use, pushing armed militia forces back from the Presidential palace. The battle impacted the highest levels of the army leadership, as Daoud Soumain, its Chief of Staff, was killed.
On March 23, 2020, a Chadian army base was ambushed by fighters of the jihadist insurgent group Boko Haram. The army lost 92 servicemen in one day. In response, President Déby launched an operation dubbed "Wrath of Boma". According to Canadian counter-terrorism specialist St-Pierre, numerous external operations and rising insecurity in the neighboring countries had recently overstretched the capacities of the Chadian armed forces. After the death of President Idriss Déby on 19 April 2021 in fighting with FACT rebels, his son General Mahamat Idriss Déby was named interim president and head of the armed forces. Budget The CIA World Factbook estimates the military budget of Chad to be 4.2% of GDP as of 2006. Given the country's GDP at the time ($7.095 billion), military spending was estimated to be about $300 million. This estimate, however, dropped after the end of the civil war in Chad (2005–2010) to 2.0% of GDP, as estimated by the World Bank for the year 2011. No more recent estimates are available. External deployments UN missions non-UN missions Chad participated in a peace mission under the authority of the African Union in the neighboring Central African Republic to try to pacify the recent conflict, but chose to withdraw after its soldiers were accused, according to the BBC, of shooting into a marketplace unprovoked. See also Chad Air Force Chadian Armed Forces Chadian National Armed Forces Nomad and National Guard Notes References R. Huré, "L'Armée d'Afrique 1830–1962"; John Keegan, "World Armies"; "Economic Development and the Libya-Chad Wars," Chapter 12 in Kenneth Pollack, Armies of Sand: The Past, Present, and Future of Arab Military Effectiveness, Oxford University Press, New York, 2019. Chadian Civil War (2005–2010)
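The budget figure in the article above is a straightforward percentage calculation, and the short Python sketch below reproduces it from the two numbers quoted there; it is only an illustrative check.

# Reproduce the military spending estimate from figures quoted above (2006)
gdp_2006 = 7.095e9         # GDP of Chad in 2006, USD
military_share = 0.042     # 4.2% of GDP (CIA World Factbook estimate)

military_budget = military_share * gdp_2006
print(f"estimated military budget: about ${military_budget / 1e6:.0f} million")  # ~ $298 million

The result, roughly $298 million, is consistent with the article's rounded figure of about $300 million.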
5337
https://en.wikipedia.org/wiki/Foreign%20relations%20of%20Chad
Foreign relations of Chad
The foreign relations of Chad are significantly influenced by the desire for oil revenue and investment in the Chadian oil industry, and by support for former Chadian President Idriss Déby. Chad is officially non-aligned but has close relations with France, the former colonial power. Relations with neighbouring Libya and Sudan vary periodically. In recent years, the Idriss Déby regime waged an intermittent proxy war with Sudan. Aside from those two countries, Chad generally enjoys good relations with its neighbouring states. Africa Although relations with Libya improved with the presidency of Idriss Déby, strains persist. Chad has been an active champion of regional cooperation through the Central African Economic and Customs Union, the Lake Chad and Niger River Basin Commissions, and the Interstate Commission for the Fight Against Drought in the Sahel. Delimitation of international boundaries in the vicinity of Lake Chad, the lack of which led to border incidents in the past, has been completed and awaits ratification by Cameroon, Chad, Niger, and Nigeria. Americas Asia Despite centuries-old cultural ties to the Arab World, the Chadian Government maintained few significant ties to Arab states in North Africa or Southwest Asia in the 1980s. Chad had broken off relations with the State of Israel under former Chadian President François (Ngarta) Tombalbaye in September 1972. President Habré hoped to pursue closer relations with Arab states as a potential opportunity to break out of Chad's post-imperial dependence on France, and to assert Chad's unwillingness to serve as an arena for superpower rivalries. In addition, as a northern Muslim, Habré represented a constituency that favored Afro-Arab solidarity, and he hoped Islam would provide a basis for national unity in the long term. For these reasons, he was expected to seize opportunities during the 1990s to pursue closer ties with the Arab World. In 1988, Chad recognized the State of Palestine, which maintains a mission in N'Djamena. In November 2018, President Déby visited Israel and announced his intention to restore diplomatic relations. Chad and Israel re-established diplomatic relations in January 2019. In February 2023, Chad opened an embassy in Israel. During the 1980s, Arab opinion on the Chadian-Libyan conflict over the Aouzou Strip was divided. Several Arab states supported Libyan territorial claims to the Strip, among the most outspoken of which was Algeria, which provided training for anti-Habré forces, although most recruits for its training programs were from Nigeria or Cameroon, recruited and flown to Algeria by Libya. Lebanon's Progressive Socialist Party also sent troops to support Qadhafi's efforts against Chad in 1987. In contrast, numerous other Arab states opposed the Libyan actions and expressed their desire to see the dispute over the Aouzou Strip settled peacefully. By the end of 1987, Algiers and N'Djamena were negotiating to improve relations, and Algeria helped mediate the end of the Aouzou Strip conflict. Europe Chad is officially non-aligned but has close relations with France, the former colonial power, which has about 1,200 troops stationed in the capital N'Djamena. It receives economic aid from countries of the European Community, the United States, and various international organizations. Libya supplies aid and has an ambassador resident in N'Djamena.
Traditionally strong ties with the Western community have weakened over the past two years due to a dispute between the Government of Chad and the World Bank over how the profits from Chad's petroleum reserves are allocated. Although oil output to the West has resumed and the dispute has officially been resolved, resentment towards what the Déby administration considered foreign meddling lingers. Oceania Membership of international organizations Chad belongs to the following international organizations: United Nations and some of its specialized and related agencies Organization for African Unity Central African Customs and Economic Union (UDEAC) African Financial Community (Franc Zone) Agency for the Francophone Community African, Caribbean and Pacific Group of States African Development Bank Central African States Development Bank Economic and Monetary Community of Central Africa (CEMAC) Economic Commission for Africa G-77 International Civil Aviation Organization International Red Cross and Red Crescent Movement International Development Association Islamic Development Bank International Fund for Agricultural Development International Finance Corporation International Federation of the Red Cross and Red Crescent Societies International Labour Organization International Monetary Fund Intelsat Interpol International Olympic Committee International Telecommunication Union International Trade Union Confederation Non-Aligned Movement (NAM) Organisation of Islamic Cooperation Organisation for the Prohibition of Chemical Weapons Universal Postal Union World Confederation of Labour World Intellectual Property Organization World Meteorological Organization World Tourism Organization World Trade Organization See also List of diplomatic missions in Chad List of diplomatic missions of Chad References
5342
https://en.wikipedia.org/wiki/Commentary
Commentary
Commentary or commentaries may refer to: Publications Commentary (magazine), a U.S. public affairs journal, founded in 1945 and formerly published by the American Jewish Committee Caesar's Commentaries (disambiguation), a number of works by or attributed to Julius Caesar Commentaries of Ishodad of Merv, set of ninth-century Syriac treatises on the Bible Commentaries on the Laws of England, a 1769 treatise on the common law of England by Sir William Blackstone Commentaries on Living, a series of books by Jiddu Krishnamurti originally published in 1956, 1958 and 1960 Moralia in Job, a sixth-century treatise by Saint Gregory Commentary of Zuo, one of the earliest Chinese works of narrative history, covering the period from 722 to 468 BCE Commentaries, a work attributed to Taautus Other uses Published opinion piece material, in any of several forms: An editorial, written by the editorial staff or board of a newspaper, magazine, or other periodical Column (periodical), a regular feature of such a publication in which usually the same single writer offers advice, observation, or other commentary An op-ed, an opinion piece by an author unaffiliated with the publication Letters to the editor, written by readers of such a publication Posts made in the comments section of an online publication, serving a similar function to paper periodicals' letters to the editor Commentary (philology), a line-by-line or even word-by-word explication (and usually translation) of a text Audio commentary track for DVDs and Blu-Rays – an additional audio track that plays in real-time with the video material, and comments on that video Sports commentary or play-by-play, a running description of a game or event in real time, usually during a live broadcast Color commentary, supplementing play-by-play commentary, often filling in any time when play is not in progress Atthakatha, commentaries on the Pāli Canon in Theravāda Buddhism Criticism, the practice of judging the merits and faults of something or someone Commentary! The Musical, the musical commentary accompanying Dr. Horrible's Sing-Along Blog Commentary or narration, the words in a documentary film Exegesis, a critical explanation or interpretation of a text, especially a religious text (e.g. a Bible commentary) Literary criticism, the study, evaluation, and interpretation of literature Close reading in literary criticism, the careful, sustained interpretation of a brief passage of text Political criticism or political commentary, criticism that is specific of or relevant to politics Public commentary received by governmental and other bodies, e.g. in response to proposals, reports, etc. See also Commentry, a place in central France Comment (disambiguation) List of biblical commentaries Jewish commentaries on the Bible Commentaire, a French quarterly Reaction video, commentaries in video format
5346
https://en.wikipedia.org/wiki/Colloid
Colloid
A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre. Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color. Colloidal suspensions are the subject of interface and colloid science. This field of study was introduced in 1845 by Francesco Selmi and expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861. Classification of colloids Colloids can be classified according to the state of the dispersed phase and of the dispersion medium. Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal suspensions, colloidal foams, colloidal dispersions, or hydrosols. Hydrocolloids Hydrocolloids are certain chemicals (mostly polysaccharides and proteins) that are colloidally dispersible in water. Becoming effectively "soluble", they change the rheology of water by raising the viscosity and/or inducing gelation. They may provide other interactive effects with other chemicals, in some cases synergistic, in others antagonistic. These attributes make hydrocolloids very useful chemicals in many areas of technology, from foods through pharmaceuticals, personal care and industrial applications, where they can provide stabilization, destabilization and separation, gelation, flow control, crystallization control and numerous other effects. Apart from uses of the soluble forms, some hydrocolloids have additional useful functionality in a dry form if, after solubilization, the water is removed, as in the formation of films for breath strips or sausage casings or, indeed, wound dressing fibers, some being more compatible with skin than others. There are many different types of hydrocolloids, each with differences in structure, function and utility, that generally are best suited to particular application areas in the control of rheology and the physical modification of form and texture. Some hydrocolloids, like starch and casein, are useful foods as well as rheology modifiers; others have limited nutritive value, usually providing a source of fiber. The term hydrocolloid also refers to a type of dressing designed to lock moisture in the skin and help the natural healing process of skin to reduce scarring, itching and soreness. Components Hydrocolloids contain some type of gel-forming agent, such as sodium carboxymethylcellulose (NaCMC) and gelatin. They are normally combined with some type of sealant, such as polyurethane, to 'stick' to the skin. Colloid compared with solution A colloid has a dispersed phase and a continuous phase, whereas in a solution, the solute and solvent constitute only one phase. The solute in a solution consists of individual molecules or ions, whereas colloidal particles are bigger.
For example, in a solution of salt in water, the sodium chloride (NaCl) crystal dissolves, and the Na+ and Cl− ions are surrounded by water molecules. However, in a colloid such as milk, the colloidal particles are globules of fat, rather than individual fat molecules. Because a colloid consists of multiple phases, it has very different properties from a fully mixed, continuous solution. Interaction between particles The following forces play an important role in the interaction of colloid particles: Excluded volume repulsion: This refers to the impossibility of any overlap between hard particles. Electrostatic interaction: Colloidal particles often carry an electrical charge and therefore attract or repel each other. The charge of both the continuous and the dispersed phase, as well as the mobility of the phases, are factors affecting this interaction. van der Waals forces: This is due to interaction between two dipoles that are either permanent or induced. Even if the particles do not have a permanent dipole, fluctuations of the electron density give rise to a temporary dipole in a particle. This temporary dipole induces a dipole in particles nearby. The temporary dipole and the induced dipoles are then attracted to each other. This is known as the van der Waals force, and is always present (unless the refractive indexes of the dispersed and continuous phases are matched), is short-range, and is attractive. Steric forces between polymer-covered surfaces or in solutions containing non-adsorbing polymer can modulate interparticle forces, producing an additional steric repulsive force (which is predominantly entropic in origin) or an attractive depletion force between them. Sedimentation velocity The Earth's gravitational field acts upon colloidal particles. Therefore, if the colloidal particles are denser than the medium of suspension, they will sediment (fall to the bottom), or if they are less dense, they will cream (float to the top). Larger particles also have a greater tendency to sediment because they have smaller Brownian motion to counteract this movement. The sedimentation or creaming velocity is found by equating the Stokes drag force with the gravitational force: $m_{\mathrm{Arch}}\, g = 6 \pi \eta a v$, where $m_{\mathrm{Arch}}\, g$ is the Archimedean weight of the colloidal particles, $\eta$ is the viscosity of the suspension medium, $a$ is the radius of the colloidal particle, and $v$ is the sedimentation or creaming velocity. The Archimedean (buoyant) mass of the colloidal particle is found using: $m_{\mathrm{Arch}} = V \, \Delta\rho$, where $V$ is the volume of the colloidal particle, calculated using the volume of a sphere $V = \frac{4}{3}\pi a^3$, and $\Delta\rho$ is the difference in mass density between the colloidal particle and the suspension medium. By rearranging, the sedimentation or creaming velocity is: $v = \frac{2\, a^2\, \Delta\rho\, g}{9\, \eta}$. There is an upper size limit for the diameter of colloidal particles because particles larger than 1 μm tend to sediment, and thus the substance would no longer be considered a colloidal suspension. The colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion. Preparation There are two principal ways to prepare colloids: Dispersion of large particles or droplets to the colloidal dimensions by milling, spraying, or application of shear (e.g., shaking, mixing, or high shear mixing). Condensation of small dissolved molecules into larger colloidal particles by precipitation, condensation, or redox reactions. Such processes are used in the preparation of colloidal silica or gold.
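To make the sedimentation-velocity expression above concrete, here is a minimal Python sketch that evaluates v = 2 a² Δρ g / (9 η) for a hypothetical 1 μm silica sphere settling in water; the radius, densities and viscosity are illustrative assumptions, not values taken from this article.

# Stokes sedimentation velocity, v = 2 * a**2 * d_rho * g / (9 * eta)
# All numerical values are illustrative assumptions (silica sphere in water).
g = 9.81                # gravitational acceleration, m/s^2
a = 0.5e-6              # particle radius, m (1 micrometre diameter)
rho_particle = 2200.0   # particle density, kg/m^3 (assumed, amorphous silica)
rho_medium = 1000.0     # suspension medium density, kg/m^3 (water)
eta = 1.0e-3            # dynamic viscosity of water, Pa*s

d_rho = rho_particle - rho_medium
v = 2 * a**2 * d_rho * g / (9 * eta)    # sedimentation velocity, m/s
print(f"sedimentation velocity: {v:.2e} m/s (about {v * 3.6e6:.2f} mm per hour)")

A positive result means the particle settles; if the particle were less dense than the medium, Δρ and hence v would be negative, corresponding to creaming.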
Stabilization The stability of a colloidal system is defined by particles remaining suspended in solution and depends on the interaction forces between the particles. These include electrostatic interactions and van der Waals forces, because they both contribute to the overall free energy of the system. A colloid is stable if the interaction energy due to attractive forces between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature. If this is the case, then the colloidal particles will repel or only weakly attract each other, and the substance will remain a suspension. If the interaction energy is greater than kT, the attractive forces will prevail, and the colloidal particles will begin to clump together. This process is referred to generally as aggregation, but is also referred to as flocculation, coagulation or precipitation. While these terms are often used interchangeably, for some definitions they have slightly different meanings. For example, coagulation can be used to describe irreversible, permanent aggregation where the forces holding the particles together are stronger than any external forces caused by stirring or mixing. Flocculation can be used to describe reversible aggregation involving weaker attractive forces, and the aggregate is usually called a floc. The term precipitation is normally reserved for describing a phase change from a colloid dispersion to a solid (precipitate) when it is subjected to a perturbation. Aggregation causes sedimentation or creaming, so the colloid is unstable: if either of these processes occurs, the colloid will no longer be a suspension. Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation. Electrostatic stabilization is based on the mutual repulsion of like electrical charges. The charge of colloidal particles is structured in an electrical double layer, where the particles are charged on the surface but then attract counterions (ions of opposite charge) which surround the particle. The electrostatic repulsion between suspended colloidal particles is most readily quantified in terms of the zeta potential. The combined effect of van der Waals attraction and electrostatic repulsion on aggregation is described quantitatively by the DLVO theory. A common method of stabilising a colloid (converting it from a precipitate) is peptization, a process where it is shaken with an electrolyte. Steric stabilization consists of adsorbing a layer of a polymer or surfactant onto the particles to prevent them from approaching within the range of the attractive forces. The polymer consists of chains that are attached to the particle surface, and the part of the chain that extends out is soluble in the suspension medium. This technique is used to stabilize colloidal particles in all types of solvents, including organic solvents. A combination of the two mechanisms is also possible (electrosteric stabilization). A method called gel network stabilization represents the principal way to produce colloids stable to both aggregation and sedimentation. The method consists of adding a polymer able to form a gel network to the colloidal suspension. Particle settling is hindered by the stiffness of the polymeric matrix where particles are trapped, and the long polymeric chains can provide a steric or electrosteric stabilization to dispersed particles. Examples of such substances are xanthan and guar gum.
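As a minimal numerical reading of the stability criterion stated at the start of the section above, the sketch below computes the thermal energy kT at room temperature and compares it with a hypothetical attractive interaction energy; the interaction energy is an assumed placeholder, not a value derived from DLVO theory or from this article.

# Compare an assumed attractive interaction energy with the thermal energy kT
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 298.15              # absolute temperature, K (room temperature)
kT = k_B * T            # thermal energy, roughly 4.1e-21 J

U_attractive = 2.0e-21  # hypothetical attractive interaction energy, J (assumed)

if U_attractive < kT:
    print("interaction energy < kT: particles stay dispersed (stable colloid)")
else:
    print("interaction energy > kT: particles tend to aggregate (unstable colloid)")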
Destabilization Destabilization can be accomplished by different methods: Removal of the electrostatic barrier that prevents aggregation of the particles. This can be accomplished by the addition of salt to a suspension to reduce the Debye screening length (the width of the electrical double layer) of the particles. It is also accomplished by changing the pH of a suspension to effectively neutralise the surface charge of the particles in suspension. This removes the repulsive forces that keep colloidal particles separate and allows for aggregation due to van der Waals forces. Minor changes in pH can cause a significant alteration of the zeta potential. When the magnitude of the zeta potential lies below a certain threshold, typically around ±5 mV, rapid coagulation or aggregation tends to occur. Addition of a charged polymer flocculant. Polymer flocculants can bridge individual colloidal particles by attractive electrostatic interactions. For example, negatively charged colloidal silica or clay particles can be flocculated by the addition of a positively charged polymer. Addition of non-adsorbed polymers called depletants that cause aggregation due to entropic effects. Unstable colloidal suspensions of low-volume fraction form clustered liquid suspensions, wherein individual clusters of particles sediment if they are more dense than the suspension medium, or cream if they are less dense. However, colloidal suspensions of higher-volume fraction form colloidal gels with viscoelastic properties. Viscoelastic colloidal gels, such as bentonite and toothpaste, flow like liquids under shear, but maintain their shape when shear is removed. It is for this reason that toothpaste can be squeezed from a toothpaste tube, but stays on the toothbrush after it is applied. Monitoring stability The most widely used technique to monitor the dispersion state of a product, and to identify and quantify destabilization phenomena, is multiple light scattering coupled with vertical scanning. This method, known as turbidimetry, is based on measuring the fraction of light that, after being sent through the sample, is backscattered by the colloidal particles. The backscattering intensity is directly proportional to the average particle size and volume fraction of the dispersed phase. Therefore, local changes in concentration caused by sedimentation or creaming, and clumping together of particles caused by aggregation, are detected and monitored. These phenomena are associated with unstable colloids. Dynamic light scattering can be used to detect the size of colloidal particles by measuring how fast they diffuse. This method involves directing laser light towards a colloid. The scattered light will form an interference pattern, and the fluctuation in light intensity in this pattern is caused by the Brownian motion of the particles. If the apparent size of the particles increases because they clump together via aggregation, it will result in slower Brownian motion. This technique can confirm that aggregation has occurred if the apparent particle size is determined to be beyond the typical size range for colloidal particles. Accelerating methods for shelf life prediction The kinetic process of destabilisation can be rather long (up to several months or years for some products). Thus, the formulator often needs to use accelerating methods to reach a reasonable development time for new product design.
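As a complement to the dynamic light scattering paragraph above: the particle size is commonly extracted from the measured diffusion coefficient through the Stokes–Einstein relation, r = kT / (6πηD). The sketch below inverts that relation for an assumed diffusion coefficient in water; the numbers are illustrative, not instrument output.

import math

# Hydrodynamic radius from a diffusion coefficient via the Stokes-Einstein relation,
# r = k_B * T / (6 * pi * eta * D).  The value of D below is an assumed example.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # temperature, K
eta = 0.89e-3        # viscosity of water at 25 degrees C, Pa*s
D = 4.0e-12          # measured diffusion coefficient, m^2/s (assumed)

r = k_B * T / (6 * math.pi * eta * D)
print(f"hydrodynamic radius: {r * 1e9:.0f} nm")

Slower diffusion (a smaller D) yields a larger apparent radius, which is how aggregation shows up in the analysis described above.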
Thermal methods are the most commonly used and consist of increasing temperature to accelerate destabilisation (below critical temperatures of phase inversion or chemical degradation). Temperature affects not only viscosity, but also interfacial tension in the case of non-ionic surfactants or, more generally, interaction forces inside the system. Storing a dispersion at high temperature enables the simulation of real-life conditions for a product (e.g. a tube of sunscreen cream in a car in the summer), but also accelerates destabilisation processes by up to 200 times. Mechanical acceleration, including vibration, centrifugation and agitation, is sometimes used. These methods subject the product to different forces that push the particles/droplets against one another, hence helping in the film drainage. Some emulsions would never coalesce in normal gravity, while they do under artificial gravity. Segregation of different populations of particles has been highlighted when using centrifugation and vibration. As a model system for atoms In physics, colloids are an interesting model system for atoms. Micrometre-scale colloidal particles are large enough to be observed by optical techniques such as confocal microscopy. Many of the forces that govern the structure and behavior of matter, such as excluded volume interactions or electrostatic forces, govern the structure and behavior of colloidal suspensions. For example, the same techniques used to model ideal gases can be applied to model the behavior of a hard sphere colloidal suspension. Phase transitions in colloidal suspensions can be studied in real time using optical techniques, and are analogous to phase transitions in liquids. In many interesting cases optical fluidity is used to control colloid suspensions. Crystals A colloidal crystal is a highly ordered array of particles that can be formed over a very long range (typically on the order of a few millimeters to one centimeter) and that appears analogous to its atomic or molecular counterparts. One of the finest natural examples of this ordering phenomenon can be found in precious opal, in which brilliant regions of pure spectral color result from close-packed domains of amorphous colloidal spheres of silicon dioxide (or silica, SiO2). These spherical particles precipitate in highly siliceous pools in Australia and elsewhere, and form these highly ordered arrays after years of sedimentation and compression under hydrostatic and gravitational forces. The periodic arrays of submicrometre spherical particles provide similar arrays of interstitial voids, which act as a natural diffraction grating for visible light waves, particularly when the interstitial spacing is of the same order of magnitude as the incident lightwave. Thus, it has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations, with interparticle separation distances often being considerably greater than the individual particle diameter. In all of these cases in nature, the same brilliant iridescence (or play of colors) can be attributed to the diffraction and constructive interference of visible lightwaves that satisfy Bragg's law, in a manner analogous to the scattering of X-rays in crystalline solids.
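As a rough illustration of the Bragg condition mentioned at the end of the paragraph above, the sketch below evaluates the first-order diffracted wavelength, λ = 2 d sin θ, for a colloidal crystal; the interplanar spacing and angle are assumed, illustrative values rather than measurements of opal.

import math

# First-order Bragg condition for a colloidal crystal: lambda = 2 * d * sin(theta).
# The spacing d and glancing angle theta below are assumed, illustrative values.
d = 250e-9             # interplanar spacing, m (assumed)
theta_deg = 60.0       # glancing angle, degrees (assumed)

wavelength = 2 * d * math.sin(math.radians(theta_deg))
print(f"diffracted wavelength: {wavelength * 1e9:.0f} nm")

A spacing of a few hundred nanometres places the diffracted wavelength in the visible range (here about 430 nm), which is the origin of the play of colors described above.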
The large number of experiments exploring the physics and chemistry of these so-called "colloidal crystals" has emerged as a result of the relatively simple methods that have evolved in the last 20 years for preparing synthetic monodisperse colloids (both polymer and mineral) and, through various mechanisms, implementing and preserving their long-range order formation. In biology Colloidal phase separation is an important organising principle for compartmentalisation of both the cytoplasm and nucleus of cells into biomolecular condensates, similar in importance to compartmentalisation via lipid bilayer membranes, a type of liquid crystal. The term biomolecular condensate has been used to refer to clusters of macromolecules that arise via liquid-liquid or liquid-solid phase separation within cells. Macromolecular crowding strongly enhances colloidal phase separation and formation of biomolecular condensates. In the environment Colloidal particles can also serve as a transport vector for diverse contaminants in surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks (e.g. limestone, sandstone, granite). Radionuclides and heavy metals easily sorb onto colloids suspended in water. Various types of colloids are recognised: inorganic colloids (e.g. clay particles, silicates, iron oxy-hydroxides) and organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term "eigencolloid" is used to designate pure phases, i.e., pure Tc(OH)4, U(OH)4, or Am(OH)3. Colloids have been suspected of enabling the long-range transport of plutonium at the Nevada Nuclear Test Site. They have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations because of the process of ultrafiltration occurring in dense clay membranes. The question is less clear for small organic colloids, which are often mixed in porewater with truly dissolved organic molecules. In soil science, the colloidal fraction in soils consists of tiny clay and humus particles that are less than 1 μm in diameter and carry positive and/or negative electrostatic charges that vary depending on the chemical conditions of the soil sample, i.e. soil pH. Intravenous therapy Colloid solutions used in intravenous therapy belong to a major group of volume expanders, and can be used for intravenous fluid replacement. Colloids preserve a high colloid osmotic pressure in the blood, and therefore, they should theoretically preferentially increase the intravascular volume, whereas other types of volume expanders, called crystalloids, also increase the interstitial volume and intracellular volume. However, there is still controversy about the actual difference in efficacy between the two, and much of the research related to this use of colloids is based on fraudulent research by Joachim Boldt. Another difference is that crystalloids generally are much cheaper than colloids. References Chemical mixtures Colloidal chemistry Condensed matter physics Soft matter Dosage forms
5347
https://en.wikipedia.org/wiki/Chinese
Chinese
Chinese may refer to: Something related to China Chinese people, people identified with China, through nationality, citizenship, and/or ethnicity Han Chinese, East Asian ethnic group native to China. Zhonghua minzu, the supra-ethnic concept of the Chinese nation List of ethnic groups in China, people of various ethnicities in contemporary China Ethnic minorities in China, people of non-Han Chinese ethnicities in modern China Ethnic groups in Chinese history, people of various ethnicities in historical China Nationals of the People's Republic of China Nationals of the Republic of China Overseas Chinese, Chinese people residing outside the territories of mainland China, Hong Kong, Macau, and Taiwan Sinitic languages, the major branch of the Sino-Tibetan language family Chinese language, a group of related languages spoken predominantly in China, sharing a written script (Chinese characters in traditional and simplified forms) Standard Chinese, the standard form of Mandarin Chinese in mainland China, similar to forms of Mandarin Chinese in Taiwan and Singapore Varieties of Chinese, topolects grouped under Chinese languages Written Chinese, writing scripts used for Chinese languages Chinese characters, logograms used for the writing of East Asian languages Chinese cuisine, styles of food originating from China or their derivatives "Chinese", a song about take out meals by Lily Allen from It's Not Me, It's You See also Chinese citizen (disambiguation) Tang Chinese (disambiguation) Language and nationality disambiguation pages
5350
https://en.wikipedia.org/wiki/Riding%20shotgun
Riding shotgun
"Riding shotgun" was a phrase used to describe the bodyguard who rides alongside a stagecoach driver, typically armed with a break-action shotgun, called a coach gun, to ward off bandits or hostile Native Americans. In modern use, it refers to the practice of sitting alongside the driver in a moving vehicle. The coining of this phrase dates to 1905 at the latest. Etymology The expression "riding shotgun" is derived from "shotgun messenger", a colloquial term for "express messenger", when stagecoach travel was popular during the American Wild West and the Colonial period in Australia. The person rode alongside the driver. The first known use of the phrase "riding shotgun" was in the 1905 novel The Sunset Trail by Alfred Henry Lewis. It was later used in print and especially film depiction of stagecoaches and wagons in the Old West in danger of being robbed or attacked by bandits. A special armed employee of the express service using the stage for transportation of bullion or cash would sit beside the driver, carrying a short shotgun (or alternatively a rifle), to provide an armed response in case of threat to the cargo, which was usually a strongbox. Absence of an armed person in that position often signaled that the stage was not carrying a strongbox, but only passengers. Historical examples Tombstone, Arizona Territory On the evening of March 15, 1881, a Kinnear & Company stagecoach carrying US$26,000 in silver bullion () was en route from the boom town of Tombstone, Arizona Territory to Benson, Arizona, the nearest freight terminal. Bob Paul, who had run for Pima County Sheriff and was contesting the election he lost due to ballot-stuffing, was temporarily working once again as the Wells Fargo shotgun messenger. He had taken the reins and driver's seat in Contention City because the usual driver, a well-known and popular man named Eli "Budd" Philpot, was ill. Philpot was riding shotgun. Near Drew's Station, just outside Contention City, a man stepped into the road and commanded them to "Hold!" Three cowboys attempted to rob the stage. Paul, in the driver's seat, fired his shotgun and emptied his revolver at the robbers, wounding a cowboy later identified as Bill Leonard in the groin. Philpot, riding shotgun, and passenger Peter Roerig, riding in the rear dickey seat, were both shot and killed. The horses spooked and Paul wasn't able to bring the stage under control for almost a mile, leaving the robbers with nothing. Paul, who normally rode shotgun, later said he thought the first shot killing Philpot had been meant for him. When Wyatt Earp first arrived in Tombstone in December 1879, he initially took a job as a stagecoach shotgun messenger for Wells Fargo, guarding shipments of silver bullion. When Earp was appointed Pima County Deputy Sheriff on July 27, 1881, his brother Morgan Earp took over his job. Historical weapon When Wells, Fargo & Co. began regular stagecoach service from Tipton, Missouri to San Francisco, California in 1858, they issued shotguns to its drivers and guards for defense along the perilous 2,800 mile route. The guard was called a shotgun messenger and they were issued a Coach gun, typically a 10-gauge or 12-gauge, short, double-barreled shotgun. Modern usage More recently, the term has been applied to a game, usually played by groups of friends to determine who rides beside the driver in a car. 
Typically, this involves claiming the right to ride shotgun by being the first person to call out "shotgun" when everyone is in view of the vehicle; in some regions, calling shotgun too early disqualifies one from the game. The game is intended to make the choice fair by setting aside most notions of seniority (except that parents and significant others often get shotgun automatically), and it avoids the conflicts that previously arose when deciding who gets to ride shotgun. See also Coach gun Drive-by shooting Shotgun messenger References American cultural conventions Car games Western (genre) staples and terminology
5355
https://en.wikipedia.org/wiki/Cooking
Cooking
Cooking, also known as cookery or professionally as the culinary arts, is the art, science and craft of using heat to make food more palatable, digestible, nutritious, or safe. Cooking techniques and ingredients vary widely, from grilling food over an open fire to using electric stoves, to baking in various types of ovens, reflecting local conditions. Types of cooking also depend on the skill levels and training of the cooks. Cooking is done both by people in their own dwellings and by professional cooks and chefs in restaurants and other food establishments. Preparing food with heat or fire is an activity unique to humans. Archaeological evidence of cooking fires from at least 300,000 years ago exists, but some estimate that humans started cooking up to 2 million years ago. The expansion of agriculture, commerce, trade, and transportation between civilizations in different regions offered cooks many new ingredients. New inventions and technologies, such as the invention of pottery for holding and boiling water, expanded cooking techniques. Some modern cooks apply advanced scientific techniques to food preparation to further enhance the flavor of the dish served. History Phylogenetic analysis suggests that early hominids may have adopted cooking 1 million to 2 million years ago. Analysis of burnt bone fragments and plant ashes from the Wonderwerk Cave in South Africa has provided evidence supporting control of fire by early humans by 1 million years ago. In his seminal work Catching Fire: How Cooking Made Us Human, Richard Wrangham suggested that the evolution of bipedalism and a large cranial capacity meant that early Homo habilis regularly cooked food. However, unequivocal evidence in the archaeological record for the controlled use of fire begins around 400,000 BCE, long after the appearance of Homo erectus. Archaeological evidence from 300,000 years ago, in the form of ancient hearths, earth ovens, burnt animal bones, and flint, is found across Europe and the Middle East. The oldest evidence (via heated fish teeth from a deep cave) of controlled use of fire to cook food by archaic humans was dated to ~780,000 years ago. Anthropologists think that widespread cooking fires began about 250,000 years ago when hearths first appeared. Recently, the earliest hearths have been reported to be at least 790,000 years old. Communication between the Old World and the New World in the Columbian Exchange influenced the history of cooking. The movement of foods across the Atlantic from the New World, such as potatoes, tomatoes, maize, beans, bell pepper, chili pepper, vanilla, pumpkin, cassava, avocado, peanut, pecan, cashew, pineapple, blueberry, sunflower, chocolate, gourds, and squash, had a profound effect on Old World cooking. The movement of foods across the Atlantic from the Old World, such as cattle, sheep, pigs, wheat, oats, barley, rice, apples, pears, peas, chickpeas, green beans, mustard, and carrots, similarly changed New World cooking. In the 17th and 18th centuries, food was a classic marker of identity in Europe. In the 19th-century "Age of Nationalism", cuisine became a defining symbol of national identity. The Industrial Revolution brought mass-production, mass-marketing, and standardization of food. Factories processed, preserved, canned, and packaged a wide variety of foods, and processed cereals quickly became a defining feature of the American breakfast. In the 1920s, freezing methods, cafeterias, and fast food restaurants emerged. Ingredients Most ingredients in cooking are derived from living organisms.
Vegetables, fruits, grains and nuts as well as herbs and spices come from plants, while meat, eggs, and dairy products come from animals. Mushrooms and the yeast used in baking are kinds of fungi. Cooks also use water and minerals such as salt. Cooks can also use wine or spirits. Naturally occurring ingredients contain various amounts of molecules called proteins, carbohydrates and fats. They also contain water and minerals. Cooking involves a manipulation of the chemical properties of these molecules. Carbohydrates Carbohydrates include the common sugar, sucrose (table sugar), a disaccharide, and such simple sugars as glucose (made by enzymatic splitting of sucrose) and fructose (from fruit), and starches from sources such as cereal flour, rice, arrowroot and potato. The interaction of heat and carbohydrate is complex. Long-chain sugars such as starch tend to break down into more digestible simpler sugars. If the sugars are heated so that all water of crystallisation is driven off, caramelization starts, with the sugar undergoing thermal decomposition with the formation of carbon, and other breakdown products producing caramel. Similarly, the heating of sugars and proteins causes the Maillard reaction, a basic flavor-enhancing technique. An emulsion of starch with fat or water can, when gently heated, provide thickening to the dish being cooked. In European cooking, a mixture of butter and flour called a roux is used to thicken liquids to make stews or sauces. In Asian cooking, a similar effect is obtained from a mixture of rice or corn starch and water. These techniques rely on the properties of starches to create simpler mucilaginous saccharides during cooking, which causes the familiar thickening of sauces. This thickening will break down, however, under additional heat. Fats Types of fat include vegetable oils, animal products such as butter and lard, as well as fats from grains, including maize and flax oils. Fats are used in a number of ways in cooking and baking. To prepare stir fries, grilled cheese or pancakes, the pan or griddle is often coated with fat or oil. Fats are also used as an ingredient in baked goods such as cookies, cakes and pies. Fats can reach temperatures higher than the boiling point of water, and are often used to conduct high heat to other ingredients, such as in frying, deep frying or sautéing. Fats are used to add flavor to food (e.g., butter or bacon fat), prevent food from sticking to pans and create a desirable texture. Proteins Edible animal material, including muscle, offal, milk, eggs and egg whites, contains substantial amounts of protein. Almost all vegetable matter (in particular legumes and seeds) also includes proteins, although generally in smaller amounts. Mushrooms have high protein content. Any of these may be sources of essential amino acids. When proteins are heated they become denatured (unfolded) and change texture. In many cases, this causes the structure of the material to become softer or more friable – meat becomes cooked and is more friable and less flexible. In some cases, proteins can form more rigid structures, such as the coagulation of albumen in egg whites. The formation of a relatively rigid but flexible matrix from egg white provides an important component in baking cakes, and also underpins many desserts based on meringue. Water Cooking often involves water, and water-based liquids. These can be added in order to immerse the substances being cooked (this is typically done with water, stock or wine). 
Alternatively, the foods themselves can release water. A favorite method of adding flavor to dishes is to save the liquid for use in other recipes. Liquids are so important to cooking that the name of the cooking method used is often based on how the liquid is combined with the food, as in steaming, simmering, boiling, braising and blanching. Heating liquid in an open container results in rapidly increased evaporation, which concentrates the remaining flavor and ingredients; this is a critical component of both stewing and sauce making. Vitamins and minerals Vitamins and minerals are required for normal metabolism; those that the body cannot manufacture itself must come from external sources. Vitamins come from several sources, including fresh fruit and vegetables (vitamin C), carrots and liver (vitamin A), cereal bran, bread and liver (B vitamins), fish liver oil (vitamin D) and fresh green vegetables (vitamin K). Many minerals are also essential in small quantities, including iron, calcium, magnesium, sodium chloride and sulfur, and in very small quantities copper, zinc and selenium. The micronutrients, minerals, and vitamins in fruit and vegetables may be destroyed or eluted by cooking. Vitamin C is especially prone to oxidation during cooking and may be completely destroyed by protracted cooking. The bioavailability of some vitamins, such as thiamin, vitamin B6, niacin, folate, and carotenoids, is increased with cooking by being freed from the food microstructure. Blanching or steaming vegetables is a way of minimizing vitamin and mineral loss in cooking. Methods There are many methods of cooking, most of which have been known since antiquity. These include baking, roasting, frying, grilling, barbecuing, smoking, boiling, steaming and braising. A more recent innovation is microwaving. Various methods use differing levels of heat and moisture and vary in cooking time. The method chosen greatly affects the result because some foods are more appropriate to some methods than others. Some major hot cooking techniques include: Roasting Roasting – Barbecuing – Grilling/Broiling – Rotisserie – Searing Baking Baking – Baking Blind – Flashbaking Boiling Boiling – Blanching – Braising – Coddling – Double steaming – Infusion – Poaching – Pressure cooking – Simmering – Smothering – Steaming – Steeping – Stewing – Stone boiling – Vacuum flask cooking Frying Fry – Air frying – Deep frying – Gentle frying – Hot salt frying – Hot sand frying – Pan frying – Pressure frying – Sautéing – Shallow frying – Stir frying – Vacuum frying Steaming Steaming works by boiling water continuously, causing it to vaporise into steam; the steam then carries heat to the nearby food, thus cooking the food. Many consider it a healthy form of cooking because it holds nutrients within the vegetable or meat being cooked. En papillote – The food is put into a pouch and then baked, allowing its own moisture to steam the food. Smoking Smoking is the process of flavoring, cooking, or preserving food by exposing it to smoke from burning or smoldering material, most often wood. Health and safety Indoor air pollution As of 2021, over 2.6 billion people cook using open fires or inefficient stoves fueled by kerosene, biomass, and coal. These cooking practices use fuels and technologies that produce high levels of household air pollution, causing 3.8 million premature deaths annually.
Of these deaths, 27% are from pneumonia, 27% from ischaemic heart disease, 20% from chronic obstructive pulmonary disease, 18% from stroke, and 8% from lung cancer. Women and young children are disproportionately affected, since they spend the most time near the hearth. Safety while cooking Hazards while cooking can include unseen slippery surfaces (such as oil stains, water droplets, or items that have fallen on the floor), cuts (about a third of the US's estimated annual 400,000 knife injuries are kitchen-related), and burns or fires. To prevent such injuries there are protections such as protective clothing, anti-slip shoes, fire extinguishers and more. Food safety Cooking can prevent many foodborne illnesses that would otherwise occur if raw food is consumed. When heat is used in the preparation of food, it can kill or inactivate harmful organisms, such as bacteria and viruses, as well as various parasites such as tapeworms and Toxoplasma gondii. Food poisoning and other illness from uncooked or poorly prepared food may be caused by bacteria such as pathogenic strains of Escherichia coli, Salmonella typhimurium and Campylobacter, viruses such as noroviruses, and protozoa such as Entamoeba histolytica. Bacteria, viruses and parasites may be introduced through salad, meat that is uncooked or cooked rare, and unboiled water. The sterilizing effect of cooking depends on the temperature, the cooking time, and the technique used. Some food spoilage bacteria such as Clostridium botulinum or Bacillus cereus can form spores that survive boiling, which then germinate and regrow after the food has cooled. This makes it unsafe to reheat cooked food more than once. Cooking increases the digestibility of many foods which are inedible or poisonous when raw. For example, raw cereal grains are hard to digest, while kidney beans are toxic when raw or improperly cooked due to the presence of phytohaemagglutinin, which is inactivated by thorough cooking for at least ten minutes. Food safety depends on the safe preparation, handling, and storage of food. Food spoilage bacteria proliferate rapidly within the "danger zone" temperature range, so food should not be stored in this temperature range. Washing of hands and surfaces, especially when handling different meats, and keeping raw food separate from cooked food to avoid cross-contamination, are good practices in food preparation. Foods prepared on plastic cutting boards may be less likely to harbor bacteria than wooden ones. Washing and disinfecting cutting boards, especially after use with raw meat, poultry, or seafood, reduces the risk of contamination. Effects on nutritional content of food Proponents of raw foodism argue that cooking food increases the risk of some detrimental effects on food and health. They point out that during the cooking of vegetables and fruit containing vitamin C, the vitamin elutes into the cooking water and becomes degraded through oxidation. Peeling vegetables can also substantially reduce the vitamin C content, especially in the case of potatoes, where most of the vitamin C is in the skin. However, research has shown that in the specific case of carotenoids a greater proportion is absorbed from cooked vegetables than from raw vegetables. Sulforaphane, a glucosinolate breakdown product, is present in vegetables such as broccoli, and is mostly destroyed when the vegetable is boiled.
Although there has been some basic research on how sulforaphane might exert beneficial effects in vivo, there is no high-quality evidence for its efficacy against human diseases. The United States Department of Agriculture has studied retention data for 16 vitamins, 8 minerals, and alcohol for approximately 290 foods across various cooking methods. Carcinogens In a human epidemiological analysis by Richard Doll and Richard Peto in 1981, diet was estimated to cause a large percentage of cancers. Studies suggest that around 32% of cancer deaths may be avoidable by changes to the diet. Some of these cancers may be caused by carcinogens in food generated during the cooking process, although it is often difficult to identify the specific components in the diet that serve to increase cancer risk. Several studies published since 1990 indicate that cooking meat at high temperature creates heterocyclic amines (HCAs), which are thought to increase cancer risk in humans. Researchers at the National Cancer Institute found that human subjects who ate beef rare or medium-rare had less than one third the risk of stomach cancer of those who ate beef medium-well or well-done. While the only ways to avoid HCAs in meat entirely may be to avoid meat or to eat it raw, the National Cancer Institute states that cooking meat at lower temperatures creates "negligible amounts" of HCAs. Also, microwaving meat before cooking may reduce HCAs by 90% by reducing the time needed for the meat to be cooked at high heat. Nitrosamines are found in some foods, and may be produced by some cooking processes from proteins or from nitrites used as food preservatives; cured meat such as bacon has been found to be carcinogenic, with links to colon cancer. Ascorbate, which is added to cured meat, however, reduces nitrosamine formation. Baking, grilling or broiling food, especially starchy foods, until a toasted crust is formed generates significant concentrations of acrylamide. This discovery in 2002 led to international health concerns. Subsequent research has however found that it is not likely that the acrylamides in burnt or well-cooked food cause cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a "myth". Scientific aspects The scientific study of cooking has become known as molecular gastronomy. This is a subdiscipline of food science concerning the physical and chemical transformations that occur during cooking. Important contributions have been made by scientists, chefs and authors such as Hervé This (chemist), Nicholas Kurti (physicist), Peter Barham (physicist), Harold McGee (author), Shirley Corriher (biochemist, author) and Robert Wolke (chemist, author). Molecular gastronomy is distinct from the application of scientific knowledge to cooking, that is, "molecular cooking" (for the technique) or "molecular cuisine" (for a culinary style), for which chefs such as Raymond Blanc, Philippe and Christian Conticini, Ferran Adria, Heston Blumenthal and Pierre Gagnaire are known. Chemical processes central to cooking include hydrolysis (in particular beta elimination of pectins during the thermal treatment of plant tissues), pyrolysis, and glycation reactions, often wrongly called Maillard reactions. Cooking foods with heat depends on many factors: the specific heat of an object, thermal conductivity, and (perhaps most significantly) the difference in temperature between the two objects. Thermal diffusivity is the combination of specific heat, conductivity and density that determines how long it will take for the food to reach a certain temperature.
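The relationship between these properties can be made concrete with a small calculation. The sketch below is illustrative only: the property values are assumed, round numbers for a water-rich food rather than measured data, and the heating-time estimate is a rough order-of-magnitude scaling, not a recipe guideline.

```python
# Minimal sketch: how specific heat, conductivity and density combine into
# thermal diffusivity, and a rough order-of-magnitude heating time.
# The property values below are assumed, illustrative figures, not measurements.

def thermal_diffusivity(k, rho, cp):
    """Thermal diffusivity alpha = k / (rho * cp), in m^2/s."""
    return k / (rho * cp)

def characteristic_heating_time(half_thickness_m, alpha):
    """Order-of-magnitude time (s) for heat to penetrate a slab: t ~ L^2 / alpha."""
    return half_thickness_m ** 2 / alpha

k = 0.5       # thermal conductivity, W/(m*K)   (assumed)
rho = 1050.0  # density, kg/m^3                 (assumed)
cp = 3500.0   # specific heat, J/(kg*K)         (assumed)

alpha = thermal_diffusivity(k, rho, cp)
t = characteristic_heating_time(0.02, alpha)   # 2 cm half-thickness
print(f"alpha = {alpha:.2e} m^2/s, rough heating time ~ {t/60:.0f} minutes")
```

Because the penetration time scales with the square of the thickness, doubling the thickness of a piece of food roughly quadruples the time heat needs to reach its centre, which is why thick cuts take disproportionately longer to cook through than thin ones.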
Home-cooking and commercial cooking Home cooking has traditionally been a process carried out informally in a home or around a communal fire, and can be enjoyed by all members of the family, although in many cultures women bear primary responsibility. Cooking is also often carried out outside of personal quarters, for example at restaurants or schools. Bakeries were one of the earliest forms of cooking outside the home, and bakeries in the past often offered the cooking of pots of food provided by their customers as an additional service. In the present day, factory food preparation has become common, with many "ready-to-eat" and "ready-to-cook" foods being prepared and cooked in factories, and home cooks using a mixture of scratch-made and factory-made foods to put together a meal. The nutritional value of commercially prepared foods has been found to be inferior to that of home-made foods. Home-cooked meals tend to be healthier, with fewer calories and less saturated fat, cholesterol and sodium on a per-calorie basis, while providing more fiber, calcium, and iron. The ingredients are also directly sourced, so there is control over authenticity, taste, and nutritional value. The superior nutritional quality of home-cooking could therefore play a role in preventing chronic disease. Cohort studies following the elderly over 10 years show that adults who cook their own meals have significantly lower mortality, even when controlling for confounding variables. "Home-cooking" may be associated with comfort food, and some commercially produced foods and restaurant meals are presented through advertising or packaging as having been "home-cooked", regardless of their actual origin. This trend began in the 1920s and is attributed to people in urban areas of the U.S. wanting homestyle food even though their schedules and smaller kitchens made cooking harder. See also Carryover cooking Cookbook Cooker Cooking weights and measures Culinary arts Culinary profession Cooking school Dishwashing Food and cooking hygiene Food industry Food preservation Food writing Foodpairing Gourmet Museum and Library High altitude cooking International food terms List of cooking appliances List of cuisines List of films about cooking List of food preparation utensils List of ovens List of stoves Scented water Staple (cooking)
5360
https://en.wikipedia.org/wiki/Card%20game
Card game
A card game is any game using playing cards as the primary device with which the game is played, be they traditional or game-specific. Countless card games exist, including families of related games (such as poker). A small number of card games played with traditional decks have formally standardized rules and international tournaments, but most are folk games whose rules may vary by region, culture, location or from circle to circle. Traditional card games are played with a deck or pack of playing cards which are identical in size and shape. Each card has two sides, the face and the back. Normally the backs of the cards are indistinguishable. The faces of the cards may all be unique, or there can be duplicates. The composition of a deck is known to each player. In some cases several decks are shuffled together to form a single pack or shoe. Modern card games usually have bespoke decks, often with a vast number of cards, and can include number or action cards. This type of game is generally regarded as part of the board game hobby. Games using playing cards exploit the fact that cards are individually identifiable from one side only, so that each player knows only the cards they hold and not those held by anyone else. For this reason card games are often characterized as games of chance or "imperfect information", as distinct from games of strategy or perfect information, where the current position is fully visible to all players throughout the game. Many games that are not generally placed in the family of card games do in fact use cards for some aspect of their gameplay. Some games that are placed in the card game genre involve a board. The distinction is that the gameplay of a card game chiefly depends on the use of the cards by players (the board is a guide for scorekeeping or for card placement), while board games (the principal non-card game genre to use cards) generally focus on the players' positions on the board, and use the cards for some secondary purpose. Types Trick-taking games There are two main types of trick-taking game, which have different objectives. Both are based on the play of multiple tricks, in each of which each player plays a single card from their hand, and based on the values of the played cards one player wins or "takes" the trick. Plain-trick games. Many common Anglo-American games fall into this category. The usual objective is to take the most tricks, but there are variations in which the aim is to take all tricks, to take as few tricks (or penalty cards) as possible, or to take an exact number of tricks. Bridge, Whist and Spades are popular examples. Hearts, Black Lady and Black Maria are examples of reverse games in which the aim is to avoid certain cards. Point-trick games. These are all European or of European origin. Individual cards have specific point values and the objective is usually to amass the majority of points by taking tricks, especially those with higher-value cards. The main group is the Ace-Ten family, which includes many national games such as German Skat, French Belote, Dutch Klaberjass, Austrian Schnapsen, Spanish Tute, Swiss Jass, Portuguese Sueca, Italian Briscola and Czech Mariáš. Pinochle is an American example of French or Swiss origin. All Tarot card games are of the point-trick variety, including German Cego, Austrian Tarock, French Tarot and Italian Minchiate. Matching games The object of a matching (or sometimes "melding") game is to acquire particular groups of matching cards before an opponent can do so.
In Rummy, this is done through drawing and discarding, and the groups are called melds. Mahjong is a very similar game played with tiles instead of cards. Non-Rummy examples of match-type games generally fall into the "fishing" genre and include the children's games Go Fish and Old Maid. Shedding games In a shedding game, players start with a hand of cards, and the object of the game is to be the first player to discard all cards from one's hand. Common shedding games include Crazy Eights (commercialized by Mattel as Uno) and Daihinmin. Some matching-type games are also shedding-type games; some variants of Rummy such as Paskahousu, Phase 10, Rummikub, the bluffing game I Doubt It, and the children's games Musta Maija and Old Maid, fall into both categories. Catch and collect games The object of an accumulating game is to acquire all cards in the deck. Examples include most War type games, and games involving slapping a discard pile such as Slapjack. Egyptian Ratscrew has both of these features. Fishing games In fishing games, cards from the hand are played against cards in a layout on the table, capturing table cards if they match. Fishing games are popular in many nations, including China, where there are many diverse fishing games. Scopa is considered one of the national card games of Italy. Cassino is the only fishing game to be widely played in English-speaking countries. Zwicker has been described as a "simpler and jollier version of Cassino", played in Germany. Tablanet (tablić) is a fishing-style game popular in Balkans. Comparing games Comparing card games are those where hand values are compared to determine the winner, also known as "vying" or "showdown" games. Poker, blackjack, mus, and baccarat are examples of comparing card games. As seen, nearly all of these games are designed as gambling games. Patience and solitaire games Solitaire games are designed to be played by one player. Most games begin with a specific layout of cards, called a tableau, and the object is then either to construct a more elaborate final layout, or to clear the tableau and/or the draw pile or stock by moving all cards to one or more "discard" or "foundation" piles. Drinking card games Drinking card games are drinking games using cards, in which the object in playing the game is either to drink or to force others to drink. Many games are ordinary card games with the establishment of "drinking rules"; President, for instance, is virtually identical to Daihinmin but with additional rules governing drinking. Poker can also be played using a number of drinks as the wager. Another game often played as a drinking game is Toepen, quite popular in the Netherlands. Some card games are designed specifically to be played as drinking games. Compendium games Compendium games consist of a sequence of different contracts played in succession. A common pattern is for a number of reverse deals to be played, in which the aim is to avoid certain cards, followed by a final contract which is a domino-type game. Examples include: Barbu, Herzeln, Lorum and Rosbiratschka. In other games, such as Quodlibet and Rumpel, there is a range of widely varying contracts. Collectible card games (CCGs) Collectible card games (CCG) are proprietary playing card games. CCGs are games of strategy between two or more players. Each player has their own deck constructed from a very large pool of unique cards in the commercial market. The cards have different effects, costs, and art. 
New card sets are released periodically and sold as starter decks or booster packs. Obtaining the different cards makes the game a collectible card game, and cards are sold or traded on the secondary market. Magic: The Gathering, Pokémon, and Yu-Gi-Oh! are well-known collectible card games. Living card games (LCGs) Living card games (LCGs) are similar to collectible card games (CCGs), with their most distinguishing feature being a fixed distribution method, which breaks away from the traditional collectible card game format. While new cards for CCGs are usually sold in the form of starter decks or booster packs (the latter being often randomized), LCGs thrive on a model that requires players to acquire one core set in order to play the game, which players can further customize by acquiring extra sets or expansions featuring new content in the form of cards or scenarios. No randomization is involved in the process, thus players that get the same sets or expansions will get the exact same content. The term was popularized by Fantasy Flight Games (FFG) and mainly applies to its products, however some tabletop gaming companies can be seen using a very similar model. Casino or gambling card games These games revolve around wagers of money. Though virtually any game in which there are winning and losing outcomes can be wagered on, these games are specifically designed to make the betting process a strategic part of the game. Some of these games involve players betting against each other, such as poker, while in others, like blackjack, players wager against the house. Poker games Poker is a family of gambling games in which players bet into a pool, called the pot, the value of which changes as the game progresses that the value of the hand they carry will beat all others according to the ranking system. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence. Other card games Many other card games have been designed and published on a commercial or amateur basis. In some cases, the game uses the standard 52-card deck, but the object is unique. In Eleusis, for example, players play single cards, and are told whether the play was legal or illegal, in an attempt to discover the underlying rules made up by the dealer. Most of these games however typically use a specially made deck of cards designed specifically for the game (or variations of it). The decks are thus usually proprietary, but may be created by the game's players. Uno, Phase 10, Set, and 1000 Blank White Cards are popular dedicated-deck card games; 1000 Blank White Cards is unique in that the cards for the game are designed by the players of the game while playing it; there is no commercially available deck advertised as such. Simulation card games A deck of either customised dedicated cards or a standard deck of playing cards with assigned meanings is used to simulate the actions of another activity, for example card football. Fictional card games Many games, including card games, are fabricated by science fiction authors and screenwriters to distance a culture depicted in the story from present-day Western culture. They are commonly used as filler to depict background activities in an atmosphere like a bar or rec room, but sometimes the drama revolves around the play of the game. 
Some of these games become real card games as the holder of the intellectual property develops and markets a suitable deck and ruleset for the game, while others lack sufficient descriptions of rules, or depend on cards or other hardware that are infeasible or physically impossible. Typical structure of card games Number and association of players Any specific card game imposes restrictions on the number of players. The most significant dividing lines run between one-player games and two-player games, and between two-player games and multi-player games. Card games for one player are known as solitaire or patience card games. (See list of solitaire card games.) Generally speaking, they are in many ways special and atypical, although some of them have given rise to two- or multi-player games such as Spite and Malice. In card games for two players, usually not all cards are distributed to the players, as they would otherwise have perfect information about the game state. Two-player games have always been immensely popular and include some of the most significant card games such as piquet, bezique, sixty-six, klaberjass, gin rummy and cribbage. Many multi-player games started as two-player games that were adapted to a greater number of players. For such adaptations a number of non-obvious choices must be made beginning with the choice of a game orientation. One way of extending a two-player game to more players is by building two teams of equal size. A common case is four players in two fixed partnerships, sitting crosswise as in whist and contract bridge. Partners sit opposite to each other and cannot see each other's hands. If communication between the partners is allowed at all, then it is usually restricted to a specific list of permitted signs and signals. 17th-century French partnership games such as triomphe were special in that partners sat next to each other and were allowed to communicate freely so long as they did not exchange cards or play out of order. Another way of extending a two-player game to more players is as a cut-throat or individual game, in which all players play for themselves, and win or lose alone. Most such card games are round games, i.e. they can be played by any number of players starting from two or three, so long as there are enough cards for all. For some of the most interesting games such as ombre, tarot and skat, the associations between players change from hand to hand. Ultimately players all play on their own, but for each hand, some game mechanism divides the players into two teams. Most typically these are solo games, i.e. games in which one player becomes the soloist and has to achieve some objective against the others, who form a team and win or lose all their points jointly. But in games for more than three players, there may also be a mechanism that selects two players who then have to play against the others. Direction of play The players of a card game normally form a circle around a table or other space that can hold cards. The game orientation or direction of play, which is only relevant for three or more players, can be either clockwise or counterclockwise. It is the direction in which various roles in the game proceed. (In real-time card games, there may be no need for a direction of play.) Most regions have a traditional direction of play, such as: Counterclockwise in most of Asia and in Latin America. Clockwise in North America and Australia. 
Europe is roughly divided into a clockwise area in the north and a counterclockwise area in the south. The boundary runs between England, Ireland, Netherlands, Germany, Austria (mostly), Slovakia, Ukraine and Russia (clockwise) and France, Switzerland, Spain, Italy, Slovenia, Balkans, Hungary, Romania, Bulgaria, Greece and Turkey (counterclockwise). Games that originate in a region with a strong preference are often initially played in the original direction, even in regions that prefer the opposite direction. For games that have official rules and are played in tournaments, the direction of play is often prescribed in those rules. Determining who deals Most games have some form of asymmetry between players. The roles of players are normally expressed in terms of the dealer, i.e. the player whose task it is to shuffle the cards and distribute them to the players. Being the dealer can be a (minor or major) advantage or disadvantage, depending on the game. Therefore, after each played hand, the deal normally passes to the next player according to the game orientation. As it can still be an advantage or disadvantage to be the first dealer, there are some standard methods for determining who is the first dealer. A common method is by cutting, which works as follows. One player shuffles the deck and places it on the table. Each player lifts a packet of cards from the top, reveals its bottom card, and returns it to the deck. The player who reveals the highest (or lowest) card becomes dealer. In the case of a tie, the process is repeated by the tied players. For some games such as whist this process of cutting is part of the official rules, and the hierarchy of cards for the purpose of cutting (which need not be the same as that used otherwise in the game) is also specified. But in general, any method can be used, such as tossing a coin in case of a two-player game, drawing cards until one player draws an ace, or rolling dice. Hands, rounds and games A hand, also called a deal, is a unit of the game that begins with the dealer shuffling and dealing the cards as described below, and ends with the players scoring and the next dealer being determined. The set of cards that each player receives and holds in his or her hands is also known as that player's hand. The hand is over when the players have finished playing their hands. Most often this occurs when one player (or all) has no cards left. The player who sits after the dealer in the direction of play is known as eldest hand (or in two-player games as elder hand) or forehand. A game round consists of as many hands as there are players. After each hand, the deal is passed on in the direction of play, i.e. the previous eldest hand becomes the new dealer. Normally players score points after each hand. A game may consist of a fixed number of rounds. Alternatively it can be played for a fixed number of points. In this case it is over with the hand in which a player reaches the target score. Shuffling Shuffling is the process of bringing the cards of a pack into a random order. There are a large number of techniques with various advantages and disadvantages. Riffle shuffling is a method in which the deck is divided into two roughly equal-sized halves that are bent and then released, so that the cards interlace. Repeating this process several times randomizes the deck well, but the method is harder to learn than some others and may damage the cards. 
The overhand shuffle and the Hindu shuffle are two techniques that work by taking batches of cards from the top of the deck and reassembling them in the opposite order. They are easier to learn but must be repeated more often. A method suitable for small children consists in spreading the cards on a large surface and moving them around before picking up the deck again. This is also the most common method for shuffling tiles such as dominoes. For casino games that are played for large sums it is vital that the cards be properly randomized, but for many games this is less critical, and in fact player experience can suffer when the cards are shuffled too well. The official skat rules stipulate that the cards are shuffled well, but according to a decision of the German skat court, a one-handed player should ask another player to do the shuffling, rather than use a shuffling machine, as it would shuffle the cards too well. French belote rules go so far as to prescribe that the deck never be shuffled between hands. Deal The dealer takes all of the cards in the pack, arranges them so that they are in a uniform stack, and shuffles them. In strict play, the dealer then offers the deck to the previous player (in the sense of the game direction) for cutting. If the deal is clockwise, this is the player to the dealer's right; if counterclockwise, it is the player to the dealer's left. The invitation to cut is made by placing the pack, face downward, on the table near the player who is to cut: who then lifts the upper portion of the pack clear of the lower portion and places it alongside. (Normally the two portions have about equal size. Strict rules often indicate that each portion must contain a certain minimum number of cards, such as three or five.) The formerly lower portion is then replaced on top of the formerly upper portion. Instead of cutting, one may also knock on the deck to indicate that one trusts the dealer to have shuffled fairly. The actual deal (distribution of cards) is done in the direction of play, beginning with eldest hand. The dealer holds the pack, face down, in one hand, and removes cards from the top of it with his or her other hand to distribute to the players, placing them face down on the table in front of the players to whom they are dealt. The cards may be dealt one at a time, or in batches of more than one card; and either the entire pack or a determined number of cards are dealt out. The undealt cards, if any, are left face down in the middle of the table, forming the stock (also called the talon, widow, skat or kitty depending on the game and region). Throughout the shuffle, cut, and deal, the dealer should prevent the players from seeing the faces of any of the cards. The players should not try to see any of the faces. Should a player accidentally see a card, other than one's own, proper etiquette would be to admit this. It is also dishonest to try to see cards as they are dealt, or to take advantage of having seen a card. Should a card accidentally become exposed, (visible to all), any player can demand a redeal (all the cards are gathered up, and the shuffle, cut, and deal are repeated) or that the card be replaced randomly into the deck ("burning" it) and a replacement dealt from the top to the player who was to receive the revealed card. When the deal is complete, all players pick up their cards, or "hand", and hold them in such a way that the faces can be seen by the holder of the cards but not the other players, or vice versa depending on the game. 
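The shuffle, cut and deal procedure described above can be summarised in a short sketch. This is an illustrative model only, assuming a 52-card French-suited pack, a single random cut with a minimum packet size, four players, and cards dealt one at a time beginning with eldest hand; it does not reproduce any particular game's official rules.

```python
import random

# Illustrative sketch of the shuffle, cut and deal described above.
# Assumptions (not from any official ruleset): a 52-card French-suited pack,
# one cut with at least `min_cut` cards in each portion, and cards dealt one
# at a time starting with eldest hand (the player after the dealer).

RANKS = "2 3 4 5 6 7 8 9 10 J Q K A".split()
SUITS = "spades hearts diamonds clubs".split()

def new_pack():
    return [f"{r} of {s}" for s in SUITS for r in RANKS]

def cut(pack, min_cut=3):
    """Lift an upper portion; the former lower portion ends up on top."""
    point = random.randint(min_cut, len(pack) - min_cut)
    upper, lower = pack[:point], pack[point:]
    return lower + upper

def deal(pack, players=4, cards_each=13, dealer=0):
    """Deal one card at a time in the direction of play, beginning with eldest hand."""
    hands = [[] for _ in range(players)]
    order = [(dealer + 1 + i) % players for i in range(players)]  # eldest hand first
    pack = pack[:]                           # do not mutate the caller's pack
    for _ in range(cards_each):
        for p in order:
            hands[p].append(pack.pop(0))     # take from the top of the pack
    return hands, pack                       # any remainder forms the stock

pack = new_pack()
random.shuffle(pack)                         # stands in for riffle/overhand shuffling
pack = cut(pack)
hands, stock = deal(pack, players=4, cards_each=13, dealer=0)
print(len(hands[0]), "cards per hand,", len(stock), "in the stock")
```

Dealing in batches of several cards, or leaving part of the pack undealt as a stock, only requires changing `cards_each` and the inner loop, which mirrors how individual games vary this basic procedure.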
It is helpful to fan one's cards out so that if they have corner indices all their values can be seen at once. In most games, it is also useful to sort one's hand, rearranging the cards in a way appropriate to the game. For example, in a trick-taking game it may be easier to have all one's cards of the same suit together, whereas in a rummy game one might sort them by rank or by potential combinations. Rules A new card game starts in a small way, either as someone's invention, or as a modification of an existing game. Those playing it may agree to change the rules as they wish. The rules that they agree on become the "house rules" under which they play the game. A set of house rules may be accepted as valid by a group of players wherever they play, as it may also be accepted as governing all play within a particular house, café, or club. When a game becomes sufficiently popular, so that people often play it with strangers, there is a need for a generally accepted set of rules. This need is often met when a particular set of house rules becomes generally recognized. For example, when Whist became popular in 18th-century England, players in the Portland Club agreed on a set of house rules for use on its premises. Players in some other clubs then agreed to follow the "Portland Club" rules, rather than go to the trouble of codifying and printing their own sets of rules. The Portland Club rules eventually became generally accepted throughout England and Western cultures. There is nothing static or "official" about this process. For the majority of games, there is no one set of universal rules by which the game is played, and the most common ruleset is no more or less than that. Many widely played card games, such as Canasta and Pinochle, have no official regulating body. The most common ruleset is often determined by the most popular distribution of rulebooks for card games. Perhaps the original compilation of popular playing card games was collected by Edmund Hoyle, a self-made authority on many popular parlor games. The U.S. Playing Card Company now owns the eponymous Hoyle brand, and publishes a series of rulebooks for various families of card games that have largely standardized the games' rules in countries and languages where the rulebooks are widely distributed. However, players are free to, and often do, invent "house rules" to supplement or even largely replace the "standard" rules. If there is a sense in which a card game can have an official set of rules, it is when that card game has an "official" governing body. For example, the rules of tournament bridge are governed by the World Bridge Federation, and by local bodies in various countries such as the American Contract Bridge League in the U.S., and the English Bridge Union in England. The rules of skat are governed by The International Skat Players Association and, in Germany, by the Deutscher Skatverband which publishes the Skatordnung. The rules of French tarot are governed by the Fédération Française de Tarot. The rules of Schafkopf are laid down by the Schafkopfschule in Munich. Even in these cases, the rules must only be followed at games sanctioned by these governing bodies or where the tournament organisers specify them. Players in informal settings are free to implement agreed supplemental or substitute rules. For example, in Schafkopf there are numerous local variants sometimes known as "impure" Schafkopf and specified by assuming the official rules and describing the additions e.g. 
"with Geier and Bettel, tariff 5/10 cents". Rule infractions An infraction is any action which is against the rules of the game, such as playing a card when it is not one's turn to play or the accidental exposure of a card, informally known as "bleeding." In many official sets of rules for card games, the rules specifying the penalties for various infractions occupy more pages than the rules specifying how to play correctly. This is tedious but necessary for games that are played seriously. Players who intend to play a card game at a high level generally ensure before beginning that all agree on the penalties to be used. When playing privately, this will normally be a question of agreeing house rules. In a tournament, there will probably be a tournament director who will enforce the rules when required and arbitrate in cases of doubt. If a player breaks the rules of a game deliberately, this is cheating. The rest of this section is therefore about accidental infractions, caused by ignorance, clumsiness, inattention, etc. As the same game is played repeatedly among a group of players, precedents build up about how a particular infraction of the rules should be handled. For example, "Sheila just led a card when it wasn't her turn. Last week when Jo did that, we agreed ... etc." Sets of such precedents tend to become established among groups of players, and to be regarded as part of the house rules. Sets of house rules may become formalized, as described in the previous section. Therefore, for some games, there is a "proper" way of handling infractions of the rules. But for many games, without governing bodies, there is no standard way of handling infractions. In many circumstances, there is no need for special rules dealing with what happens after an infraction. As a general principle, the person who broke a rule should not benefit from it, and the other players should not lose by it. An exception to this may be made in games with fixed partnerships, in which it may be felt that the partner(s) of the person who broke a rule should also not benefit. The penalty for an accidental infraction should be as mild as reasonable, consistent with there being a possible benefit to the person responsible. Playing cards The oldest surviving reference to the card game in world history is from the 9th century China, when the Collection of Miscellanea at Duyang, written by Tang-dynasty writer Su E, described Princess Tongchang (daughter of Emperor Yizong of Tang) playing the "leaf game" with members of the Wei clan (the family of the princess's husband) in 868 . The Song dynasty statesman and historian Ouyang Xiu has noted that paper playing cards arose in connection to an earlier development in the book format from scrolls to pages. Playing cards first appeared in Europe in the last quarter of the 14th century. The earliest European references speak of a Saracen or Moorish game called naib, and in fact an almost complete Mamluk Egyptian deck of 52 cards in a distinct oriental design has survived from around the same time, with the four suits swords, polo sticks, cups and coins and the ranks king, governor, second governor, and ten to one. The 1430s in Italy saw the invention of the tarot deck, a full Latin-suited deck augmented by suitless cards with painted motifs that played a special role as trumps. Tarot card games are still played with (subsets of) these decks in parts of Central Europe. 
A full tarot deck contains 14 cards in each suit: low cards labeled 1–10 and four court cards (jack, cavalier/knight, queen, and king), plus the fool or excuse card and 21 trump cards. In the 18th century the card images of the traditional Italian tarot decks became popular in cartomancy and evolved into "esoteric" decks used primarily for that purpose; today most tarot decks sold in North America are of the occult type, and are closely associated with fortune telling. In Europe, "playing tarot" decks remain popular for games, and have evolved since the 18th century to use regional suits (spades, hearts, diamonds and clubs in France; leaves, hearts, bells and acorns in Germany) as well as other familiar aspects of the English-pattern pack such as corner card indices and "stamped" card symbols for non-court cards. Decks differ regionally based on the number of cards needed to play the games; the French tarot consists of the "full" 78 cards, while Germanic, Spanish and Italian Tarot variants remove certain values (usually low suited cards) from the deck, creating a deck with as few as 32 cards. The French suits were introduced around 1480 and, in France, mostly replaced the earlier Latin suits of swords, clubs, cups and coins (which are still common in Spanish- and Portuguese-speaking countries as well as in some northern regions of Italy). The suit symbols, being very simple and single-color, could be stamped onto the playing cards to create a deck, thus only requiring special full-color card art for the court cards. This drastically simplifies the production of a deck of cards versus the traditional Italian deck, which used unique full-color art for each card in the deck. The French suits became popular in English playing cards in the 16th century (despite historic animosity between France and England), and from there were introduced to British colonies including North America. The rise of Western culture has led to the near-universal popularity and availability of French-suited playing cards even in areas with their own regional card art. In Japan, a distinct 48-card hanafuda deck is popular. It is derived from 16th-century Portuguese decks, after undergoing a long evolution driven by laws enacted by the Tokugawa shogunate attempting to ban the use of playing cards. The best-known deck internationally is the English pattern of the 52-card French deck, also called the International or Anglo-American pattern, used for such games as poker and contract bridge. It contains one card for each unique combination of the thirteen ranks and the four French suits: spades, hearts, diamonds, and clubs. The ranks (from highest to lowest in bridge and poker) are ace, king, queen, jack (or knave), and the numbers from ten down to two (or deuce). The trump cards and knight cards from the French playing tarot are not included. Originally the term knave was more common than "jack"; the card had been called a jack as part of the terminology of All-Fours since the 17th century, but the word was considered vulgar. (Note the exclamation by Estella in Charles Dickens's novel Great Expectations: "He calls the knaves, Jacks, this boy!") However, because the card abbreviation for knave ("Kn") was so close to that of the king ("K"), it was very easy to confuse them, especially after suits and rankings were moved to the corners of the card in order to enable people to fan them in one hand and still see all the values.
(The earliest known deck to place suits and rankings in the corner of the card is from 1693, but these cards did not become common until after 1864, when Hart reintroduced them along with the knave-to-jack change.) However, books of card games published in the third quarter of the 19th century evidently still referred to the "knave", and the term with this definition is still recognized in the United Kingdom. In the 17th century, a French five-trick gambling game called Bête became popular and spread to Germany, where it was called La Bete, and to England, where it was named Beast. It was a derivative of Triomphe and was the first card game in history to introduce the concept of bidding. Chinese handmade mother-of-pearl gaming counters were used for scoring and bidding in card games in the West during the approximate period of 1700–1840. The gaming counters would bear an engraving such as a coat of arms or a monogram to identify a family or individual. Many of the gaming counters also depict Chinese scenes, flowers or animals. Queen Charlotte is one prominent British individual who is known to have played with the Chinese gaming counters. Card games such as Ombre, Quadrille and Pope Joan were popular at the time and required counters for scoring. The production of counters declined after Whist, with its different scoring method, became the most popular card game in the West. Based on the association of card games and gambling, Pope Benedict XIV banned card games on October 17, 1750. See also Game of chance Game of skill R. F. Foster (games) Henry Jones (writer), who wrote under the pseudonym "Cavendish" John Scarne Dice game List of card games by number of cards External links International Playing Card Society Rules for historic card games Collection of rules to many card games
5361
https://en.wikipedia.org/wiki/Cross-stitch
Cross-stitch
Cross-stitch is a form of sewing and a popular form of counted-thread embroidery in which X-shaped stitches in a tiled, raster-like pattern are used to form a picture. The stitcher counts the threads on a piece of evenweave fabric (such as linen) in each direction so that the stitches are of uniform size and appearance. This form of cross-stitch is also called counted cross-stitch in order to distinguish it from other forms of cross-stitch. Sometimes cross-stitch is done on designs printed on the fabric (stamped cross-stitch); the stitcher simply stitches over the printed pattern. Cross-stitch is often executed on easily countable fabric called aida cloth whose weave creates a plainly visible grid of squares with holes for the needle at each corner. Fabrics used in cross-stitch include linen, aida cloth, and mixed-content fabrics called 'evenweave' such as jobelan. All cross-stitch fabrics are technically "evenweave" as the term refers to the fact that the fabric is woven to make sure that there are the same number of threads per inch in both the warp and the weft (i.e. vertically and horizontally). Fabrics are categorized by threads per inch (referred to as 'count'), which can range from 11 to 40 count. Counted cross-stitch projects are worked from a gridded pattern called a chart and can be used on any count fabric; the count of the fabric and the number of threads per stitch determine the size of the finished stitching. For example, if a given design is stitched on a 28 count cross-stitch fabric with each cross worked over two threads, the finished stitching size is the same as it would be on a 14 count aida cloth fabric with each cross worked over one square. These methods are referred to as "2 over 2" (2 embroidery threads used to stitch over 2 fabric threads) and "1 over 1" (1 embroidery thread used to stitch over 1 fabric thread or square), respectively. There are different methods of stitching a pattern, including the cross-country method where one colour is stitched at a time, or the parking method where one block of fabric is stitched at a time and the end of the thread is "parked" at the next point the same colour occurs in the pattern. History Cross-stitch can be found all over the world since the middle ages. Many folk museums show examples of clothing decorated with cross-stitch, especially from continental Europe and Asia. The cross-stitch sampler is called that because it was generally stitched by a young girl to learn how to stitch and to record alphabet and other patterns to be used in her household sewing. These samples of her stitching could be referred back to over the years. Often, motifs and initials were stitched on household items to identify their owner, or simply to decorate the otherwise-plain cloth. The earliest known cross stitch sampler made in the United States is currently housed at Pilgrim Hall in Plymouth, Massachusetts. The sampler was created by Loara Standish, daughter of Captain Myles Standish and pioneer of the Leviathan stitch, circa 1653. Traditionally, cross-stitch was used to embellish items like household linens, tablecloths, dishcloths, and doilies (only a small portion of which would actually be embroidered, such as a border). Although there are many cross-stitchers who still employ it in this fashion, it is now increasingly popular to work the pattern on pieces of fabric and hang them on the wall for decoration. Cross-stitch is also often used to make greeting cards, pillow tops, or as inserts for box tops, coasters and trivets. 
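The fabric-count arithmetic described above (the "1 over 1" and "2 over 2" conventions) can be written out as a short calculation. The sketch below is illustrative: the 140 × 98 chart size is a made-up example, and the function name is ad hoc rather than taken from any pattern-design tool.

```python
# Minimal sketch of the finished-size arithmetic for counted cross-stitch.
# Effective stitches per inch = fabric count / number of fabric threads stitched over,
# so a design worked "2 over 2" on 28-count evenweave finishes at the same size
# as the same design worked "1 over 1" on 14-count aida.

def finished_size(stitches_wide, stitches_high, fabric_count, over=1):
    """Return the finished (width, height) in inches for a charted design."""
    stitches_per_inch = fabric_count / over
    return stitches_wide / stitches_per_inch, stitches_high / stitches_per_inch

# Hypothetical 140 x 98 stitch chart:
print(finished_size(140, 98, fabric_count=14, over=1))   # (10.0, 7.0) inches on 14-count aida
print(finished_size(140, 98, fabric_count=28, over=2))   # (10.0, 7.0) inches, "2 over 2" on 28-count
print(finished_size(140, 98, fabric_count=28, over=1))   # (5.0, 3.5) inches, "1 over 1" on 28-count
```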
Multicoloured, shaded, painting-like patterns as we know them today are a fairly modern development, deriving from similar shaded patterns of Berlin wool work of the mid-nineteenth century. Besides designs created expressly for cross-stitch, there are software programs that convert a photograph or a fine art image into a chart suitable for stitching. One example of this is the cross-stitched reproduction of the Sistine Chapel charted and stitched by Joanna Lopianowski-Roberts. There are many cross-stitching "guilds" and groups across the United States and Europe which offer classes, collaborate on large projects, stitch for charity, and provide other ways for local cross-stitchers to get to know one another. Individually owned local needlework shops (LNS) often have stitching nights at their shops, or host weekend stitching retreats. Today, cotton floss is the most common embroidery thread. It is a thread made of mercerized cotton, composed of six strands that are only loosely twisted together and easily separable. While there are other manufacturers, the two most commonly used (and oldest) brands are DMC and Anchor, both of which have been manufacturing embroidery floss since the 1800s. Other materials used are pearl (or perle) cotton, Danish flower thread, silk and rayon. Different wool threads, metallic threads or other novelty threads are also used, sometimes for the whole work, but often for accents and embellishments. Hand-dyed cross-stitch floss is created just as the name implies: it is dyed by hand. Because of this, there are variations in the amount of color throughout the thread. Some variations are subtle, while others show a strong contrast; some threads also carry more than one color. Cross-stitch is widely used in traditional Palestinian dressmaking. Related stitches and forms of embroidery The cross-stitch can be executed partially, such as in quarter-, half-, and three-quarter-stitches. A single straight stitch, done in the form of backstitching, is often used as an outline, to add detail or definition. There are many stitches which are related structurally to cross-stitch. The best known are Italian cross-stitch (as seen in Assisi embroidery), long-armed cross-stitch, and Montenegrin stitch. Italian cross-stitch and Montenegrin stitch are reversible, meaning the work looks the same on both sides. These styles have a slightly different look from ordinary cross-stitch. These more difficult stitches are rarely used in mainstream embroidery, but they are still used to recreate historical pieces of embroidery or by the creative and adventurous stitcher. The double cross-stitch, also known as a Leviathan stitch or Smyrna cross-stitch, combines a cross-stitch with an upright cross-stitch. Berlin wool work and similar petit point stitchery resemble the heavily shaded, opulent styles of cross-stitch, and sometimes also used charted patterns on paper. Cross-stitch is often combined with other popular forms of embroidery, such as Hardanger embroidery or blackwork embroidery. Cross-stitch may also be combined with other work, such as canvaswork or drawn thread work. Beadwork and other embellishments such as paillettes, charms, small buttons and specialty threads of various kinds may also be used. Cross-stitch can often be used in needlepoint. Recent trends for cross stitch Cross-stitch has become increasingly popular with younger generations in Europe in recent years. Retailers such as John Lewis experienced a 17% rise in sales of haberdashery products between 2009 and 2010.
Hobbycraft, a chain of stores selling craft supplies, also enjoyed an 11% increase in sales over the year to February 22, 2009. Knitting and cross-stitching have become more popular hobbies for a younger market, in contrast to their traditional reputation as hobbies for retirees. Sewing and craft groups such as Stitch and Bitch London have resurrected the idea of the traditional craft club. At Clothes Show Live 2010 there was a new area called "Sknitch" promoting modern sewing, knitting and embroidery. In a departure from the traditional designs associated with cross-stitch, there is a current trend for more postmodern or tongue-in-cheek designs featuring retro images or contemporary sayings. It is linked to a concept known as "subversive cross-stitch", which involves more risqué designs, often fusing the traditional sampler style with sayings designed to shock or be incongruous with the old-fashioned image of cross-stitch. Stitching designs on other materials can be accomplished by using waste canvas. This is a temporary gridded canvas, similar to regular embroidery canvas, that is held together by a water-soluble glue and is removed after the stitching is complete. Other crafters have taken to cross-stitching on all manner of gridded objects as well, including old kitchen strainers and chain-link fences. Cross-stitch has traditionally been regarded as a woman's craft, but men have also taken up the hobby in recent years. Cross-stitch and feminism In the 21st century, an emphasis on feminist design has emerged within cross-stitch communities. Some cross-stitchers have commented on the way that the practice of embroidery makes them feel connected to the women who practised it before them. There is a push for all embroidery, including cross-stitch, to be respected as a significant art form. Cross-stitch and computers The development of computer technology has also affected such a seemingly conservative craft as cross-stitch. With the help of computer visualization algorithms, it is now possible to create embroidery designs from a photograph or any other picture. The visualisation uses a drawing on a graphical grid, representing colors and/or symbols, which gives the user an indication of the possible use of colors, the position of those colors, and the type of stitch used, such as full cross or quarter stitch. Flosstube An increasingly popular activity for cross-stitchers is to watch and make YouTube videos detailing their hobby. Flosstubers, as they are known, typically cover WIPs (Works in Progress), FOs (Finished Objects), and haul (new patterns, thread, and fabric, as well as cross-stitching accessories, such as needle minders). Other accessories include floss organizers, thread conditioner, pin cushions, aida cloth or plastic canvas, and embroidery needles. See also Mosaic Pixel art Embroidery External links Articles related to the recent comeback in popularity of cross-stitch: "Is Cross Stitch Dead?"
5362
https://en.wikipedia.org/wiki/Casino%20game
Casino game
Games available in most casinos are commonly called casino games. In a casino game, the players gamble cash or casino chips on various possible random outcomes or combinations of outcomes. Casino games are also available in online casinos, where permitted by law. Casino games can also be played outside of casinos for entertainment purposes, such as at parties or in school competitions, on machines that simulate gambling. Categories There are three general categories of casino games: gaming machines, table games, and random number games. Gaming machines, such as slot machines and pachinko, are usually played by one player at a time and do not require the involvement of casino employees. Table games, such as blackjack or craps, involve one or more players who are competing against the house (the casino itself) rather than each other. Table games are usually conducted by casino employees known as croupiers or dealers. Random number games are based on the selection of random numbers, either from a computerized random number generator or from other gaming equipment. Random number games may be played at a table or through the purchase of paper tickets or cards, such as keno or bingo. Some casino games combine aspects of more than one category; for example, roulette is a table game conducted by a dealer that involves random numbers. Casinos may also offer other types of gaming, such as hosting poker games or tournaments where players compete against each other. Common casino games Games commonly found at casinos include table games, gaming machines and random number games. Table games In the United States, 'table game' is the term used for games of chance such as blackjack, craps, roulette, and baccarat that are played against the casino and operated by one or more live croupiers, as opposed to those played on a mechanical device like a slot machine or against other players instead of the casino, such as standard poker. Table games are popularly played in casinos and involve some form of legal gambling, but they are also played privately under varying house rules. The term has significance in that some jurisdictions permit casinos to have only slots and no table games. In some states, this law has resulted in casinos employing electronic table games, such as roulette, blackjack, and craps. Table games found in casinos include: Baccarat Blackjack Craps Roulette Poker (Texas hold'em, Five-card draw, Omaha hold'em) Big Six wheel Pool Gaming machines Gaming machines found in casinos include: Pachinko Slot machine Video lottery terminal Video poker Random number games Random number games found in casinos include: Bingo Keno House advantage Casino games typically provide a predictable long-term advantage to the casino, or "house", while offering the players the possibility of a short-term gain that in some cases can be large. Some casino games have a skill element, where the players' decisions have an impact on the results. Players possessing sufficient skills to eliminate the inherent long-term disadvantage (the house edge or vigorish) in a casino game are referred to as advantage players. The players' disadvantage is a result of the casino not paying winning wagers according to the game's "true odds", which are the payouts that would be expected considering the odds of a wager either winning or losing. 
For example, if a game is played by wagering on the number that would result from the roll of one die, the true odds would be 5 times the amount wagered, since there is a 1 in 6 chance of any single number appearing, assuming that the player gets the original amount wagered back. However, the casino may only pay 4 times the amount wagered for a winning wager. The house edge, or vigorish, is defined as the casino profit expressed as a percentage of the player's original bet. (In games such as blackjack or Spanish 21, the final bet may be several times the original bet, if the player doubles and splits.) In American roulette, there are two "zeroes" (0, 00) and 36 non-zero numbers (18 red and 18 black). This leads to a higher house edge compared to European roulette. The chances of a player, who bets 1 unit on red, winning are 18/38 and their chances of losing 1 unit are 20/38. The player's expected value is EV = (18/38 × 1) + (20/38 × (−1)) = 18/38 − 20/38 = −2/38 = −5.26%. Therefore, the house edge is 5.26%. After 10 spins, betting 1 unit per spin, the average house profit will be 10 × 1 × 5.26% = 0.53 units. European roulette wheels have only one "zero" and therefore the house advantage (ignoring the en prison rule) is equal to 1/37 = 2.7%. The house edge of casino games varies greatly with the game, with some games having an edge as low as 0.3%. Keno can have house edges of up to 25%, and slot machines up to 15%. The calculation of the roulette house edge is a trivial exercise; for other games, this is not usually the case. Combinatorial analysis and/or computer simulation is necessary to complete the task. In games that have a skill element, such as blackjack or Spanish 21, the house edge is defined as the house advantage from optimal play (without the use of advanced techniques such as card counting), on the first hand of the shoe (the container that holds the cards). The set of optimal plays for all possible hands is known as "basic strategy" and is highly dependent on the specific rules and even the number of decks used. Traditionally, the majority of casinos have refused to reveal the house edge information for their slots games, and due to the unknown number of symbols and weightings of the reels, in most cases it is much more difficult to calculate the house edge than in other casino games. However, due to some online properties revealing this information and some independent research conducted by Michael Shackleford in the offline sector, this pattern is slowly changing. In games where players are not competing against the house, such as poker, the casino usually earns money via a commission, known as a "rake". Standard deviation The luck factor in a casino game is quantified using standard deviations (SD). The standard deviation of a simple game like roulette can be calculated using the binomial distribution. In the binomial distribution, SD = √(npq), where n = number of rounds played, p = probability of winning, and q = probability of losing. The binomial distribution assumes a result of 1 unit for a win and 0 units for a loss, rather than −1 units for a loss; expressing results as units won or lost doubles the range of possible outcomes, which is the source of the factor of 2 in the formula below. Furthermore, if we flat bet at 10 units per round instead of 1 unit, the range of possible outcomes increases 10 fold. SD (roulette, even-money bet) = 2b√(npq), where b = flat bet per round, n = number of rounds, p = 18/38, and q = 20/38. For example, after 10 rounds at 1 unit per round, the standard deviation will be 2 × 1 × √(10 × 18/38 × 20/38) = 3.16 units. 
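These figures can be checked with a few lines of code. The following Python sketch is illustrative only; the function name and its defaults are arbitrary choices for this example rather than part of any casino or gaming library. It reproduces the house edge, the expected loss over 10 one-unit bets, and the standard deviation quoted above for an even-money bet on an American wheel.

from math import sqrt

def even_money_roulette_stats(bet=1.0, rounds=10, zeros=2):
    # Illustrative sketch: statistics for a flat even-money bet (e.g. on red)
    # on a roulette wheel with the given number of zero pockets.
    pockets = 36 + zeros                       # 38 pockets on an American wheel
    p = 18 / pockets                           # probability the bet wins
    q = 1 - p                                  # probability the bet loses
    ev_per_unit = p * 1 + q * (-1)             # expected value per unit wagered
    house_edge = -ev_per_unit                  # casino edge as a fraction of each bet
    expected_loss = rounds * bet * house_edge  # average player loss over all rounds
    std_dev = 2 * bet * sqrt(rounds * p * q)   # binomial SD scaled to the +/-1 unit outcome
    return house_edge, expected_loss, std_dev

edge, loss, sd = even_money_roulette_stats()
print(round(edge, 4), round(loss, 2), round(sd, 2))   # 0.0526, 0.53, 3.16

Setting zeros to 1 models a single-zero European wheel, for which the same sketch yields the 2.7% edge mentioned above.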
After 10 rounds, the expected loss will be 10 × 1 × 5.26% = 0.53 units. As you can see, the standard deviation is many times the magnitude of the expected loss. The standard deviation for pai gow poker is the lowest out of all common casino games. Many casino games, particularly slot machines, have extremely high standard deviations. The bigger the potential payouts, the more the standard deviation may increase. As the number of rounds increases, eventually the expected loss will exceed the standard deviation, many times over. From the formula, we can see that the standard deviation is proportional to the square root of the number of rounds played, while the expected loss is proportional to the number of rounds played. As the number of rounds increases, the expected loss increases at a much faster rate. This is why it is impossible for a gambler to win in the long term. It is the high ratio of short-term standard deviation to expected loss that fools gamblers into thinking that they can win. It is important for a casino to know both the house edge and variance for all of its games. The house edge tells them what kind of profit they will make as a percentage of turnover, and the variance tells them how much they need in the way of cash reserves. The mathematicians and computer programmers that do this kind of work are called gaming mathematicians and gaming analysts. Casinos do not have in-house expertise in this field, so they outsource their requirements to experts in the gaming analysis field. See also Gambler's fallacy
5363
https://en.wikipedia.org/wiki/Video%20game
Video game
A video game or computer game is an electronic game that involves interaction with a user interface or input device (such as a joystick, controller, keyboard, or motion sensing device) to generate visual feedback from a display device, most commonly shown in a video format on a television set, computer monitor, flat-panel display or touchscreen on handheld devices, or a virtual reality headset. Most modern video games are audiovisual, with audio complement delivered through speakers or headphones, and sometimes also with other types of sensory feedback (e.g., haptic technology that provides tactile sensations), and some video games also allow microphone and webcam inputs for in-game chatting and livestreaming. Video games are typically categorized according to their hardware platform, which traditionally includes arcade video games, console games, and computer (PC) games; the latter also encompasses LAN games, online games, and browser games. More recently, the video game industry has expanded onto mobile gaming through mobile devices (such as smartphones and tablet computers), virtual and augmented reality systems, and remote cloud gaming. Video games are also classified into a wide range of genres based on their style of gameplay and target audience. The first video game prototypes in the 1950s and 1960s were simple extensions of electronic games using video-like output from large, room-sized mainframe computers. The first consumer video game was the arcade video game Computer Space in 1971. In 1972 came the iconic hit game Pong and the first home console, the Magnavox Odyssey. The industry grew quickly during the "golden age" of arcade video games from the late 1970s to early 1980s but suffered from the crash of the North American video game market in 1983 due to loss of publishing control and saturation of the market. Following the crash, the industry matured, was dominated by Japanese companies such as Nintendo, Sega, and Sony, and established practices and methods around the development and distribution of video games to prevent a similar crash in the future, many of which continue to be followed. In the 2000s, the core industry centered on "AAA" games, leaving little room for riskier experimental games. Coupled with the availability of the Internet and digital distribution, this gave room for independent video game development (or "indie games") to gain prominence into the 2010s. Since then, the commercial importance of the video game industry has been increasing. The emerging Asian markets and proliferation of smartphone games in particular are altering player demographics towards casual gaming and increasing monetization by incorporating games as a service. Today, video game development requires numerous interdisciplinary skills, vision, teamwork, and liaisons between different parties, including developers, publishers, distributors, retailers, hardware manufacturers, and other marketers, to successfully bring a game to its consumers. , the global video game market had estimated annual revenues of across hardware, software, and services, which is three times the size of the global music industry and four times that of the film industry in 2019, making it a formidable heavyweight across the modern entertainment industry. 
The video game market is also a major influence behind the electronics industry, where personal computer component, console, and peripheral sales, as well as consumer demands for better game performance, have been powerful driving factors for hardware design and innovation. Origins Early video games use interactive electronic devices with various display formats. The earliest example is from 1947—a "cathode-ray tube amusement device" was filed for a patent on 25 January 1947, by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on 14 December 1948, as U.S. Patent 2455992. Inspired by radar display technology, it consists of an analog device allowing a user to control the parabolic arc of a dot on the screen to simulate a missile being fired at targets, which are paper drawings fixed to the screen. Other early examples include Christopher Strachey's draughts game; the Nimrod computer at the 1951 Festival of Britain; OXO, a tic-tac-toe computer game by Alexander S. Douglas for the EDSAC in 1952; Tennis for Two, an electronic interactive game engineered by William Higinbotham in 1958; and Spacewar!, written by Massachusetts Institute of Technology students Martin Graetz, Steve Russell, and Wayne Wiitanen on a DEC PDP-1 computer in 1961. Each game has different means of display: NIMROD has a panel of lights to play the game of Nim, OXO has a graphical display to play tic-tac-toe, Tennis for Two has an oscilloscope to display a side view of a tennis court, and Spacewar! has the DEC PDP-1's vector display to have two spaceships battle each other. These preliminary inventions paved the way for the origins of video games today. Ralph H. Baer, while working at Sanders Associates in 1966, devised a control system to play a rudimentary game of table tennis on a television screen. With the company's approval, Baer built the prototype "Brown Box". Sanders patented Baer's inventions and licensed them to Magnavox, which commercialized the design as the first home video game console, the Magnavox Odyssey, released in 1972. Separately, Nolan Bushnell and Ted Dabney, inspired by seeing Spacewar! running at Stanford University, devised a similar version running in a smaller coin-operated arcade cabinet using a less expensive computer. This was released as Computer Space, the first arcade video game, in 1971. Bushnell and Dabney went on to form Atari, Inc., and with Allan Alcorn, created their second arcade game in 1972, the hit ping pong-style Pong, which was directly inspired by the table tennis game on the Odyssey. Sanders and Magnavox sued Atari for infringement of Baer's patents, but Atari settled out of court, paying for perpetual rights to the patents. Following their agreement, Atari made a home version of Pong, which was released by Christmas 1975. The success of the Odyssey and Pong, both as an arcade game and home machine, launched the video game industry. Both Baer and Bushnell have been called the "Father of Video Games" for their contributions. Terminology The term "video game" was developed to distinguish this class of electronic games that were played on some type of video display rather than on a teletype printer, audio speaker or similar device. This also distinguished them from many handheld electronic games like Merlin which commonly used LED lights for indicators but did not use these in combination for imaging purposes. 
"Computer game" may also be used as a descriptor, as all these types of games essentially require the use of a computer processor, and in some cases, it is used interchangeably with "video game". Particularly in the United Kingdom and Western Europe, this is common due to the historic relevance of domestically produced microcomputers. Other terms used include digital game, for example by the Australian Bureau of Statistics. However, the term "computer game" can also be used to more specifically refer to games played primarily on personal computers or other type of flexible hardware systems (also known as a PC game), as a way distinguish them from console games, arcade games or mobile games. Other terms such as "television game" or "telegame" had been used in the 1970s and early 1980s, particularly for the home gaming consoles that rely on connection to a television set. In Japan, where consoles like the Odyssey were first imported and then made within the country by the large television manufacturers such as Toshiba and Sharp Corporation, such games are known as "TV games", or TV geemu or terebi geemu. "Electronic game" may also be used to refer to video games, but this also incorporates devices like early handheld electronic games that lack any video output. and the term "TV game" is still commonly used into the 21st century. The first appearance of the term "video game" emerged around 1973. The Oxford English Dictionary cited a 10 November 1973 BusinessWeek article as the first printed use of the term. Though Bushnell believed the term came from a vending magazine review of Computer Space in 1971, a review of the major vending magazines Vending Times and Cashbox showed that the term came much earlier, appearing first around March 1973 in these magazines in mass usage including by the arcade game manufacturers. As analyzed by video game historian Keith Smith, the sudden appearance suggested that the term had been proposed and readily adopted by those involved. This appeared to trace to Ed Adlum, who ran Cashboxs coin-operated section until 1972 and then later founded RePlay Magazine, covering the coin-op amusement field, in 1975. In a September 1982 issue of RePlay, Adlum is credited with first naming these games as "video games": "RePlay's Eddie Adlum worked at 'Cash Box' when 'TV games' first came out. The personalities in those days were Bushnell, his sales manager Pat Karns and a handful of other 'TV game' manufacturers like Henry Leyser and the McEwan brothers. It seemed awkward to call their products 'TV games', so borrowing a word from Billboards description of movie jukeboxes, Adlum started to refer to this new breed of amusement machine as 'video games.' The phrase stuck." Adlum explained in 1985 that up until the early 1970s, amusement arcades typically had non-video arcade games such as pinball machines and electro-mechanical games. With the arrival of video games in arcades during the early 1970s, there was initially some confusion in the arcade industry over what term should be used to describe the new games. He "wrestled with descriptions of this type of game," alternating between "TV game" and "television game" but "finally woke up one day" and said, "what the hell... video game!" For many years, the traveling Videotopia exhibit served as the closest representation of such a vital resource. 
In addition to collecting home video game consoles, the Electronics Conservancy organization set out to locate and restore 400 antique arcade cabinets after realizing that the majority of these games had been destroyed and fearing the loss of their historical significance. Video games have increasingly come to be seen as a way to present history and to help audiences understand historical methodology and terminology. Researchers have looked at how historical representations affect how the public perceives the past, and digital humanists encourage historians to use video games as primary materials. Over time, the understanding of what a video game really means has also progressed. Whether played through a monitor, TV, or a hand-held device, there are many ways that video games can be displayed for users to enjoy. People have drawn comparisons between flow-state-engaged video gamers and pupils in conventional school settings. In traditional, teacher-led classrooms, students have little say in what they learn, are passive consumers of the information selected by teachers, are required to follow the pace and skill level of the group (group teaching), and receive brief, imprecise, normative feedback on their work. As video games continue to develop better graphics and new genres, new terminology is created to describe what was previously unknown. New consoles are released regularly to compete against rival brands with similar features, leading consumers to choose which they would like to purchase. Companies now release exclusive games that only their own console can play in order to entice consumers into purchasing their product, whereas when video games first began there was little to no variety. In 1989, a console war began in which Nintendo, one of the biggest companies in gaming, was up against Sega and its new Master System, which failed to compete, allowing the Nintendo Entertainment System to become one of the best-selling consoles in the world. More technology continued to be created as the computer began to be used in people's homes for more than just office and everyday tasks. Games began to be implemented on computers and have progressively grown since then, including computer-controlled opponents to play against. Early games like tic-tac-toe, solitaire, and Tennis for Two brought gaming to systems that were not specifically meant for gaming. Definition While many games readily fall into a clear, well-understood definition of video games, new genres and innovations in game development have raised the question of which essential factors of a video game separate the medium from other forms of entertainment. The introduction of interactive films in the 1980s, with games like Dragon's Lair, featured full-motion video played off a form of media but with only limited user interaction. This required a means to distinguish these games from more traditional board games that happen to also use external media, such as the Clue VCR Mystery Game which required players to watch VCR clips between turns. To distinguish between these two, video games are considered to require some interactivity that affects the visual display. Most video games tend to feature some type of victory or winning conditions, such as a scoring mechanism or a final boss fight. 
The introduction of walking simulators (adventure games that allow for exploration but lack any objectives) like Gone Home, and empathy games (video games that tend to focus on emotion) like That Dragon, Cancer brought the idea of games that did not have any such type of winning condition and raised the question of whether these were actually games. These are still commonly justified as video games as they provide a game world that the player can interact with by some means. The lack of any industry definition for a video game by 2021 was an issue during the case Epic Games v. Apple which dealt with video games offered on Apple's iOS App Store. Among concerns raised were games like Fortnite Creative and Roblox, which created metaverses of interactive experiences, and whether the larger game and the individual experiences themselves were games or not in relation to fees that Apple charged for the App Store. Judge Yvonne Gonzalez Rogers, recognizing that there was not yet an industry-standard definition for a video game, established for her ruling that "At a bare minimum, videogames appear to require some level of interactivity or involvement between the player and the medium" compared to passive entertainment like film, music, and television, and "videogames are also generally graphically rendered or animated, as opposed to being recorded live or via motion capture as in films or television". Rogers still concluded that what is a video game "appears highly eclectic and diverse". Video game terminology The gameplay experience varies radically between video games, but many common elements exist. Most games will launch into a title screen and give the player a chance to review options such as the number of players before starting a game. Most games are divided into levels which the player must work the avatar through, scoring points, collecting power-ups to boost the avatar's innate attributes, all while either using special attacks to defeat enemies or moves to avoid them. This information is relayed to the player through a type of on-screen user interface such as a heads-up display atop the rendering of the game itself. Taking damage will deplete the avatar's health, and if that falls to zero or if the avatar otherwise falls into an impossible-to-escape location, the player will lose one of their lives. Should they lose all their lives without gaining an extra life or "1-UP", then the player will reach the "game over" screen. Many levels as well as the game's finale end with a type of boss character the player must defeat to continue on. In some games, intermediate points between levels will offer save points where the player can create a saved game on storage media to restart the game should they lose all their lives or need to stop the game and restart at a later time. These may also be in the form of a password that can be written down and re-entered at the title screen. Product flaws include software bugs which can manifest as glitches which may be exploited by the player; this is often the foundation of speedrunning a video game. These bugs, along with cheat codes, Easter eggs, and other hidden secrets that were intentionally added to the game can also be exploited. On some consoles, cheat cartridges allow players to execute these cheat codes, and user-developed trainers allow similar bypassing for computer software games; either of these might make the game easier, give the player additional power-ups, or change the appearance of the game. 
Components To distinguish from electronic games, a video game is generally considered to require a platform, the hardware which contains computing elements, to process player interaction from some type of input device and display the results on a video output display. Platform Video games require a platform, a specific combination of electronic components or computer hardware and associated software, to operate. The term system is also commonly used. Games are typically designed to be played on one or a limited number of platforms, and exclusivity to a platform is used as a competitive edge in the video game market. However, games may be developed for platforms other than the one intended, and these are described as ports or conversions. These may also be remasters, where most of the original game's source code is reused while art assets, models, and game levels are updated for modern systems, and remakes, where, in addition to asset improvements, significant reworking of the original game is performed, possibly from scratch. The list below is not exhaustive and excludes other electronic devices capable of playing video games such as PDAs and graphing calculators. PC games PC games involve a player interacting with a personal computer (PC) connected to a video monitor. Personal computers are not dedicated game platforms, so there may be differences running the same game on different hardware. Also, the openness of the platform offers developers features like reduced software cost, increased flexibility, increased innovation, emulation, creation of modifications or mods, open hosting for online gaming (in which a person plays a video game with people who are in a different household) and others. A gaming computer is a PC or laptop intended specifically for gaming, typically using high-performance, high-cost components. In addition to personal computer gaming, there also exist games that work on mainframe computers and other similarly shared systems, with users logging in remotely to use the computer. Home console A console game is played on a home console, a specialized electronic device that connects to a common television set or composite video monitor. Home consoles are specifically designed to play games using a dedicated hardware environment, giving developers a concrete hardware target for development and assurances of what features will be available, simplifying development compared to PC game development. Usually consoles only run games developed for them, or games from other platforms made by the same company, but never games developed by a direct competitor, even if the same game is available on different platforms. A console often comes with a specific game controller. Major console platforms include Xbox, PlayStation and Nintendo. Handheld console A handheld game console is a small, self-contained electronic device that is portable and can be held in a user's hands. It features the console, a small screen, speakers and buttons, a joystick or other game controllers in a single unit. Like consoles, handhelds are dedicated platforms, and share almost the same characteristics. Handheld hardware usually is less powerful than PC or console hardware. Some handheld games from the late 1970s and early 1980s could only play one game. In the 1990s and 2000s, a number of handheld games used cartridges, which enabled them to be used to play many different games. The handheld console has waned in the 2010s as mobile device gaming has become a more dominant factor. 
Arcade video game An arcade video game generally refers to a game played on an even more specialized type of electronic device that is typically designed to play only one game and is encased in a special, large coin-operated cabinet which has one built-in console, controllers (joystick, buttons, etc.), a CRT screen, and an audio amplifier and speakers. Arcade games often have brightly painted logos and images relating to the theme of the game. While most arcade games are housed in a vertical cabinet, which the user typically stands in front of to play, some arcade games use a tabletop approach, in which the display screen is housed in a table-style cabinet with a see-through table top. With table-top games, the users typically sit to play. In the 1990s and 2000s, some arcade games offered players a choice of multiple games. In the 1980s, video arcades were businesses in which game players could use a number of arcade video games. In the 2010s, there are far fewer video arcades, but some movie theaters and family entertainment centers still have them. Browser game A browser game takes advantage of the standardization of web browser technologies across multiple devices, providing a cross-platform environment. These games may be identified based on the website on which they appear, such as with Miniclip games. Others are named based on the programming platform used to develop them, such as Java and Flash games. Mobile game With the introduction of smartphones and tablet computers standardized on the iOS and Android operating systems, mobile gaming has become a significant platform. These games may use unique features of mobile devices that are not necessarily present on other platforms, such as accelerometers, global positioning information and camera devices to support augmented reality gameplay. Cloud gaming Cloud gaming requires a minimal hardware device, such as a basic computer, console, laptop, mobile phone or even a dedicated hardware device connected to a display with good Internet connectivity, that connects to hardware systems operated by the cloud gaming provider. The game is computed and rendered on the remote hardware, using a number of predictive methods to reduce the network latency between player input and output on their display device. For example, the Xbox Cloud Gaming and PlayStation Now platforms use dedicated custom server blade hardware in cloud computing centers. Virtual reality Virtual reality (VR) games generally require players to use a special head-mounted unit that provides stereoscopic screens and motion tracking to immerse a player within a virtual environment that responds to their head movements. Some VR systems include control units for the player's hands to provide a direct way to interact with the virtual world. VR systems generally require a separate computer, console, or other processing device that couples with the head-mounted unit. Emulation An emulator enables games from a console or otherwise different system to be run in a type of virtual machine on a modern system, simulating the hardware of the original and allowing old games to be played. While emulators themselves have been found to be legal in United States case law, the act of obtaining the game software that one does not already own may violate copyrights. However, there are some official releases of emulated software from game manufacturers, such as Nintendo with its Virtual Console or Nintendo Switch Online offerings. 
Backward compatibility Backward compatibility is similar in nature to emulation in that older games can be played on newer platforms, but typically directly through hardware and built-in software within the platform. For example, the PlayStation 2 is capable of playing original PlayStation games simply by inserting the original game media into the newer console, while Nintendo's Wii could play GameCube titles as well in the same manner. Game media Early arcade games, home consoles, and handheld games were dedicated hardware units with the game's logic built into the electronic componentry of the hardware. Since then, most video game platforms are considered programmable, having means to read and play multiple games distributed on different types of media or formats. Physical formats include ROM cartridges, magnetic storage including magnetic-tape data storage and floppy discs, optical media formats including CD-ROM and DVDs, and flash memory cards. Furthermore, digital distribution over the Internet or other communication methods, as well as cloud gaming, alleviates the need for any physical media. In some cases, the media serves as the direct read-only memory for the game, or it may be the form of installation media that is used to write the main assets to the player's platform's local storage for faster loading periods and later updates. Games can be extended with new content and software patches through either expansion packs which are typically available as physical media, or as downloadable content nominally available via digital distribution. These can be offered freely or can be used to monetize a game following its initial release. Several games offer players the ability to create user-generated content to share with others to play. Other games, mostly those on personal computers, can be extended with user-created modifications or mods that alter or add onto the game; these often are unofficial and were developed by players from reverse engineering of the game, but other games provide official support for modding the game. Input device Video games can use several types of input devices to translate human actions into a game. Most common is the use of game controllers like gamepads and joysticks for most consoles, and as accessories for personal computer systems along with keyboard and mouse controls. Common controls on the most recent controllers include face buttons, shoulder triggers, analog sticks, and directional pads ("d-pads"). Consoles typically include standard controllers which are shipped or bundled with the console itself, while peripheral controllers are available as a separate purchase from the console manufacturer or third-party vendors. Similar control sets are built into handheld consoles and onto arcade cabinets. Newer technology improvements have incorporated additional technology into the controller or the game platform, such as touchscreens and motion detection sensors that give more options for how the player interacts with the game. Specialized controllers may be used for certain genres of games, including racing wheels, light guns and dance pads. Digital cameras and motion detection can capture movements of the player as input into the game, which can, in some cases, effectively eliminate the need for a controller, and on other systems such as virtual reality, are used to enhance immersion into the game. 
Display and output By definition, all video games are intended to output graphics to an external video display, such as cathode-ray tube televisions, newer liquid-crystal display (LCD) televisions and built-in screens, projectors or computer monitors, depending on the type of platform the game is played on. Features such as color depth, refresh rate, frame rate, and screen resolution are a combination of the limitations of the game platform and display device and the program efficiency of the game itself. The game's output can range from fixed displays using LED or LCD elements and text-based games to two-dimensional and three-dimensional graphics and augmented reality displays. The game's graphics are often accompanied by sound produced by internal speakers on the game platform or external speakers attached to the platform, as directed by the game's programming. This often will include sound effects tied to the player's actions to provide audio feedback, as well as background music for the game. Some platforms support additional feedback mechanics to the player that a game can take advantage of. This is most commonly haptic technology built into the game controller, such as causing the controller to shake in the player's hands to simulate an earthquake occurring in the game. Classifications Video games are frequently classified by a number of factors related to how one plays them. Genre A video game, like most other forms of media, may be categorized into genres. However, unlike film or television which use visual or narrative elements, video games are generally categorized into genres based on their gameplay interaction, since this is the primary means by which one interacts with a video game. The narrative setting does not impact gameplay; a shooter game is still a shooter game, regardless of whether it takes place in a fantasy world or in outer space. An exception is the horror game genre, used for games that are based on narrative elements of horror fiction, the supernatural, and psychological horror. Genre names are normally self-describing in terms of the type of gameplay, such as action game, role playing game, or shoot 'em up, though some genres have derivations from influential works that have defined that genre, such as roguelikes from Rogue, Grand Theft Auto clones from Grand Theft Auto III, and battle royale games from the film Battle Royale. The names may shift over time as players, developers and the media come up with new terms; for example, first-person shooters were originally called "Doom clones" based on the 1993 game. A hierarchy of game genres exists, with top-level genres like "shooter game" and "action game" that broadly capture the game's main gameplay style, and several subgenres of specific implementation, such as, within shooter games, the first-person shooter and third-person shooter. Some cross-genre types also exist that fall under multiple top-level genres, such as the action-adventure game. Mode A video game's mode describes how many players can use the game at the same time. This is primarily distinguished by single-player video games and multiplayer video games. Within the latter category, multiplayer games can be played in a variety of ways, including locally at the same device, on separate devices connected through a local network such as LAN parties, or online via separate Internet connections. Most multiplayer games are based on competitive gameplay, but many offer cooperative and team-based options as well as asymmetric gameplay. 
Online games use server structures that can also enable massively multiplayer online games (MMOs) to support hundreds of players at the same time. A small number of video games are zero-player games, in which the player has very limited interaction with the game itself. These are most commonly simulation games where the player may establish a starting state and then let the game proceed on its own, watching the results as a passive observer, such as with many computerized simulations of Conway's Game of Life. Types Most video games are intended for entertainment purposes. Different game types include: Core games Core or hard-core games refer to the typical perception of video games, developed for entertainment purposes. These games typically require a fair amount of time to learn and master, in contrast to casual games, and thus are most appealing to gamers rather than a broader audience. Most of the AAA video game industry is based around the delivery of core games. Casual games In contrast to core games, casual games are designed for ease of accessibility, simple to understand gameplay and quick to grasp rule sets, and are aimed at a mass-market audience. They frequently support the ability to jump in and out of play on demand, such as during commuting or lunch breaks. Numerous browser and mobile games fall into the casual game area, and casual games often are from genres with low intensity game elements such as match three, hidden object, time management, and puzzle games. Casual games frequently use social-network game mechanics, where players can enlist the help of friends on their social media networks for extra turns or moves each day. Popular casual games include Tetris and Candy Crush Saga. More recently, starting in the late 2010s, hyper-casual games have appeared, which use even more simplistic rules for short but infinitely replayable games, such as Flappy Bird. Educational games Education software has been used in homes and classrooms to help teach children and students, and video games have been similarly adapted for these reasons, all designed to provide a form of interactivity and entertainment tied to game design elements. There are a variety of differences in their designs and how they educate the user. These are broadly split between edutainment games that tend to focus on the entertainment value and rote learning but are unlikely to engage in critical thinking, and educational video games that are geared towards problem solving through motivation and positive reinforcement while downplaying the entertainment value. Examples of educational games include The Oregon Trail and the Carmen Sandiego series. Further, games not initially developed for educational purposes have found their way into the classroom after release, such as games that feature open worlds or virtual sandboxes, like Minecraft, or that offer critical thinking skills through puzzle video games like SpaceChem. Serious games Further extending from educational games, serious games are those where the entertainment factor may be augmented, overshadowed, or even eliminated by other purposes for the game. Game design is used to reinforce the non-entertainment purpose of the game, such as using video game technology for the game's interactive world, or gamification for reinforcement training. 
Educational games are a form of serious games, but other types of games include fitness games that incorporate significant physical exercise to help keep the player fit (such as Wii Fit), simulator games such as flight simulators for piloting aircraft (such as Microsoft Flight Simulator), advergames that are built around the advertising of a product (such as Pepsiman), and newsgames aimed at conveying a specific advocacy message (such as NarcoGuerra). Art games Although video games have been considered an art form on their own, games may be developed to try to purposely communicate a story or message, using the medium as a work of art. These art or arthouse games are designed to generate emotion and empathy from the player by challenging societal norms and offering critique through the interactivity of the video game medium. They may not have any type of win condition and are designed to let the player explore through the game world and scenarios. Most art games are indie games in nature, designed based on personal experiences or stories through a single developer or small team. Examples of art games include Passage, Flower, and That Dragon, Cancer. Content rating Video games can be subject to national and international content rating requirements. Like with film content ratings, video game ratings typically identify the target age group that the national or regional ratings board believes is appropriate for the player, ranging from all-ages, to a teenager-or-older, to mature, to the infrequent adult-only games. Most content review is based on the level of violence, both in the type of violence and how graphically it may be represented, and sexual content, but other themes such as drug and alcohol use and gambling that can influence children may also be identified. A primary identifier based on a minimum age is used by nearly all systems, along with additional descriptors to identify specific content that players and parents should be aware of. The regulations vary from country to country but generally are voluntary systems upheld by vendor practices, with penalties and fines issued by the ratings body on the video game publisher for misuse of the ratings. The major content rating systems include: Entertainment Software Rating Board (ESRB) that oversees games released in the United States. ESRB ratings are voluntary and rated along E (Everyone), E10+ (Everyone 10 and older), T (Teen), M (Mature), and AO (Adults Only). Attempts to mandate video games ratings in the U.S. subsequently led to the landmark Supreme Court case, Brown v. Entertainment Merchants Association in 2011 which ruled video games were a protected form of art, a key victory for the video game industry. Pan European Game Information (PEGI) covering the United Kingdom, most of the European Union and other European countries, replacing previous national-based systems. The PEGI system uses content rated based on minimum recommended ages, which include 3+, 7+, 12+, 16+, and 18+. Australian Classification Board (ACB) oversees the ratings of games and other works in Australia, using ratings of G (General), PG (Parental Guidance), M (Mature), MA15+ (Mature Accompanied), R18+ (Restricted), and X (Restricted for pornographic material). The ACB can also refuse to give a rating to a game (RC – Refused Classification). 
The ACB's ratings are enforceable by law, and importantly, games cannot be imported or purchased digitally in Australia if they have failed to gain a rating or were given the RC rating, leading to a number of notable banned games. Computer Entertainment Rating Organization (CERO) rates games for Japan. Their ratings include A (all ages), B (12 and older), C (15 and over), D (17 and over), and Z (18 and over). Additionally, the major content rating system providers have worked to create the International Age Rating Coalition (IARC), a means to streamline and align the content ratings systems between different regions, so that a publisher would only need to complete the content ratings review for one provider and use the IARC process to affirm the content rating for all other regions. Certain nations have even more restrictive rules related to political or ideological content. Within Germany, until 2018, the Unterhaltungssoftware Selbstkontrolle (Entertainment Software Self-Regulation) would refuse to classify, and thus to allow the sale of, any game depicting Nazi imagery, often requiring developers to replace such imagery with fictional alternatives. This ruling was relaxed in 2018 to allow for such imagery for "social adequacy" purposes that applied to other works of art. China's video game segment is mostly isolated from the rest of the world due to the government's censorship, and all games published there must adhere to strict government review, disallowing content that, for example, smears the image of the Chinese Communist Party. Foreign games published in China often require modification by developers and publishers to meet these requirements. Development Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly referred to, primarily include programmers and graphic designers. Over the years this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians; as well as skills that are specific to video games, such as the game designer. All of these are managed by producers. In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs). Video games are programmed like any other piece of computer software. Prior to the mid-1970s, arcade and home console games were created by assembling discrete electro-mechanical components on circuit boards, which limited games to relatively simple logic. By 1975, low-cost microprocessors were available at volume to be used for video game hardware, which allowed game developers to program more detailed games, widening the scope of what was possible. 
Ongoing improvements in computer hardware technology have expanded what is possible to create in video games, coupled with the convergence of common hardware between console, computer, and arcade platforms to simplify the development process. Today, game developers have a number of commercial and open source tools available for use to make games, many of which work across multiple platforms to support portability, or they may still opt to create their own for more specialized features and direct control of the game. Today, many games are built around a game engine that handles the bulk of the game's logic, gameplay, and rendering. These engines can be augmented with specialized engines for specific features, such as a physics engine that simulates the physics of objects in real-time. A variety of middleware exists to help developers to access other features, such as for playback of videos within games, network-oriented code for games that communicate via online services, matchmaking for online games, and similar features. These features can be used from a developer's programming language of choice, or they may opt to also use game development kits that minimize the amount of direct programming they have to do but can also limit the amount of customization they can add into a game. Like all software, video games usually undergo quality testing before release to assure there are no bugs or glitches in the product, though frequently developers will release patches and updates. With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need the best talent, while publishers reduce costs to maintain profitability on their investment. Typically, a video game console development team ranges from 5 to 50 people, and some exceed 100. In May 2009, Assassin's Creed II was reported to have a development staff of 450. The growth of team size combined with greater pressure to get completed projects into the market to begin recouping production costs has led to a greater occurrence of missed deadlines, rushed games and the release of unfinished products. While amateur and hobbyist game programming had existed since the late 1970s with the introduction of home computers, a newer trend since the mid-2000s is indie game development. Indie games are made by small teams outside any direct publisher control, their games being smaller in scope than those from the larger "AAA" game studios, and often experiment in gameplay and art style. Indie game development is aided by the larger availability of digital distribution, including the newer mobile gaming market, and readily available, low-cost development tools for these platforms. Game theory and studies Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which the player is allowed to become another person, and to act out in another world. 
This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ and The Last Starfighter. Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game Tomb Raider, saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player. While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game. The term "emergent narrative" has been used to describe how, in a simulated environment, storyline can be created simply by "what happens to the player." However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle on the track: the cars might then maneuver to avoid the obstacle, causing the cars behind them to slow or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game. Intellectual property for video games Most commonly, video games are protected by copyright, though both patents and trademarks have been used as well. Though local copyright regulations vary in the degree of protection, video games qualify as copyrighted visual-audio works, and enjoy cross-country protection under the Berne Convention. This typically only applies to the underlying code, as well as to the artistic aspects of the game such as its writing, art assets, and music. Gameplay itself is generally not considered copyrightable; in the United States among other countries, video games are considered to fall into the idea–expression distinction in that it is how the game is presented and expressed to the player that can be copyrighted, but not the underlying principles of the game. Because gameplay is normally ineligible for copyright, gameplay ideas in popular games are often replicated and built upon in other games. At times, this repurposing of gameplay can be seen as beneficial and a fundamental part of how the industry has grown by building on the ideas of others. For example, Doom (1993) and Grand Theft Auto III (2001) introduced gameplay that created popular new game genres, the first-person shooter and the Grand Theft Auto clone, respectively, in the few years after their release. 
However, at times and more frequently at the onset of the industry, developers would intentionally create video game clones of successful games and game hardware with few changes, which led to the flooding of the arcade and dedicated home console markets around 1978. Cloning is also a major issue in countries that do not have strong intellectual property protection laws, such as China. The lax oversight by China's government and the difficulty for foreign companies to take Chinese entities to court had enabled China to support a large grey market of cloned hardware and software systems. The industry remains challenged to distinguish between creating new games based on refinements of past successful games to create a new type of gameplay, and intentionally creating a clone of a game that may simply swap out art assets. Industry History The early history of the video game industry, following the first game hardware releases and through 1983, had little structure. Video games quickly took off during the golden age of arcade video games from the late 1970s to early 1980s, but the newfound industry was mainly composed of game developers with little business experience. This led to numerous companies forming simply to create clones of popular games to try to capitalize on the market. Due to loss of publishing control and oversaturation of the market, the North American home video game market crashed in 1983, dropping from revenues of around in 1983 to by 1985. Many of the North American companies created in the prior years closed down. Japan's growing game industry was briefly shocked by this crash but had sufficient longevity to withstand the short-term effects, and Nintendo helped to revitalize the industry with the release of the Nintendo Entertainment System in North America in 1985. Along with it, Nintendo established a number of core industrial practices to prevent unlicensed game development and control game distribution on their platform, methods that continue to be used by console manufacturers today. The industry remained more conservative following the 1983 crash, forming around the concept of publisher-developer dichotomies, and by the 2000s this led to the industry centralizing around low-risk, triple-A games and studios with large development budgets of at least or more. The advent of the Internet brought digital distribution as a viable means to distribute games, and contributed to the growth of riskier, more experimental independent game development as an alternative to triple-A games in the late 2000s, which has continued to grow as a significant portion of the video game industry. Industry roles Video games have a large network effect that draws on many different sectors that tie into the larger video game industry. While video game developers are a significant portion of the industry, other key participants in the market include: Publishers: Companies that generally oversee bringing the game from the developer to market. This often includes performing the marketing, public relations, and advertising of the game. Publishers frequently pay the developers ahead of time to make their games and will be involved in critical decisions about the direction of the game's progress, and then pay the developers additional royalties or bonuses based on sales performances. Other smaller, boutique publishers may simply offer to perform the publishing of a game for a small fee and a portion of the sales, and otherwise leave the developer with the creative freedom to proceed. 
A range of other publisher-developer relationships exist between these points. Distributors: Publishers are often able to produce their own game media and take the role of distributor, but there are also third-party distributors that can mass-produce game media and distribute it to retailers. Digital storefronts like Steam and the iOS App Store also serve as distributors and retailers in the digital space. Retailers: Physical storefronts, which include large online retailers, department and electronics stores, and specialty video game stores, sell games, consoles, and other accessories to consumers. This has also included a trade-in market in certain regions, allowing players to turn in used games for partial refunds or credit towards other games. However, with the rise of digital marketplaces and e-commerce, retailers have been performing worse than in the past. Hardware manufacturers: The video game console manufacturers produce console hardware, often through a value chain system that includes numerous component suppliers and contract manufacturers that assemble the consoles. Further, these console manufacturers typically require a license to develop for their platform and may control the production of some games, as Nintendo does with the use of game cartridges for its systems. In exchange, the manufacturers may help promote games for their system and may seek console exclusivity for certain games. For games on personal computers, a number of manufacturers are devoted to high-performance "gaming computer" hardware, particularly in the graphics card area; several of the same companies also supply components for consoles. A range of third-party manufacturers also exist to provide equipment and gear for consoles post-sale, such as additional controllers for consoles or carrying cases and gear for handheld devices. Journalism: While journalism around video games used to be primarily print-based, and focused more on post-release reviews and gameplay strategy, the Internet has brought a more proactive press that uses web journalism, covering games in the months prior to release as well as beyond, helping to build excitement for games ahead of release. Influencers: With the rising importance of social media, video game companies have found that the opinions of influencers using streaming media to play through their games have had a significant impact on game sales, and have turned to using influencers alongside traditional journalism as a means to build up attention for their games before release. Esports: Esports are a major component of several multiplayer games, with numerous professional leagues established since the 2000s and large viewership numbers, particularly out of southeast Asia since the 2010s. Trade and advocacy groups: Trade groups like the Entertainment Software Association were established to provide a common voice for the industry in response to governmental and other advocacy concerns. They frequently set up the major trade events and conventions for the industry, such as E3. Gamers: Proactive hobbyists who are players and consumers of video games. While their representation in the industry is primarily seen through game sales, many companies follow gamers' comments on social media or on user reviews and engage with them to improve their products, in addition to feedback from other parts of the industry. 
Demographics of the larger player community also impact parts of the market; while once dominated by younger men, the market shifted in the mid-2010s towards women and older players who generally preferred mobile and casual games, leading to further growth in those sectors. Major regional markets The industry itself grew out of both the United States and Japan in the 1970s and 1980s before gaining a larger worldwide contribution. Today, the video game industry is predominantly led by major companies in North America (primarily the United States and Canada), Europe, and East Asia, including Japan, South Korea, and China. Hardware production remains an area dominated by Asian companies either directly involved in hardware design or part of the production process, but digital distribution and the indie game development of the late 2000s have allowed game developers to flourish nearly anywhere and diversify the field. Game sales According to the market research firm Newzoo, the global video game industry drew estimated revenues of over in 2020. Mobile games accounted for the bulk of this, with a 48% share of the market, followed by console games at 28% and personal computer games at 23%. Sales of different types of games vary widely between countries due to local preferences. Japanese consumers tend to purchase many more handheld games than console games and especially PC games, with a strong preference for games catering to local tastes. Another key difference is that, though having declined in the West, arcade games remain an important sector of the Japanese gaming industry. In South Korea, computer games are generally preferred over console games, especially MMORPGs and real-time strategy games. Computer games are also popular in China. Effects on society Culture Video game culture is a worldwide new media subculture formed around video games and game playing. As computer and video games have increased in popularity over time, they have had a significant influence on popular culture. Video game culture has also evolved over time hand in hand with internet culture as well as the increasing popularity of mobile games. Many people who play video games identify as gamers, which can mean anything from someone who enjoys games to someone who is passionate about them. As video games become more social with multiplayer and online capability, gamers find themselves in growing social networks. Gaming can be both entertainment and competition, as a new trend known as electronic sports is becoming more widely accepted. In the 2010s, video games and discussions of video game trends and topics could be seen in social media, politics, television, film and music. The COVID-19 pandemic during 2020–2021 gave further visibility to video games as a pastime to enjoy with friends and family online as a means of social distancing. Since the mid-2000s there has been debate over whether video games qualify as art, primarily because the form's interactivity is seen to interfere with the artistic intent of the work and because games are designed for commercial appeal. A significant debate on the matter came after film critic Roger Ebert published an essay "Video Games can never be art", which challenged the industry to prove him and other critics wrong. The view that video games were an art form was cemented in 2011 when the U.S. Supreme Court ruled in the landmark case Brown v. Entertainment Merchants Association that video games were a protected form of speech with artistic merit. 
Since then, video game developers have come to use the form more for artistic expression, including the development of art games, and the cultural heritage of video games as works of art, beyond their technical capabilities, has been part of major museum exhibits, including The Art of Video Games at the Smithsonian American Art Museum, which toured other museums from 2012 to 2016. Video games often inspire sequels and other video games within the same franchise, but have also influenced works outside of the video game medium. Numerous television shows (both animated and live-action), films, comics and novels have been created based on existing video game franchises. Because video games are an interactive medium, there has been trouble in converting them to these passive forms of media, and typically such works have been critically panned or treated as children's media. For example, until 2019, no video game film had ever received a "Fresh" rating on Rotten Tomatoes, but the releases of Detective Pikachu (2019) and Sonic the Hedgehog (2020), both of which received "Fresh" ratings, showed signs of the film industry having found an approach to adapt video games for the big screen. That said, some early video game-based films have been highly successful at the box office, such as 1995's Mortal Kombat and 2001's Lara Croft: Tomb Raider. Since the 2000s, there has also been a greater appreciation of video game music, which ranges from chiptunes composed for limited sound-output devices on early computers and consoles, to fully scored compositions for most modern games. Such music has frequently served as a platform for covers and remixes, and concerts featuring video game soundtracks performed by bands or orchestras, such as Video Games Live, have also become popular. Video games also frequently incorporate licensed music, particularly in the area of rhythm games, furthering the depth to which video games and music can work together. Further, video games can serve as a virtual environment under the full control of a producer to create new works. With the capability to render 3D actors and settings in real-time, a new type of work, machinima (short for "machine cinema"), grew out of using video game engines to craft narratives. As video game engines gain higher fidelity, they have also become part of the tools used in more traditional filmmaking. Unreal Engine has been used as a backbone by Industrial Light & Magic for their StageCraft technology for shows like The Mandalorian. Separately, video games are also frequently used as part of the promotion and marketing for other media, such as films, anime, and comics. However, these licensed games in the 1990s and 2000s often had a reputation for poor quality, developed without any input from the intellectual property rights owners, and several of them are listed among games with notably negative reception, such as Superman 64. More recently, with these licensed games being developed by triple-A studios or through studios directly connected to the licensed property owner, there has been a significant improvement in their quality, with an early trendsetting example being Batman: Arkham Asylum. Beneficial uses Besides their entertainment value, appropriately-designed video games have been seen to provide value in education across several ages and comprehension levels. Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system. 
It has been observed that gamers adopt an attitude of such high concentration while playing that they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be "learning by doing" while playing video games, which also fosters creative thinking. Video games are also believed to be beneficial to the mind and body. It has been shown that action video game players have better hand–eye coordination and visuo-motor skills, such as their resistance to distraction, their sensitivity to information in the peripheral vision and their ability to count briefly presented objects, than nonplayers. Researchers found that such enhanced abilities could be acquired by training with action games, involving challenges that switch attention between different locations, but not with games requiring concentration on single objects. A 2018 systematic review found evidence that video game training had positive effects on cognitive and emotional skills in the adult population, especially with young adults. A 2019 systematic review also added support for the claim that video games are beneficial to the brain, although the beneficial effects of video gaming on the brain differed by video game type. Organisers of video gaming events, such as the organisers of the D-Lux video game festival in Dumfries, Scotland, have emphasised the positive aspects video games can have on mental health. Organisers, mental health workers and mental health nurses at the event emphasised the relationships and friendships that can be built around video games and how playing games can help people learn about others as a precursor to discussing the person's mental health. A study in 2020 from Oxford University also suggested that playing video games can be a benefit to a person's mental health. The study of 3,274 gamers, all over the age of 18, focused on the games Animal Crossing: New Horizons and Plants vs Zombies: Battle for Neighborville and used actual play-time data. It found that those who played more games tended to report greater "wellbeing". Also in 2020, computer science professor Regan Mandryk of the University of Saskatchewan said her research also showed that video games can have health benefits such as reducing stress and improving mental health. The university's research studied all age groups – "from pre-literate children through to older adults living in long term care homes" – with a main focus on 18 to 55-year-olds. A study of gamers' attitudes towards gaming reported in 2018 found that millennials use video games as a key strategy for coping with stress. In the study of 1,000 gamers, 55% said that it "helps them to unwind and relieve stress ... and half said they see the value in gaming as a method of escapism to help them deal with daily work pressures". Controversies Video games have caused controversy since the 1970s. Parents and children's advocates regularly raise concerns that violent video games can influence young players into performing those violent acts in real life, and events such as the 1999 Columbine High School massacre, in which the perpetrators specifically alluded to using video games to plot out their attack, raised further fears. 
Medical experts and mental health professionals have also raised concerns that video games may be addictive, and the World Health Organization has included "gaming disorder" in the 11th revision of its International Statistical Classification of Diseases. Other health experts, including the American Psychiatric Association, have stated that there is insufficient evidence that video games can create violent tendencies or lead to addictive behavior, though they agree that video games typically use a compulsion loop in their core design that can trigger dopamine release, which can reinforce the desire to continue playing and potentially lead to violent or addictive behavior. Even with case law establishing that video games qualify as a protected art form, there has been pressure on the video game industry to keep their products in check to avoid excessive violence, particularly in games aimed at younger children. The potential addictive behavior around games, coupled with the increased use of post-sale monetization of video games, has also raised concern among parents, advocates, and government officials about gambling tendencies that may come from video games, such as the controversy around the use of loot boxes in many high-profile games. Numerous other controversies around video games and the industry have arisen over the years; the more notable incidents include the 1993 United States Congressional hearings on violent games like Mortal Kombat, which led to the formation of the ESRB ratings system; the numerous legal actions taken by attorney Jack Thompson over violent games such as Grand Theft Auto III and Manhunt from 2003 to 2007; the outrage over the "No Russian" level from Call of Duty: Modern Warfare 2 in 2009, which allowed the player to shoot a number of innocent non-player characters at an airport; and the Gamergate harassment campaign in 2014 that highlighted misogyny from a portion of the player demographic. The industry as a whole has also dealt with issues related to gender, racial, and LGBTQ+ discrimination and the mischaracterization of these minority groups in video games. A further issue in the industry is related to working conditions, as development studios and publishers frequently use "crunch time", required extended working hours, in the weeks and months ahead of a game's release to ensure on-time delivery. Collecting and preservation Players of video games often maintain collections of games. More recently there has been interest in retrogaming, focusing on games from the first decades. Games in retail packaging in good shape have become collectors' items from the early days of the industry, with some rare publications having gone for over . Separately, there is also concern about the preservation of video games, as both game media and the hardware to play them degrade over time. Further, many of the game developers and publishers from the first decades no longer exist, so records of their games have disappeared. Archivists and preservationists have worked within the scope of copyright law to save these games as part of the cultural history of the industry. There are many video game museums around the world, including the National Videogame Museum in Frisco, Texas, which serves as the largest museum wholly dedicated to the display and preservation of the industry's most important artifacts. Europe hosts video game museums such as the Computer Games Museum in Berlin and the Museum of Soviet Arcade Machines in Moscow and Saint Petersburg. 
The Museum of Art and Digital Entertainment in Oakland, California is a dedicated video game museum focusing on playable exhibits of console and computer games. The Video Game Museum of Rome is also dedicated to preserving video games and their history. The International Center for the History of Electronic Games at The Strong in Rochester, New York contains one of the largest collections of electronic games and game-related historical materials in the world, including a exhibit which allows guests to play their way through the history of video games. The Smithsonian Institution in Washington, DC has three video games on permanent display: Pac-Man, Dragon's Lair, and Pong. The Museum of Modern Art has added a total of 20 video games and one video game console to its permanent Architecture and Design Collection since 2012. In 2012, the Smithsonian American Art Museum ran an exhibition on "The Art of Video Games". However, the reviews of the exhibit were mixed, including questioning whether video games belong in an art museum. See also Lists of video games List of accessories to video games by system Outline of video games Notes References Sources Further reading The Ultimate History of Video Games: From Pong to Pokemon--The Story Behind the Craze That Touched Our Lives and Changed the World by Steven L. Kent, Crown, 2001, The Ultimate History of Video Games, Volume 2: Nintendo, Sony, Microsoft, and the Billion-Dollar Battle to Shape Modern Gaming by Steven L. Kent, Crown, 2021, External links Video games bibliography by the French video game research association Ludoscience The Virtual Museum of Computing (VMoC) (archived 10 October 2014) Games and sports introduced in 1947 Digital media American inventions
5367
https://en.wikipedia.org/wiki/Cambrian
Cambrian
The Cambrian Period ( ; sometimes symbolized Ꞓ) is the first geological period of the Paleozoic Era, and of the Phanerozoic Eon. The Cambrian lasted 53.4 million years from the end of the preceding Ediacaran Period 538.8 million years ago (mya) to the beginning of the Ordovician Period mya. Its subdivisions, and its base, are somewhat in flux. The period was established as "Cambrian series" by Adam Sedgwick, who named it after Cambria, the Latin name for 'Cymru' (Wales), where Britain's Cambrian rocks are best exposed. Sedgwick identified the layer as part of his task, along with Roderick Murchison, to subdivide the large "Transition Series", although the two geologists disagreed for a while on the appropriate categorization. The Cambrian is unique in its unusually high proportion of sedimentary deposits, sites of exceptional preservation where "soft" parts of organisms are preserved as well as their more resistant shells. As a result, our understanding of the Cambrian biology surpasses that of some later periods. The Cambrian marked a profound change in life on Earth: prior to the Cambrian, the majority of living organisms on the whole were small, unicellular and simple (Ediacaran fauna and earlier Tonian Huainan biota being notable exceptions). Complex, multicellular organisms gradually became more common in the millions of years immediately preceding the Cambrian, but it was not until this period that mineralized – hence readily fossilized – organisms became common. The rapid diversification of lifeforms in the Cambrian, known as the Cambrian explosion, produced the first representatives of all modern animal phyla. Phylogenetic analysis has supported the view that before the Cambrian radiation, in the Cryogenian or Tonian, animals (metazoans) evolved monophyletically from a single common ancestor: flagellated colonial protists similar to modern choanoflagellates. Although diverse life forms prospered in the oceans, the land is thought to have been comparatively barren – with nothing more complex than a microbial soil crust and a few molluscs and arthropods (albeit not terrestrial) that emerged to browse on the microbial biofilm. By the end of the Cambrian, myriapods, arachnids, and hexapods started adapting to the land, along with the first plants. Most of the continents were probably dry and rocky due to a lack of vegetation. Shallow seas flanked the margins of several continents created during the breakup of the supercontinent Pannotia. The seas were relatively warm, and polar ice was absent for much of the period. Stratigraphy The Cambrian Period followed the Ediacaran Period and was followed by the Ordovician Period. The base of the Cambrian lies atop a complex assemblage of trace fossils known as the Treptichnus pedum assemblage. The use of Treptichnus pedum, a reference ichnofossil to mark the lower boundary of the Cambrian, is problematic because very similar trace fossils belonging to the Treptichnids group are found well below T. pedum in Namibia, Spain and Newfoundland, and possibly in the western US. The stratigraphic range of T. pedum overlaps the range of the Ediacaran fossils in Namibia, and probably in Spain. Subdivisions The Cambrian is divided into four epochs (series) and ten ages (stages). Currently only three series and six stages are named and have a GSSP (an internationally agreed-upon stratigraphic reference point). Because the international stratigraphic subdivision is not yet complete, many local subdivisions are still widely used. 
In some of these subdivisions the Cambrian is divided into three epochs with locally differing names – the Early Cambrian (Caerfai or Waucoban, mya), Middle Cambrian (St Davids or Albertan, mya) and Late Cambrian ( mya; also known as Merioneth or Croixan). Trilobite zones allow biostratigraphic correlation in the Cambrian. Rocks of these epochs are referred to as belonging to the Lower, Middle, or Upper Cambrian. Each of the local series is divided into several stages. The Cambrian is divided into several regional faunal stages of which the Russian-Kazakhian system is most used in international parlance: *Most Russian paleontologists define the lower boundary of the Cambrian at the base of the Tommotian Stage, characterized by diversification and global distribution of organisms with mineral skeletons and the appearance of the first Archaeocyath bioherms. Dating the Cambrian The International Commission on Stratigraphy lists the Cambrian Period as beginning at and ending at . The lower boundary of the Cambrian was originally held to represent the first appearance of complex life, represented by trilobites. The recognition of small shelly fossils before the first trilobites, and Ediacara biota substantially earlier, led to calls for a more precisely defined base to the Cambrian Period. Despite the long recognition of its distinction from younger Ordovician rocks and older Precambrian rocks, it was not until 1994 that the Cambrian system/period was internationally ratified. After decades of careful consideration, a continuous sedimentary sequence at Fortune Head, Newfoundland was settled upon as a formal base of the Cambrian Period, which was to be correlated worldwide by the earliest appearance of Treptichnus pedum. Discovery of this fossil a few metres below the GSSP led to the refinement of this statement, and it is the T. pedum ichnofossil assemblage that is now formally used to correlate the base of the Cambrian. This formal designation allowed radiometric dates to be obtained from samples across the globe that corresponded to the base of the Cambrian. Early dates of quickly gained favour, though the methods used to obtain this number are now considered to be unsuitable and inaccurate. A more precise date using modern radiometric dating yield a date of . The ash horizon in Oman from which this date was recovered corresponds to a marked fall in the abundance of carbon-13 that correlates to equivalent excursions elsewhere in the world, and to the disappearance of distinctive Ediacaran fossils (Namacalathus, Cloudina). Nevertheless, there are arguments that the dated horizon in Oman does not correspond to the Ediacaran-Cambrian boundary, but represents a facies change from marine to evaporite-dominated strata – which would mean that dates from other sections, ranging from 544 or 542 Ma, are more suitable. Paleogeography Plate reconstructions suggest a global supercontinent, Pannotia, was in the process of breaking up early in the Cambrian, with Laurentia (North America), Baltica, and Siberia having separated from the main supercontinent of Gondwana to form isolated land masses. Most continental land was clustered in the Southern Hemisphere at this time, but was drifting north. Large, high-velocity rotational movement of Gondwana appears to have occurred in the Early Cambrian. With a lack of sea ice – the great glaciers of the Marinoan Snowball Earth were long melted – the sea level was high, which led to large areas of the continents being flooded in warm, shallow seas ideal for sea life. 
The sea levels fluctuated somewhat, suggesting there were "ice ages", associated with pulses of expansion and contraction of a south polar ice cap. In Baltoscandia a Lower Cambrian transgression transformed large swathes of the Sub-Cambrian peneplain into an epicontinental sea. Climate Glaciers likely existed during the earliest Cambrian at high and possibly even at middle palaeolatitudes, possibly due to the ancient continent of Gondwana covering the South Pole and cutting off polar ocean currents. Middle Terreneuvian deposits, corresponding to the boundary between the Fortunian and Stage 2, show evidence of glaciation. However, other authors believe these very early, pretrilobitic glacial deposits may not even be of Cambrian age at all but instead date back to the Neoproterozoic, an era characterised by numerous severe icehouse periods. The beginning of Stage 3 was relatively cool, with the period between 521 and 517 Ma being known as the Cambrian Arthropod Radiation Cool Event (CARCE). The Earth was generally very warm during Stage 4; its climate was comparable to the hot greenhouse of the Late Cretaceous and Early Palaeogene, as evidenced by a maximum in continental weathering rates over the last 900 million years and the presence of tropical, lateritic palaeosols at high palaeolatitudes during this time. The Archaeocyathid Extinction Warm Event (AEWE), lasting from 511 to 510.5 Ma, was particularly warm. Another warm event, the Redlichiid-Olenid Extinction Warm Event, occurred at the beginning of the Wuliuan. It became even warmer towards the end of the period, and sea levels rose dramatically. This warming trend continued into the Early Ordovician, the start of which was characterised by an extremely hot global climate. Flora The Cambrian flora was little different from that of the Ediacaran. The principal taxa were the marine macroalgae Fuxianospira, Sinocylindra, and Marpolia. No calcareous macroalgae are known from the period. No land plant (embryophyte) fossils are known from the Cambrian. However, biofilms and microbial mats were well developed on Cambrian tidal flats and beaches 500 mya, and microbes formed microbial Earth ecosystems, comparable with the modern soil crusts of desert regions, contributing to soil formation. Although molecular clock estimates suggest terrestrial plants may have first emerged during the Middle or Late Cambrian, the consequent large-scale removal of the greenhouse gas CO2 from the atmosphere through sequestration did not begin until the Ordovician. Oceanic life The Cambrian explosion was a period of rapid multicellular growth. Most animal life during the Cambrian was aquatic. Trilobites were once assumed to be the dominant life form at that time, but this has proven to be incorrect. Arthropods were by far the most dominant animals in the ocean, but trilobites were only a minor part of the total arthropod diversity. What made them so apparently abundant was their heavy armor reinforced by calcium carbonate (CaCO3), which fossilized far more easily than the fragile chitinous exoskeletons of other arthropods, leaving numerous preserved remains. The period marked a steep change in the diversity and composition of Earth's biosphere. The Ediacaran biota suffered a mass extinction at the start of the Cambrian Period, which corresponded with an increase in the abundance and complexity of burrowing behaviour. This behaviour had a profound and irreversible effect on the substrate, which transformed the seabed ecosystems. 
Before the Cambrian, the sea floor was covered by microbial mats. By the end of the Cambrian, burrowing animals had destroyed the mats in many areas through bioturbation. As a consequence, many of those organisms that were dependent on the mats became extinct, while the other species adapted to the changed environment that now offered new ecological niches. Around the same time there was a seemingly rapid appearance of representatives of all the mineralized phyla, including the Bryozoa, which were once thought to have only appeared in the Lower Ordovician. However, many of those phyla were represented only by stem-group forms; and since mineralized phyla generally have a benthic origin, they may not be a good proxy for (more abundant) non-mineralized phyla. While the early Cambrian showed such diversification that it has been named the Cambrian Explosion, this changed later in the period, when there occurred a sharp drop in biodiversity. About 515 million years ago, the number of species going extinct exceeded the number of new species appearing. Five million years later, the number of genera had dropped from an earlier peak of about 600 to just 450. Also, the speciation rate in many groups was reduced to between a fifth and a third of previous levels. About 500 million years ago, oxygen levels fell dramatically in the oceans, leading to hypoxia, while the level of poisonous hydrogen sulfide simultaneously increased, causing another extinction. The later half of the Cambrian was surprisingly barren and showed evidence of several rapid extinction events; the stromatolites, which had been replaced by reef-building sponges known as Archaeocyatha, returned once more as the archaeocyathids became extinct. This declining trend did not change until the Great Ordovician Biodiversification Event. Some Cambrian organisms ventured onto land, producing the trace fossils Protichnites and Climactichnites. Fossil evidence suggests that euthycarcinoids, an extinct group of arthropods, produced at least some of the Protichnites. Fossils of the track-maker of Climactichnites have not been found; however, fossil trackways and resting traces suggest a large, slug-like mollusc. In contrast to later periods, the Cambrian fauna was somewhat restricted; free-floating organisms were rare, with the majority living on or close to the sea floor; and mineralizing animals were rarer than in future periods, in part due to the unfavourable ocean chemistry. Many modes of preservation are unique to the Cambrian, and some preserve soft body parts, resulting in an abundance of . These include Sirius Passet, the Sinsk Algal Lens, the Maotianshan Shales, the Emu Bay Shale, and the Burgess Shale. Symbol The United States Federal Geographic Data Committee uses a "barred capital C" character to represent the Cambrian Period. The Unicode character is . Gallery See also Cambrian–Ordovician extinction event – circa 488 mya Dresbachian extinction event – circa 499 mya End Botomian extinction event – circa 513 mya List of fossil sites (with link directory) Type locality (geology), the locality where a particular rock type, stratigraphic unit, fossil or mineral species is first identified References Further reading External links Biostratigraphy – includes information on Cambrian trilobite biostratigraphy Sam Gon's trilobite pages (contains numerous Cambrian trilobites) Examples of Cambrian Fossils Paleomap Project Report on the web on Amthor and others from Geology vol. 
31 Weird Life on the Mats Chronostratigraphy scale v.2018/08 | Cambrian Geological periods
5370
https://en.wikipedia.org/wiki/Theory%20of%20categories
Theory of categories
In ontology, the theory of categories concerns itself with the categories of being: the highest genera or kinds of entities, according to Amie Thomasson. To investigate the categories of being, or simply categories, is to determine the most fundamental and the broadest classes of entities. A distinction between such categories, in making the categories or applying them, is called an ontological distinction. Various systems of categories have been proposed; they often include categories for substances, properties, relations, states of affairs or events. A representative question within the theory of categories is, for example, "Are universals prior to particulars?" Early development The process of abstraction required to discover the number and names of the categories of being has been undertaken by many philosophers since Aristotle and involves the careful inspection of each concept to ensure that there is no higher category or categories under which that concept could be subsumed. The scholars of the twelfth and thirteenth centuries developed Aristotle's ideas. For example, Gilbert of Poitiers divides Aristotle's ten categories into two sets, primary and secondary, according to whether they inhere in the subject or not: Primary categories: Substance, Relation, Quantity and Quality Secondary categories: Place, Time, Situation, Condition, Action, Passion Furthermore, following Porphyry's likening of the classificatory hierarchy to a tree, they concluded that the major classes could be subdivided to form subclasses, for example, Substance could be divided into Genus and Species, and Quality could be subdivided into Property and Accident, depending on whether the property was necessary or contingent. An alternative line of development was taken by Plotinus in the third century, who by a process of abstraction reduced Aristotle's list of ten categories to five: Substance, Relation, Quantity, Motion and Quality. Plotinus further suggested that the latter three categories of his list, namely Quantity, Motion and Quality, correspond to three different kinds of relation and that these three categories could therefore be subsumed under the category of Relation. This was to lead to the supposition that there were only two categories at the top of the hierarchical tree, namely Substance and Relation. Many supposed that relations only exist in the mind. Substance and Relation, then, are closely commutative with Mind and Matter; this is expressed most clearly in the dualism of René Descartes. Vaisheshika Stoic Aristotle One of Aristotle's early interests lay in the classification of the natural world, how, for example, the genus "animal" could be first divided into "two-footed animal" and then into "wingless, two-footed animal". He realised that the distinctions were being made according to the qualities the animal possesses, the quantity of its parts and the kind of motion that it exhibits. To fully complete the proposition "this animal is ...", Aristotle stated in his work on the Categories that there were ten kinds of predicate where ... "... each signifies either substance or quantity or quality or relation or where or when or being-in-a-position or having or acting or being acted upon". He realised that predicates could be simple or complex. The simple kinds consist of a subject and a predicate linked together by the "categorical" or inherent type of relation. 
For Aristotle the more complex kinds were limited to propositions where the predicate is compounded of two of the above categories for example "this is a horse running". More complex kinds of proposition were only discovered after Aristotle by the Stoic, Chrysippus, who developed the "hypothetical" and "disjunctive" types of syllogism and these were terms which were to be developed through the Middle Ages and were to reappear in Kant's system of categories. Category came into use with Aristotle's essay Categories, in which he discussed univocal and equivocal terms, predication, and ten categories: Substance, essence (ousia) – examples of primary substance: this man, this horse; secondary substance (species, genera): man, horse Quantity (poson, how much), discrete or continuous – examples: two cubits long, number, space, (length of) time. Quality (poion, of what kind or description) – examples: white, black, grammatical, hot, sweet, curved, straight. Relation (pros ti, toward something) – examples: double, half, large, master, knowledge. Place (pou, where) – examples: in a marketplace, in the Lyceum Time (pote, when) – examples: yesterday, last year Position, posture, attitude (keisthai, to lie) – examples: sitting, lying, standing State, condition (echein, to have or be) – examples: shod, armed Action (poiein, to make or do) – examples: to lance, to heat, to cool (something) Affection, passion (paschein, to suffer or undergo) – examples: to be lanced, to be heated, to be cooled Plotinus Plotinus in writing his Enneads around AD 250 recorded that "philosophy at a very early age investigated the number and character of the existents ... some found ten, others less .... to some the genera were the first principles, to others only a generic classification of existents". He realised that some categories were reducible to others saying "why are not Beauty, Goodness and the virtues, Knowledge and Intelligence included among the primary genera?" He concluded that such transcendental categories and even the categories of Aristotle were in some way posterior to the three Eleatic categories first recorded in Plato's dialogue Parmenides and which comprised the following three coupled terms: Unity/Plurality Motion/Stability Identity/Difference Plotinus called these "the hearth of reality" deriving from them not only the three categories of Quantity, Motion and Quality but also what came to be known as "the three moments of the Neoplatonic world process": First, there existed the "One", and his view that "the origin of things is a contemplation" The Second "is certainly an activity ... a secondary phase ... life streaming from life ... energy running through the universe" The Third is some kind of Intelligence concerning which he wrote "Activity is prior to Intellection ... and self knowledge" Plotinus likened the three to the centre, the radii and the circumference of a circle, and clearly thought that the principles underlying the categories were the first principles of creation. "From a single root all being multiplies". Similar ideas were to be introduced into Early Christian thought by, for example, Gregory of Nazianzus who summed it up saying "Therefore, Unity, having from all eternity arrived by motion at duality, came to rest in trinity". Modern development Kant and Hegel accused the Aristotelian table of categories of being 'rhapsodic', derived arbitrarily and in bulk from experience, without any systematic necessity. 
The early modern dualism, which has been described above, of Mind and Matter or Subject and Relation, as reflected in the writings of Descartes underwent a substantial revision in the late 18th century. The first objections to this stance were formulated in the eighteenth century by Immanuel Kant who realised that we can say nothing about Substance except through the relation of the subject to other things. For example: In the sentence "This is a house" the substantive subject "house" only gains meaning in relation to human use patterns or to other similar houses. The category of Substance disappears from Kant's tables, and under the heading of Relation, Kant lists inter alia the three relationship types of Disjunction, Causality and Inherence. The three older concepts of Quantity, Motion and Quality, as Peirce discovered, could be subsumed under these three broader headings in that Quantity relates to the subject through the relation of Disjunction; Motion relates to the subject through the relation of Causality; and Quality relates to the subject through the relation of Inherence. Sets of three continued to play an important part in the nineteenth century development of the categories, most notably in G.W.F. Hegel's extensive tabulation of categories, and in C.S. Peirce's categories set out in his work on the logic of relations. One of Peirce's contributions was to call the three primary categories Firstness, Secondness and Thirdness which both emphasises their general nature, and avoids the confusion of having the same name for both the category itself and for a concept within that category. In a separate development, and building on the notion of primary and secondary categories introduced by the Scholastics, Kant introduced the idea that secondary or "derivative" categories could be derived from the primary categories through the combination of one primary category with another. This would result in the formation of three secondary categories: the first, "Community" was an example that Kant gave of such a derivative category; the second, "Modality", introduced by Kant, was a term which Hegel, in developing Kant's dialectical method, showed could also be seen as a derivative category; and the third, "Spirit" or "Will" were terms that Hegel and Schopenhauer were developing separately for use in their own systems. Karl Jaspers in the twentieth century, in his development of existential categories, brought the three together, allowing for differences in terminology, as Substantiality, Communication and Will. This pattern of three primary and three secondary categories was used most notably in the nineteenth century by Peter Mark Roget to form the six headings of his Thesaurus of English Words and Phrases. The headings used were the three objective categories of Abstract Relation, Space (including Motion) and Matter and the three subjective categories of Intellect, Feeling and Volition, and he found that under these six headings all the words of the English language, and hence any possible predicate, could be assembled. Kant In the Critique of Pure Reason (1781), Immanuel Kant argued that the categories are part of our own mental structure and consist of a set of a priori concepts through which we interpret the world around us. These concepts correspond to twelve logical functions of the understanding which we use to make judgements and there are therefore two tables given in the Critique, one of the Judgements and a corresponding one for the Categories. 
To give an example, the logical function behind our reasoning from ground to consequence (based on the Hypothetical relation) underlies our understanding of the world in terms of cause and effect (the Causal relation). In each table the number twelve arises from, firstly, an initial division into two: the Mathematical and the Dynamical; a second division of each of these headings into a further two: Quantity and Quality, and Relation and Modality respectively; and, thirdly, each of these then divides into a further three subheadings as follows.
Table of Judgements
Mathematical
Quantity: Universal, Particular, Singular
Quality: Affirmative, Negative, Infinite
Dynamical
Relation: Categorical, Hypothetical, Disjunctive
Modality: Problematic, Assertoric, Apodictic
Table of Categories
Mathematical
Quantity: Unity, Plurality, Totality
Quality: Reality, Negation, Limitation
Dynamical
Relation: Inherence and Subsistence (substance and accident); Causality and Dependence (cause and effect); Community (reciprocity)
Modality: Possibility, Existence, Necessity
Criticism of Kant's system followed: firstly, from Arthur Schopenhauer, who amongst other things was unhappy with the term "Community", and declared that the tables "do open violence to truth, treating it as nature was treated by old-fashioned gardeners"; and secondly, from W. T. Stace, who in his book The Philosophy of Hegel suggested that in order to make Kant's structure completely symmetrical a third category would need to be added to the Mathematical and the Dynamical. This, he said, Hegel was to do with his category of concept. Hegel G.W.F. Hegel in his Science of Logic (1812) attempted to provide a more comprehensive system of categories than Kant and developed a structure that was almost entirely triadic. So important were the categories to Hegel that he claimed the first principle of the world, which he called the "absolute", is "a system of categories ... the categories must be the reason of which the world is a consequent". Using his own logical method of sublation, later called the Hegelian dialectic, reasoning from the abstract through the negative to the concrete, he arrived at a hierarchy of some 270 categories, as explained by W. T. Stace. The three very highest categories were "logic", "nature" and "spirit". The three highest categories of "logic", however, he called "being", "essence", and "notion", which he explained as follows:
Being was differentiated from Nothing by containing with it the concept of the "other", an initial internal division that can be compared with Kant's category of disjunction. Stace called the category of Being the sphere of common sense, containing concepts such as consciousness, sensation, quantity, quality and measure.
Essence. The "other" separates itself from the "one" by a kind of motion, reflected in Hegel's first synthesis of "becoming". For Stace this category represented the sphere of science, containing within it, firstly, the thing, its form and properties; secondly, cause, effect and reciprocity; and thirdly, the principles of classification, identity and difference.
Notion. Having passed over into the "Other" there is an almost neoplatonic return into a higher unity that, in embracing the "one" and the "other", enables them to be considered together through their inherent qualities. This, according to Stace, is the sphere of philosophy proper, where we find not only the three types of logical proposition: disjunctive, hypothetical, and categorical, but also the three transcendental concepts of beauty, goodness and truth. 
Schopenhauer's category that corresponded with "notion" was that of "idea", which in his Four-Fold Root of Sufficient Reason he complemented with the category of the "will". The title of his major work was The World as Will and Idea. The two other complementary categories, reflecting one of Hegel's initial divisions, were those of Being and Becoming. At around the same time, Goethe was developing his colour theories in the of 1810, and introduced similar principles of combination and complementation, symbolising, for Goethe, "the primordial relations which belong both to nature and vision". Hegel in his Science of Logic accordingly asks us to see his system not as a tree but as a circle. Twentieth-century development In the twentieth century the primacy of the division between the subjective and the objective, or between mind and matter, was disputed by, among others, Bertrand Russell and Gilbert Ryle. Philosophy began to move away from the metaphysics of categorisation towards the linguistic problem of trying to differentiate between, and define, the words being used. Ludwig Wittgenstein’s conclusion was that there were no clear definitions which we can give to words and categories but only a "halo" or "corona" of related meanings radiating around each term. Gilbert Ryle thought the problem could be seen in terms of dealing with "a galaxy of ideas" rather than a single idea, and suggested that category mistakes are made when a concept (e.g. "university"), understood as falling under one category (e.g. abstract idea), is used as though it falls under another (e.g. physical object). With regard to the visual analogies being used, Peirce and Lewis, just like Plotinus earlier, likened the terms of propositions to points, and the relations between the terms to lines. Peirce, taking this further, talked of univalent, bivalent and trivalent relations linking predicates to their subject and it is just the number and types of relation linking subject and predicate that determine the category into which a predicate might fall. Primary categories contain concepts where there is one dominant kind of relation to the subject. Secondary categories contain concepts where there are two dominant kinds of relation. Examples of the latter were given by Heidegger in his two propositions "the house is on the creek" where the two dominant relations are spatial location (Disjunction) and cultural association (Inherence), and "the house is eighteenth century" where the two relations are temporal location (Causality) and cultural quality (Inherence). A third example may be inferred from Kant in the proposition "the house is impressive or sublime" where the two relations are spatial or mathematical disposition (Disjunction) and dynamic or motive power (Causality). Both Peirce and Wittgenstein introduced the analogy of colour theory in order to illustrate the shades of meanings of words. Primary categories, like primary colours, are analytical representing the furthest we can go in terms of analysis and abstraction and include Quantity, Motion and Quality. Secondary categories, like secondary colours, are synthetic and include concepts such as Substance, Community and Spirit. Apart from these, the categorial scheme of Alfred North Whitehead and his Process Philosophy, alongside Nicolai Hartmann and his Critical Realism, remain one of the most detailed and advanced systems in categorial research in metaphysics. 
Peirce Charles Sanders Peirce, who had read Kant and Hegel closely, and who also had some knowledge of Aristotle, proposed a system of merely three phenomenological categories: Firstness, Secondness, and Thirdness, which he repeatedly invoked in his subsequent writings. Like Hegel, C.S. Peirce attempted to develop a system of categories from a single indisputable principle, in Peirce's case the notion that in the first instance he could only be aware of his own ideas. "It seems that the true categories of consciousness are first, feeling ... second, a sense of resistance ... and third, synthetic consciousness, or thought". Elsewhere he called the three primary categories: Quality, Reaction and Meaning, and even Firstness, Secondness and Thirdness, saying, "perhaps it is not right to call these categories conceptions, they are so intangible that they are rather tones or tints upon conceptions": Firstness (Quality): "The first is predominant in feeling ... we must think of a quality without parts, e.g. the colour of magenta ... When I say it is a quality I do not mean that it "inheres" in a subject ... The whole content of consciousness is made up of qualities of feeling, as truly as the whole of space is made up of points, or the whole of time by instants". Secondness (Reaction): "This is present even in such a rudimentary fragment of experience as a simple feeling ... an action and reaction between our soul and the stimulus ... The idea of second is predominant in the ideas of causation and of statical force ... the real is active; we acknowledge it by calling it the actual". Thirdness (Meaning): "Thirdness is essentially of a general nature ... ideas in which thirdness predominate [include] the idea of a sign or representation ... Every genuine triadic relation involves meaning ... the idea of meaning is irreducible to those of quality and reaction ... synthetical consciousness is the consciousness of a third or medium". Although Peirce's three categories correspond to the three concepts of relation given in Kant's tables, the sequence is now reversed and follows that given by Hegel, and indeed before Hegel of the three moments of the world-process given by Plotinus. Later, Peirce gave a mathematical reason for there being three categories in that although monadic, dyadic and triadic nodes are irreducible, every node of a higher valency is reducible to a "compound of triadic relations". Ferdinand de Saussure, who was developing "semiology" in France just as Peirce was developing "semiotics" in the US, likened each term of a proposition to "the centre of a constellation, the point where other coordinate terms, the sum of which is indefinite, converge". Others Edmund Husserl (1962, 2000) wrote extensively about categorial systems as part of his phenomenology. For Gilbert Ryle (1949), a category (in particular a "category mistake") is an important semantic concept, but one having only loose affinities to an ontological category. Contemporary systems of categories have been proposed by John G. Bennett (The Dramatic Universe, 4 vols., 1956–65), Wilfrid Sellars (1974), Reinhardt Grossmann (1983, 1992), Johansson (1989), Hoffman and Rosenkrantz (1994), Roderick Chisholm (1996), Barry Smith (ontologist) (2003), and Jonathan Lowe (2006). See also Categories (Aristotle) Categories (Peirce) Categories (Stoic) Category (Kant) Metaphysics Modal logic Ontology Schema (Kant) Similarity (philosophy) References Selected bibliography Aristotle, 1953. Metaphysics. Ross, W. D., trans. 
Oxford University Press. --------, 2004. Categories, Edghill, E. M., trans. Uni. of Adelaide library. John G. Bennett, 1956–1965. The Dramatic Universe. London, Hodder & Stoughton. Gustav Bergmann, 1992. New Foundations of Ontology. Madison: Uni. of Wisconsin Press. Browning, Douglas, 1990. Ontology and the Practical Arena. Pennsylvania State Uni. Butchvarov, Panayot, 1979. Being qua Being: A Theory of Identity, Existence, and Predication. Indiana Uni. Press. Roderick Chisholm, 1996. A Realistic Theory of Categories. Cambridge Uni. Press. Feibleman, James Kern, 1951. Ontology. The Johns Hopkins Press (reprinted 1968, Greenwood Press, Publishers, New York). Grossmann, Reinhardt, 1983. The Categorial Structure of the World. Indiana Uni. Press. Grossmann, Reinhardt, 1992. The Existence of the World: An Introduction to Ontology. Routledge. Haaparanta, Leila and Koskinen, Heikki J., 2012. Categories of Being: Essays on Metaphysics and Logic. New York: Oxford University Press. Hoffman, J., and Rosenkrantz, G. S.,1994. Substance among other Categories. Cambridge Uni. Press. Edmund Husserl, 1962. Ideas: General Introduction to Pure Phenomenology. Boyce Gibson, W. R., trans. Collier. ------, 2000. Logical Investigations, 2nd ed. Findlay, J. N., trans. Routledge. Johansson, Ingvar, 1989. Ontological Investigations. Routledge, 2nd ed. Ontos Verlag 2004. Kahn, Charles H., 2009. Essays on Being, Oxford University Press. Immanuel Kant, 1998. Critique of Pure Reason. Guyer, Paul, and Wood, A. W., trans. Cambridge Uni. Press. Charles Sanders Peirce, 1992, 1998. The Essential Peirce, vols. 1,2. Houser, Nathan et al., eds. Indiana Uni. Press. Gilbert Ryle, 1949. The Concept of Mind. Uni. of Chicago Press. Wilfrid Sellars, 1974, "Toward a Theory of the Categories" in Essays in Philosophy and Its History. Reidel. Barry Smith, 2003. "Ontology" in Blackwell Guide to the Philosophy of Computing and Information. Blackwell. External links Aristotle's Categories at MIT. "Ontological Categories and How to Use Them" – Amie Thomasson. "Recent Advances in Metaphysics" – E. J. Lowe. Theory and History of Ontology – Raul Corazzon. Concepts in metaphysics
5371
https://en.wikipedia.org/wiki/Concrete
Concrete
Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined. When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration, which hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material with many uses. This working time allows concrete not only to be cast in forms, but also to undergo a variety of tooling processes. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as steel rebar) embedded to provide tensile strength, yielding reinforced concrete. In the past, lime-based cement binders, such as lime putty, were often used, sometimes together with other hydraulic (water-resistant) cements, such as calcium aluminate cement, or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar. Whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. Grout is another material associated with concrete and cement. It does not contain coarse aggregates and is usually either pourable or thixotropic, and is used to fill gaps between masonry components or coarse aggregate which has already been put in place. Some methods of concrete manufacture and repair involve pumping grout into the gaps to make up a solid mass in situ. Etymology The word concrete comes from the Latin word "concretus" (meaning compact or condensed), the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow). History Ancient times Mayan concrete at the ruins of Uxmal (850–925 AD) is referenced in Incidents of Travel in the Yucatán by John L. Stephens. "The roof is flat and had been covered with cement". "The floors were cement, in some places hard, but, by long exposure, broken, and now crumbling under the feet." "But throughout the wall was solid, and consisting of large stones imbedded in mortar, almost as hard as rock." Small-scale production of concrete-like materials was pioneered by the Nabatean traders who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan from the 4th century BC. They discovered the advantages of hydraulic lime, with some self-cementing properties, by 700 BC. They built kilns to supply mortar for the construction of rubble masonry houses, concrete floors, and underground waterproof cisterns. 
They kept the cisterns secret, as these enabled the Nabataeans to thrive in the desert. Some of these structures survive to this day.
Classical era
In the Ancient Egyptian and later Roman eras, builders discovered that adding volcanic ash to lime allowed the mix to set underwater. They discovered the pozzolanic reaction. Concrete floors were found in the royal palace of Tiryns, Greece, which dates roughly to 1400–1200 BC. Lime mortars were used in Greece, such as in Crete and Cyprus, in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of waterproof concrete. Concrete was used for construction in many ancient structures. The Romans used concrete extensively from 300 BC to 476 AD. During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman architectural revolution, freed Roman construction from the restrictions of stone and brick materials. It enabled revolutionary new designs in terms of both structural complexity and dimension. The Colosseum in Rome was built largely of concrete, and the Pantheon has the world's largest unreinforced concrete dome. Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick. Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete. However, due to the absence of reinforcement, its tensile strength was far lower than that of modern reinforced concrete, and its mode of application also differed: Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension. The long-term durability of Roman concrete structures has been found to be due to its use of pyroclastic (volcanic) rock and ash, whereby the crystallization of strätlingite (a specific and complex calcium aluminosilicate hydrate) and the coalescence of this and similar calcium–aluminium–silicate–hydrate cementing binders helped give the concrete a greater degree of fracture resistance even in seismically active environments. Roman concrete is significantly more resistant to erosion by seawater than modern concrete; it used pyroclastic materials which react with seawater to form Al-tobermorite crystals over time. The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges, such as the magnificent Pont du Gard in southern France, have masonry cladding on a concrete core, as does the dome of the Pantheon.
Middle Ages
After the Roman Empire, the use of burned lime and pozzolana was greatly reduced. Low kiln temperatures in the burning of lime, lack of pozzolana, and poor mixing all contributed to a decline in the quality of concrete and mortar.
From the 11th century, the increased use of stone in church and castle construction led to an increased demand for mortar. Quality began to improve in the 12th century through better grinding and sieving. Medieval lime mortars and concretes were non-hydraulic and were used for binding masonry, "hearting" (binding rubble masonry cores) and foundations. Bartholomaeus Anglicus in his De proprietatibus rerum (1240) describes the making of mortar. In an English translation from 1397, it reads "lyme ... is a stone brent; by medlynge thereof with sonde and water sement is made". From the 14th century, the quality of mortar was again excellent, but only from the 17th century was pozzolana commonly added. The Canal du Midi was built using concrete in 1670.
Industrial era
Perhaps the greatest step forward in the modern use of concrete was Smeaton's Tower, built by British engineer John Smeaton in Devon, England, between 1756 and 1759. This third Eddystone Lighthouse pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate. A method for producing Portland cement was developed in England and patented by Joseph Aspdin in 1824. Aspdin chose the name for its similarity to Portland stone, which was quarried on the Isle of Portland in Dorset, England. His son William continued developments into the 1840s, earning him recognition for the development of "modern" Portland cement. Reinforced concrete was invented in 1849 by Joseph Monier, and the first reinforced concrete house was built by François Coignet in 1853. The first reinforced concrete bridge was designed and built by Joseph Monier in 1875. Prestressed concrete and post-tensioned concrete were pioneered by Eugène Freyssinet, a French structural and civil engineer. Concrete components or structures are compressed by tendon cables during, or after, their fabrication in order to strengthen them against tensile forces developing when put in service. Freyssinet patented the technique on 2 October 1928.
Composition
Concrete is an artificial composite material, comprising a matrix of cementitious binder (typically Portland cement paste or asphalt) and a dispersed phase or "filler" of aggregate (typically a rocky material, loose stones, and sand). The binder "glues" the filler together to form a synthetic conglomerate. Many types of concrete are available, determined by the formulations of binders and the types of aggregate used to suit the application of the engineered material. These variables determine strength and density, as well as chemical and thermal resistance of the finished product. Construction aggregates consist of large chunks of material in a concrete mix, generally a coarse gravel or crushed rocks such as limestone or granite, along with finer materials such as sand. Cement paste, most commonly made of Portland cement, is the most prevalent kind of concrete binder. For cementitious binders, water is mixed with the dry cement powder and aggregate, which produces a semi-liquid slurry (paste) that can be shaped, typically by pouring it into a form. The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust, stone-like material. Other cementitious materials, such as fly ash and slag cement, are sometimes added—either pre-blended with the cement or directly as a concrete component—and become a part of the binder for the aggregate.
Fly ash and slag can enhance some properties of concrete, such as fresh properties and durability. Alternatively, other materials can also be used as a concrete binder: the most prevalent substitute is asphalt, which is used as the binder in asphalt concrete. Admixtures are added to modify the cure rate or properties of the material. Mineral admixtures use recycled materials as concrete ingredients. Conspicuous materials include fly ash, a by-product of coal-fired power plants; ground granulated blast furnace slag, a by-product of steelmaking; and silica fume, a by-product of industrial electric arc furnaces. Structures employing Portland cement concrete usually include steel reinforcement because this type of concrete can be formulated with high compressive strength, but always has lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension, typically steel rebar. The mix design depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure.
Cement
Portland cement is the most common type of cement in general usage. It is a basic ingredient of concrete, mortar, and many plasters. British masonry worker Joseph Aspdin patented Portland cement in 1824. It was named because of the similarity of its color to Portland limestone, quarried from the English Isle of Portland and used extensively in London architecture. It consists of a mixture of calcium silicates (alite, belite), aluminates and ferrites—compounds which combine calcium, silicon, aluminium and iron in forms which will react with water. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay or shale (a source of silicon, aluminium and iron) and grinding this product (called clinker) with a source of sulfate (most commonly gypsum). In modern cement kilns, many advanced features are used to lower the fuel consumption per ton of clinker produced. Cement kilns are extremely large, complex, and inherently dusty industrial installations, and have emissions which must be controlled. Of the various ingredients used to produce a given quantity of concrete, the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allow cement kilns to efficiently and completely burn even difficult-to-use fuels.
Water
Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely. As stated by Abrams' law, a lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump; a brief numerical sketch of this relationship is given below. Impure water used to make concrete can cause problems during setting or cause premature failure of the structure. Portland cement consists of five major compounds of calcium silicates and aluminates, ranging from 5 to 50% by weight, which all undergo hydration to contribute to the final material's strength. Thus, the hydration of cement involves many reactions, often occurring at the same time.
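Before turning to those reactions, here is the brief numerical sketch of Abrams' law promised above: a minimal Python illustration of how estimated compressive strength falls as the water-to-cement ratio rises. The constants A and B are placeholder values chosen for demonstration only; real coefficients must be calibrated for a particular cement, aggregate and test age.

# Illustrative sketch of Abrams' law: compressive strength falls as the
# water-to-cement (w/c) ratio rises. A and B are placeholder constants;
# calibrated values differ between cements, aggregates and test ages.
def abrams_strength(w_c, A=96.5, B=7.0):
    """Rough 28-day compressive strength estimate, in MPa, for a given w/c ratio."""
    return A / (B ** w_c)

for w_c in (0.35, 0.45, 0.55, 0.65):
    print(f"w/c = {w_c:.2f}  ->  ~{abrams_strength(w_c):.0f} MPa")

With these assumed constants, lowering the w/c ratio from 0.65 to 0.45 raises the estimate from roughly 27 MPa to roughly 40 MPa, which is the qualitative behaviour the law describes.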
As the reactions proceed, the products of the cement hydration process gradually bond together the individual sand and gravel particles and other components of the concrete to form a solid mass.
Hydration of tricalcium silicate
Cement chemist notation: C3S + H → C-S-H + CH + heat
Standard notation: Ca3SiO5 + H2O → CaO·SiO2·H2O (gel) + Ca(OH)2 + heat
Balanced: 2 Ca3SiO5 + 7 H2O → 3 CaO·2 SiO2·4 H2O (gel) + 3 Ca(OH)2 + heat (approximate, as the exact ratios of CaO, SiO2 and H2O in C-S-H can vary)
Due to the nature of the chemical bonds created in these reactions and the final characteristics of the hardened cement paste formed, the process of cement hydration is considered irreversible.
Aggregates
Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements for natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash, are also permitted. The size distribution of the aggregate determines how much binder is required. Aggregate with a very even size distribution has the biggest gaps, whereas adding aggregate with smaller particles tends to fill these gaps. The binder must fill the gaps between the aggregate as well as paste the surfaces of the aggregate together, and is typically the most expensive component. Thus, variation in sizes of the aggregate reduces the cost of concrete. The aggregate is nearly always stronger than the binder, so its use does not negatively affect the strength of the concrete. Redistribution of aggregates after compaction often creates non-homogeneity due to the influence of vibration. This can lead to strength gradients. Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative "exposed aggregate" finish, popular among landscape designers.
Admixtures
Admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. Admixtures are defined as additions "made as the concrete mix is being prepared". The most common admixtures are retarders and accelerators. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing. The common types of admixtures are as follows:
Accelerators speed up the hydration (hardening) of the concrete. Typical materials used are calcium chloride, calcium nitrate and sodium nitrate. However, use of chlorides may cause corrosion in steel reinforcing and is prohibited in some countries, so nitrates may be favored, even though they are less effective than the chloride salt. Accelerating admixtures are especially useful for modifying the properties of concrete in cold weather.
Air entraining agents add and entrain tiny air bubbles in the concrete, which reduces damage during freeze-thaw cycles, increasing durability. However, entrained air entails a tradeoff with strength, as each 1% of air may decrease compressive strength by 5%. If too much air becomes trapped in the concrete as a result of the mixing process, defoamers can be used to encourage the air bubbles to agglomerate, rise to the surface of the wet concrete and then disperse.
Bonding agents (typically polymers) are used to create a bond between old and new concrete, with wide temperature tolerance and corrosion resistance.
Corrosion inhibitors are used to minimize the corrosion of steel and steel bars in concrete.
Crystalline admixtures are typically added during batching of the concrete to lower permeability. The reaction takes place when exposed to water and un-hydrated cement particles to form insoluble needle-shaped crystals, which fill capillary pores and micro-cracks in the concrete to block pathways for water and waterborne contaminants. Concrete with crystalline admixture can be expected to self-seal, as constant exposure to water will continuously initiate crystallization to ensure permanent waterproof protection.
Pigments can be used to change the color of concrete, for aesthetics.
Plasticizers increase the workability of plastic, or "fresh", concrete, allowing it to be placed more easily, with less consolidating effort. A typical plasticizer is lignosulfonate. Plasticizers can be used to reduce the water content of a concrete while maintaining workability and are sometimes called water-reducers due to this use. Such treatment improves its strength and durability characteristics.
Superplasticizers (also called high-range water-reducers) are a class of plasticizers that have fewer deleterious effects and can be used to increase workability more than is practical with traditional plasticizers. Superplasticizers are used to increase compressive strength; they increase the workability of the concrete and lower the water requirement by 15–30%.
Pumping aids improve pumpability, thicken the paste and reduce separation and bleeding.
Retarders slow the hydration of concrete and are used in large or difficult pours where partial setting is undesirable before completion of the pour. Typical polyol retarders are sugar, sucrose, sodium gluconate, glucose, citric acid, and tartaric acid.
Mineral admixtures and blended cements
Inorganic materials that have pozzolanic or latent hydraulic properties: these very fine-grained materials are added to the concrete mix to improve the properties of concrete (mineral admixtures), or as a replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix are being tested and used. These developments are ever growing in relevance to minimize the impacts caused by cement use, notorious for being one of the largest producers (at about 5 to 10%) of global greenhouse gas emissions. The use of alternative materials is also capable of lowering costs, improving concrete properties, and recycling wastes, the latter being relevant for circular economy aspects of the construction industry, whose demand is ever growing with greater impacts on raw material extraction, waste generation and landfill practices.
Fly ash: A by-product of coal-fired electric generating plants, it is used to partially replace Portland cement (by up to 60% by mass). The properties of fly ash depend on the type of coal burnt. In general, siliceous fly ash is pozzolanic, while calcareous fly ash has latent hydraulic properties.
Ground granulated blast furnace slag (GGBFS or GGBS): A by-product of steel production, it is used to partially replace Portland cement (by up to 80% by mass). It has latent hydraulic properties.
Silica fume: A by-product of the production of silicon and ferrosilicon alloys.
Silica fume is similar to fly ash, but has a particle size 100 times smaller. This results in a higher surface-to-volume ratio and a much faster pozzolanic reaction. Silica fume is used to increase strength and durability of concrete, but generally requires the use of superplasticizers for workability.
High reactivity metakaolin (HRM): Metakaolin produces concrete with strength and durability similar to concrete made with silica fume. While silica fume is usually dark gray or black in color, high-reactivity metakaolin is usually bright white in color, making it the preferred choice for architectural concrete where appearance is important.
Carbon nanofibers can be added to concrete to enhance compressive strength and gain a higher Young's modulus, and also to improve the electrical properties required for strain monitoring, damage evaluation and self-health monitoring of concrete. Carbon fiber has many advantages in terms of mechanical and electrical properties (e.g., higher strength) and self-monitoring behavior, owing to its high tensile strength and high electrical conductivity. Carbon products have been added to make concrete electrically conductive, for deicing purposes. New research from Japan's University of Kitakyushu shows that a washed and dried recycled mix of used diapers can reduce landfill waste and the amount of sand needed in concrete production. A model home was built in Indonesia to test the strength and durability of the new diaper-cement composite.
Production
Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. The usual method of placement is casting in formwork, which holds the mix in shape until it has set enough to hold its shape unaided. In general usage, concrete plants come in two main types: ready-mix plants and central-mix plants. A ready-mix plant mixes all the ingredients except water, while a central-mix plant mixes all the ingredients including water. A central-mix plant offers more accurate control of the concrete quality through better measurements of the amount of water added, but must be placed closer to the work site where the concrete will be used, since hydration begins at the plant. A concrete plant consists of large storage hoppers for various reactive ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck. Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms, which are containers erected in the field to give the concrete its desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into drier, non-fluid forms and used in factory settings to manufacture precast concrete products. A wide variety of equipment is used for processing concrete, from hand tools to heavy industrial machinery.
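As a simple illustration of the proportioning that a batch plant's weighing equipment automates, the sketch below converts a target volume into approximate batch masses. The 1:2:4 cement:sand:aggregate ratio (treated here as a mass ratio for simplicity), the water-to-cement ratio of 0.5 and the fresh density of 2400 kg/m3 are assumed round figures for illustration, not a mix design.

# Minimal batching sketch: split an assumed fresh density of 2400 kg/m3
# across a 1:2:4 cement:sand:aggregate mass ratio plus water at w/c = 0.5.
# All figures are illustrative assumptions, not a mix design.
def batch_masses(volume_m3, ratio=(1.0, 2.0, 4.0), w_c=0.5, fresh_density=2400.0):
    cement_parts, sand_parts, agg_parts = ratio
    total_parts = cement_parts + sand_parts + agg_parts + w_c * cement_parts
    total_mass = fresh_density * volume_m3
    cement = total_mass * cement_parts / total_parts
    return {
        "cement_kg": round(cement),
        "sand_kg": round(total_mass * sand_parts / total_parts),
        "aggregate_kg": round(total_mass * agg_parts / total_parts),
        "water_kg": round(cement * w_c),
    }

print(batch_masses(1.0))

For one cubic metre these assumptions give on the order of 320 kg of cement, 640 kg of sand, 1,280 kg of aggregate and 160 kg of water; an actual plant batches to a specified mix design rather than to these illustrative numbers.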
Whichever equipment builders use, however, the objective is to produce the desired building material; ingredients must be properly mixed, placed, shaped, and retained within time constraints. Any interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a cold joint between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product.
Design mix
Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' of 1 part cement, 2 parts sand, and 4 parts aggregate, a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix. Concrete mixes are primarily divided into nominal mix, standard mix and design mix. Nominal mix ratios are given by volume. Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance. Various governing bodies (such as British Standards) define nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cube strength.
Mixing
Thorough mixing is essential to produce uniform, high-quality concrete. Mixing the cement and water into a paste before combining these materials with aggregates has been shown to increase the compressive strength of the resulting concrete. The paste is generally mixed in a shear-type mixer at a w/c (water-to-cement ratio) of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water, and final mixing is completed in conventional concrete mixing equipment.
Sample analysis – Workability
Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration), and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. Changes in gradation can also affect workability of the concrete, although a wide range of gradation can be used for various applications.
An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles. Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and which is difficult to surface finish. Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm). A relatively wet concrete sample may slump as much as eight inches (200 mm). Workability can also be measured by the flow table test. Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixture, can increase the slump of a mix. High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted. After mixing, concrete is a fluid and can be pumped to the location where needed.
Curing
Maintaining optimal conditions for cement hydration
Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing hydration occurs, allowing calcium silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars. Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when the concrete has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often by increased use of cement, which increases shrinkage and cracking. The strength of concrete changes (increases) for up to three years, depending on the cross-sectional dimensions of the elements and the conditions under which the structure is used.
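The gradual strength gain just described is often approximated with an empirical curve of the ACI 209 type, f(t) = f28 · t / (a + b·t). The sketch below uses the coefficients commonly quoted for moist-cured ordinary Portland cement concrete (a = 4, b = 0.85) and an assumed 28-day strength of 30 MPa, purely for illustration; actual coefficients vary with cement type and curing conditions.

# Empirical strength-gain curve (ACI 209 form) for moist-cured ordinary
# Portland cement concrete: f(t) = f28 * t / (a + b * t), with t in days.
# The coefficients and the 28-day strength are assumed values for illustration.
def strength_at_age(t_days, f28=30.0, a=4.0, b=0.85):
    """Approximate compressive strength (MPa) at age t_days."""
    return f28 * t_days / (a + b * t_days)

for t in (3, 7, 14, 28, 90, 365):
    f = strength_at_age(t)
    print(f"day {t:>3}: ~{f:.1f} MPa ({100 * f / 30.0:.0f}% of the 28-day value)")

With these assumptions roughly 70% of the 28-day strength is reached at 7 days, while the curve keeps creeping upward for years, matching the slow long-term gain noted above.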
Addition of short-cut polymer fibers can reduce shrinkage-induced stresses during curing and increase early and ultimate compressive strength. Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause scaling, reduced strength, poor abrasion resistance and cracking.
Curing techniques avoiding water loss by evaporation
During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with "curing compounds" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use. Traditional conditions for curing involve spraying or ponding the concrete surface with water. One common approach is ponding: submerging the setting concrete in water and wrapping it in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete. For higher-strength applications, accelerated curing techniques may be applied to the concrete. A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature so that the hydration process proceeds more quickly and more thoroughly.
Alternative types
Asphalt
Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac, bitumen macadam, or rolled asphalt in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, and airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S. immigrant Edward De Smedt. The terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation AC is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material.
Graphene enhanced concrete
Graphene enhanced concretes are standard designs of concrete mixes, except that during the cement-mixing or production process, a small amount of chemically engineered graphene is added. These enhanced graphene concretes are designed around the concrete application.
Microbial
Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteurii, and Arthrobacter crystallopoietes increase the compressive strength of concrete through their biomass. However, some forms of bacteria can also be concrete-destroying. Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid. Bacillus pasteurii and B.
sphaericus can induce calcium carbonate precipitation on the surface of cracks, adding compressive strength.
Nanoconcrete
Nanoconcrete (also spelled "nano concrete" or "nano-concrete") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot bridges and highway bridges where high flexural and compressive strength are indicated.
Pervious
Pervious concrete is a mix of specially graded coarse aggregate, cement, water, and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding.
Polymer
Polymer concretes are mixtures of aggregate and any of various polymers and may be reinforced. The cement is costlier than lime-based cements, but polymer concretes nevertheless have advantages; they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for repair work and for the construction of specialized applications, such as drains.
Volcanic
Volcanic concrete substitutes volcanic rock for the limestone that is burned to form clinker. It consumes a similar amount of energy, but does not directly emit carbon as a byproduct. Volcanic rock and ash are used as supplementary cementitious materials in concrete to improve resistance to sulfate, chloride and alkali-silica reaction through pore refinement. They are also generally cost-effective in comparison to other aggregates, good for semi-lightweight and lightweight concretes, and good for thermal and acoustic insulation. Pyroclastic materials, such as pumice, scoria, and ashes, are formed from cooling magma during explosive volcanic eruptions. They are used as supplementary cementitious materials (SCM) or as aggregates for cements and concretes. They have been extensively used since ancient times to produce materials for building applications. For example, pumice and other volcanic glasses were added as a natural pozzolanic material for mortars and plasters during the construction of the Villa San Marco in the Roman period (89 BC – 79 AD), which remains one of the best-preserved otium villae of the Bay of Naples in Italy.
Waste light
Waste light is a form of polymer-modified concrete. The specific polymer admixture allows the replacement of all the traditional aggregates (gravel, sand, stone) by any mixture of solid waste materials in the grain size of 3–10 mm to form a low-compressive-strength (3–20 N/mm2) product for road and building construction. One cubic meter of waste light concrete contains 1.1–1.3 m3 of shredded waste and no other aggregates.
Sulfur concrete
Sulfur concrete is a special concrete that uses sulfur as a binder and does not require cement or water.
Properties
Concrete has relatively high compressive strength, but much lower tensile strength.
Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep. Tests can be performed to ensure that the properties of concrete correspond to specifications for the application. The ingredients affect the strength of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures. The strength of concrete is dictated by its function. Very low-strength concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, moderate-strength concrete is used, while somewhat higher-strength concrete is readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects, and still higher strengths are often used for specific building elements. For example, the lower floor columns of high-rise concrete buildings may use very high-strength concrete to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Very high strengths have been used commercially for these reasons.
Energy efficiency
The cement produced for making concrete accounts for about 8% of worldwide CO2 emissions per year (compared to, e.g., global aviation at 1.9%). The two largest sources of CO2 in the cement manufacturing process are (1) the decarbonation reaction of limestone in the cement kiln (T ≈ 950 °C), and (2) the combustion of fossil fuel to reach the sintering temperature (T ≈ 1450 °C) of cement clinker in the kiln. The energy required for extracting, crushing, and mixing the raw materials (construction aggregates used in the concrete production, and also limestone and clay feeding the cement kiln) is lower. The energy requirement for transportation of ready-mix concrete is also lower, because it is produced near the construction site from local resources, typically manufactured within 100 kilometers of the job site. The overall embodied energy of concrete, at roughly 1 to 1.5 megajoules per kilogram, is therefore lower than for many structural and construction materials. Once in place, concrete offers great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy.
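To put a rough number on the thermal-mass effect, the sketch below estimates how much heat one square metre of concrete wall absorbs and releases over a modest indoor temperature swing. The wall thickness, density, specific heat and temperature swing are assumed typical values, used for illustration only.

# Heat stored per square metre of concrete wall for a given temperature swing.
# Assumed typical values: 200 mm thick wall, density 2400 kg/m3,
# specific heat ~0.9 kJ/(kg*K), and a 5 K day-night temperature swing.
thickness_m = 0.2
density_kg_m3 = 2400.0
specific_heat_kj_kg_k = 0.9
delta_t_k = 5.0

mass_per_m2 = density_kg_m3 * thickness_m                  # kg of concrete per m2 of wall
heat_kj = mass_per_m2 * specific_heat_kj_kg_k * delta_t_k  # kJ stored or released per m2
print(f"{mass_per_m2:.0f} kg/m2 of wall stores ~{heat_kj:.0f} kJ "
      f"(~{heat_kj / 3600:.2f} kWh) per m2 over a {delta_t_k:.0f} K swing")

Under these assumptions each square metre of a 200 mm wall buffers roughly 0.6 kWh of heating or cooling load per 5 K swing, which is the smoothing effect the surrounding text describes.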
Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Fire safety
Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments such as a missile launch pad. Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and insulating concrete forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure. Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes, owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively more flexible structure is required to resist more extreme forces.
Earthquake safety
As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings (e.g., school buildings in Istanbul, Turkey).
Construction with concrete
Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth.
Reinforced concrete
The use of reinforcement, in the form of iron, was introduced in the 1850s by French industrialist François Coignet, and it was not until the 1880s that German civil engineer G. A. Wayss used steel as reinforcement. Concrete is a relatively brittle material that is strong under compression but less so in tension. Plain, unreinforced concrete is unsuitable for many structures as it is relatively poor at withstanding stresses induced by vibrations, wind loading, and so on. Hence, to increase its overall strength, steel rods, wires, mesh or cables can be embedded in concrete before it is set. This reinforcement, often known as rebar, resists tensile forces. Reinforced concrete (RC) is a versatile composite and one of the most widely used materials in modern construction. It is made up of different constituent materials with very different properties that complement each other.
In the case of reinforced concrete, the component materials are almost always concrete and steel. These two materials form a strong bond and are able to resist a variety of applied forces, effectively acting as a single structural element. Reinforced concrete can be precast or cast-in-place (in situ), and is used in a wide range of applications such as slab, wall, beam, column, foundation, and frame construction. Reinforcement is generally placed in areas of the concrete that are likely to be subject to tension, such as the lower portion of beams. Usually, there is a minimum of 50 mm of cover, both above and below the steel reinforcement, to resist spalling and corrosion, which can lead to structural instability. Other types of non-steel reinforcement, such as fibre-reinforced concretes, are used for specialized applications, predominantly as a means of controlling cracking.
Precast concrete
Precast concrete is concrete which is cast in one place for use elsewhere, and is a mobile material. The largest part of precast production is carried out in the works of specialist suppliers, although in some instances, due to economic and geographical factors, scale of product or difficulty of access, the elements are cast on or adjacent to the construction site. Precasting offers considerable advantages because it is carried out in a controlled environment, protected from the elements, but the downside is the contribution to greenhouse gas emissions from transportation to the construction site. Advantages to be achieved by employing precast concrete include:
Preferred dimension schemes exist, with elements of tried and tested designs available from a catalogue.
Major savings in time result from manufacture of structural elements apart from the series of events which determine overall duration of the construction, known by planning engineers as the 'critical path'.
Availability of laboratory facilities capable of the required control tests, many being certified for specific testing in accordance with national standards.
Equipment with capability suited to specific types of production, such as stressing beds with appropriate capacity, moulds and machinery dedicated to particular products.
High-quality finishes achieved direct from the mould eliminate the need for interior decoration and ensure low maintenance costs.
Mass structures
Due to cement's exothermic chemical reaction while setting up, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion. To mitigate these effects, post-cooling is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process to avoid damaging overheating. Similar systems are still used; depending on the volume of the pour, the concrete mix used, and the ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods are also used to pre-cool the concrete mix in mass concrete structures. Another approach to mass concrete structures that minimizes cement's thermal by-product is the use of roller-compacted concrete, which uses a dry mix that has a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material, then roller compacted into a dense, strong mass.
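A back-of-the-envelope estimate shows why mass pours such as dams need the cooling measures described above: if the cement's heat of hydration is released faster than it can escape, the temperature rise is roughly the released heat divided by the heat capacity of the concrete. The cement content, heat of hydration and thermal properties below are assumed, typical-order values, not data for any particular structure.

# Rough adiabatic temperature-rise estimate for a mass concrete pour.
# Assumed, typical-order values: 350 kg cement per m3, ~400 kJ of hydration
# heat per kg of cement, concrete density 2400 kg/m3, specific heat ~1.0 kJ/(kg*K).
def adiabatic_temp_rise(cement_kg_per_m3=350.0,
                        hydration_heat_kj_per_kg=400.0,
                        density_kg_per_m3=2400.0,
                        specific_heat_kj_per_kg_k=1.0):
    heat_released = cement_kg_per_m3 * hydration_heat_kj_per_kg    # kJ per m3
    heat_capacity = density_kg_per_m3 * specific_heat_kj_per_kg_k  # kJ per (m3*K)
    return heat_released / heat_capacity                           # kelvin

print(f"~{adiabatic_temp_rise():.0f} K temperature rise if no heat escapes")

A rise of several tens of kelvin in the core of a large pour, while the surface cools much faster, is what produces the thermal gradients and cracking risk that embedded cooling pipes and pre-cooling of the mix are meant to control.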
Surface finishes
Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing. Examples of improved appearance include stamped concrete, where the wet concrete has a pattern impressed on the surface to give a paved, cobbled or brick-like effect, which may be accompanied by coloration. Another popular effect for flooring and table tops is polished concrete, where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants. Other finishes can be achieved with chiseling, or with more conventional techniques such as painting or covering it with other materials. The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures.
Prestressed structures
Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag. Prestressed reinforcement along the bottom of the beam counteracts this. In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or, for post-tensioned concrete, after casting. There are two different systems in use:
Pretensioned concrete is almost always precast, and contains steel wires (tendons) that are held in tension while the concrete is placed and sets around them.
Post-tensioned concrete has ducts through it. After the concrete has gained strength, tendons are pulled through the ducts and stressed. The ducts are then filled with grout. Bridges built in this way have experienced considerable corrosion of the tendons, so external post-tensioning may now be used, in which the tendons run along the outer surface of the concrete.
Many highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used types of concrete functional extensions in modern days. For more information see Brutalist architecture.
Placement
Once mixed, concrete is typically transported to the place where it is intended to become a structural item. Various methods of transportation and placement are used depending on the distances involved, the quantity needed, and other details of the application. Large amounts are often transported by truck, poured free under gravity or through a tremie, or pumped through a pipe. Smaller amounts may be carried in a skip (a metal container which can be tilted or opened to release the contents, usually transported by crane or hoist), or wheelbarrow, or carried in toggle bags for manual placement underwater.
Cold weather placement
Extreme weather conditions (extreme heat or cold, windy conditions, and humidity variations) can significantly alter the quality of concrete. Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste.
If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing. The American Concrete Institute (ACI) definition of cold weather placement, ACI 306, is:
A period when for more than three successive days the average daily air temperature drops below 40 °F (~4.5 °C), and
the temperature stays below 50 °F (10 °C) for more than one-half of any 24-hour period.
In Canada, where temperatures tend to be much lower during the cold season, the following criteria are used by CSA A23.1:
When the air temperature is ≤ 5 °C, and
when there is a probability that the temperature may fall below 5 °C within 24 hours of placing the concrete.
Concrete must reach a minimum strength before it is exposed to extreme cold; CSA A23.1 specifies a compressive strength of 7.0 MPa to be considered safe for exposure to freezing.
Underwater placement
Concrete may be placed and cured underwater. Care must be taken in the placement method to prevent washing out the cement. Underwater placement methods include the tremie, pumping, skip placement, manual placement using toggle bags, and bagwork. Grouted aggregate is an alternative method of forming a concrete mass underwater, where the forms are filled with coarse aggregate and the voids are then completely filled with pumped grout.
Roads
Concrete roads are more fuel efficient to drive on, more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be less expensive on initial costs and significantly less expensive over the life cycle. Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road and reduces the need for slightly sloped roadways to help rainwater run off. No longer needing to discard rainwater through drains also means that less electricity is needed (since more pumping is otherwise required in the water-distribution system), and no rainwater gets polluted, as it no longer mixes with polluted water. Rather, it is immediately absorbed by the ground.
Environment, health and safety
The manufacture and use of concrete produce a wide range of environmental, economic and social impacts.
Concrete, cement and the environment
A major component of concrete is cement, a fine powder used mainly to bind sand and coarser aggregates together in concrete. Although a variety of cement types exist, the most common is "Portland cement", which is produced by mixing clinker with smaller quantities of other additives such as gypsum and ground limestone. The production of clinker, the main constituent of cement, is responsible for the bulk of the sector's greenhouse gas emissions, including both energy-related and process emissions. The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas – the other two being the energy production and transportation industries. On average, every tonne of cement produced releases one tonne of CO2 into the atmosphere. Pioneer cement manufacturers have claimed to reach lower carbon intensities, with 590 kg of CO2eq per tonne of cement produced. The emissions are due to combustion and calcination processes, which roughly account for 40% and 60% of the greenhouse gases, respectively.
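A simple order-of-magnitude sketch connects these per-tonne cement figures to the per-tonne concrete estimate quoted next. The cement content of the mix and the emission factor per tonne of cement are assumed, representative ranges, not measured data.

# Order-of-magnitude CO2 estimate for one tonne of concrete.
# Assumed ranges: 12-18% cement by mass, and 0.6-1.0 t CO2 per tonne of cement
# (spanning the claimed best-practice figure and the rough one-to-one average above).
for cement_fraction in (0.12, 0.18):
    for co2_per_t_cement in (0.6, 1.0):
        kg_co2 = 1000 * cement_fraction * co2_per_t_cement
        print(f"{cement_fraction:.0%} cement at {co2_per_t_cement} t CO2/t cement "
              f"-> ~{kg_co2:.0f} kg CO2 per tonne of concrete")

Depending on the assumptions, this lands in the range of very roughly 70 to 180 kg of CO2 per tonne of concrete, consistent in order of magnitude with the estimate that follows.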
Considering that cement is only a fraction of the constituents of concrete, it is estimated that a tonne of concrete is responsible for emitting about 100–200 kg of CO2. Every year more than 10 billion tonnes of concrete are used worldwide. In the coming years, large quantities of concrete will continue to be used, and the mitigation of CO2 emissions from the sector will be even more critical. Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt.
Concrete and climate change mitigation
Reducing the cement clinker content might have positive effects on the environmental life-cycle assessment of concrete. Some research work on reducing the cement clinker content in concrete has already been carried out. However, there exist different research strategies. Often, replacement of some clinker by large amounts of slag or fly ash was investigated based on conventional concrete technology. This could lead to a waste of scarce raw materials such as slag and fly ash. The aim of other research activities is the efficient use of cement and reactive materials like slag and fly ash in concrete based on a modified mix design approach. One environmental investigation found that the embodied carbon of a precast concrete facade can be reduced by 50% when a fiber-reinforced high-performance concrete is used in place of typical reinforced concrete cladding. Studies have also been conducted on the commercialization of low-carbon concretes. Life cycle assessment (LCA) of low-carbon concrete was investigated according to the ground granulated blast-furnace slag (GGBS) and fly ash (FA) replacement ratios. The global warming potential (GWP) decreased by 1.1 kg CO2 eq/m3 for GGBS and by 17.3 kg CO2 eq/m3 for FA when the mineral admixture replacement ratio was increased by 10%. This study also compared the compressive strength properties of binary blended low-carbon concrete according to the replacement ratios, and the applicable range of mixing proportions was derived. Researchers at the University of Auckland are working on utilizing biochar in concrete applications to reduce carbon emissions during concrete production and to improve strength.
Concrete and climate change adaptation
High-performance building materials will be particularly important for enhancing resilience, including for flood defenses and critical-infrastructure protection. Risks to infrastructure and cities posed by extreme weather events are especially serious for those places exposed to flood and hurricane damage, but also where residents need protection from extreme summer temperatures. Traditional concrete can come under strain when exposed to humidity and higher concentrations of atmospheric CO2. While concrete is likely to remain important in applications where the environment is challenging, novel, smarter and more adaptable materials are also needed.
Concrete – health and safety
Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The U.S.
National Institute for Occupational Safety and Health recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect on 23 September 2017 for construction companies, restricted the amount of respirable crystalline silica workers could legally come into contact with to 50 micrograms per cubic meter of air per 8-hour workday. That same rule went into effect on 23 June 2018 for general industry, hydraulic fracturing and maritime. The deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies which fail to meet the tightened safety regulations can face financial charges and extensive penalties. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment.
Circular economy
Concrete is an excellent material with which to make long-lasting and energy-efficient buildings. However, even with good design, human needs change and potential waste will be generated.
End-of-life: concrete degradation and waste
Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonation, chlorides, sulfates and distilled water). The micro fungi Aspergillus alternaria and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor, leaching aluminium, iron, calcium, and silicon. Concrete may be considered waste according to the European Commission decision 2014/955/EU for the List of Waste, under the codes 17 (construction and demolition wastes, including excavated soil from contaminated sites), 17 01 (concrete, bricks, tiles and ceramics), 17 01 01 (concrete), 17 01 06* (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics containing hazardous substances) and 17 01 07 (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics other than those mentioned in 17 01 06). It is estimated that in 2018 the European Union generated 371,910 thousand tons of mineral waste from construction and demolition, and close to 4% of this quantity is considered hazardous. Germany, France and the United Kingdom were the top three polluters, with 86,412 thousand tons, 68,976 thousand tons and 68,732 thousand tons of construction waste generated, respectively. Currently, there are no End-of-Waste criteria for concrete materials in the EU. However, different sectors have been proposing alternatives for concrete waste and repurposing it as a secondary raw material in various applications, including concrete manufacturing itself.
Reuse of concrete
Reuse of blocks in original form, or by cutting into smaller blocks, has even less environmental impact; however, only a limited market currently exists. Improved building designs that allow for slab reuse and building transformation without demolition could increase this use.
Hollow core concrete slabs are easy to dismantle and the span is normally constant, making them good for reuse. Other cases of re-use are possible with pre-cast concrete pieces: through selective demolition, such pieces can be disassembled and collected for further use on other building sites. Studies show that dismantling and remounting plans for building units (i.e., re-use of pre-fabricated concrete) offer an alternative form of construction that protects resources and saves energy. Long-lived, durable, energy-intensive building materials such as concrete, in particular, can be kept in the life cycle longer through recycling. Prefabrication is a prerequisite for structures that can later be taken apart. With optimal application in the building shell, cost savings are estimated at 26%, a lucrative complement to new building methods; however, this depends on several conditions being met. The viability of this alternative has to be studied, as the logistics associated with transporting heavy pieces of concrete can impact the operation financially and also increase the carbon footprint of the project. Also, ever-changing regulations on new buildings worldwide may require higher quality standards for construction elements and inhibit the use of old elements which may be classified as obsolete. Recycling of concrete Concrete recycling is an increasingly common method for disposing of concrete structures. Concrete debris was once routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, governmental laws and economic benefits. Contrary to general belief, concrete recovery is achievable – concrete can be crushed and reused as aggregate in new projects. Recycling or recovering concrete reduces natural resource exploitation and associated transportation costs, and reduces the amount of waste sent to landfill. However, it has little impact on reducing greenhouse gas emissions, as most emissions occur when cement is made, and cement alone cannot be recycled. At present, most recovered concrete is used for road sub-base and civil engineering projects. From a sustainability viewpoint, these relatively low-grade uses currently provide the optimal outcome. The recycling process can be done in situ, with mobile plants, or in specific recycling units. The input material can be returned concrete which is fresh (wet) from ready-mix trucks, production waste at a pre-cast production facility, or waste from construction and demolition. The most significant source is demolition waste, preferably pre-sorted from selective demolition processes. By far the most common method for recycling dry and hardened concrete involves crushing. Mobile sorters and crushers are often installed on construction sites to allow on-site processing. In other situations, specific processing sites are established, which are usually able to produce higher quality aggregate. Screens are used to achieve the desired particle size and to remove dirt, foreign particles and fine material from the coarse aggregate. Chlorides and sulfates are undesired contaminants originating from soil and weathering and can cause corrosion problems in aluminium and steel structures. The final product, recycled concrete aggregate (RCA), presents distinctive properties such as angular shape, rougher surface, lower specific gravity (by about 20%), higher water absorption, and pH greater than 11 – this elevated pH increases the risk of alkali reactions. 
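As a rough illustration of how the lower specific gravity noted above translates into extra volume per tonne, the following sketch runs the arithmetic with assumed round-number bulk densities; these densities are illustrative values chosen for the example, not figures taken from the studies behind this section.

```python
# Illustrative only: assumed loose bulk densities (tonnes per cubic metre),
# not values taken from the sources cited in this article.
NATURAL_AGGREGATE_BULK_DENSITY = 1.60  # t/m3, assumed
RCA_BULK_DENSITY = 1.40                # t/m3, assumed (lower than natural aggregate)

def volume_per_tonne(bulk_density_t_per_m3: float) -> float:
    """Volume (m3) occupied by one tonne of aggregate at the given bulk density."""
    return 1.0 / bulk_density_t_per_m3

natural = volume_per_tonne(NATURAL_AGGREGATE_BULK_DENSITY)
rca = volume_per_tonne(RCA_BULK_DENSITY)
extra_volume = (rca - natural) / natural

print(f"Natural aggregate: {natural:.3f} m3 per tonne")
print(f"RCA:               {rca:.3f} m3 per tonne")
print(f"Extra volume per tonne of RCA: {extra_volume:.0%}")  # about 14% with these assumptions
```

With these assumed densities the gain comes out close to the "up to 15%" yield figure quoted in the next sentence; actual gains depend on the parent concrete and on how the recycled material is crushed and graded.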
The lower density of RCA usually increases project efficiency and improves job cost – recycled concrete aggregates yield more volume by weight (up to 15%). The physical properties of coarse aggregates made from crushed demolition concrete make them a preferred material for applications such as road base and sub-base. This is because recycled aggregates often have better compaction properties and require less cement for sub-base uses. Furthermore, recycled aggregate is generally cheaper to obtain than virgin material. Applications of recycled concrete aggregate The main commercial applications of the final recycled concrete aggregate are: Aggregate base course (road base), or the untreated aggregates used as foundation for roadway pavement, is the underlying layer (under pavement surfacing) which forms a structural foundation for paving. To date, this has been the most popular application for RCA due to technical and economic factors. Aggregate for ready-mix concrete, by replacing from 10 to 45% of the natural aggregates in the concrete mix with a blend of cement, sand and water. Some concept buildings are showing the progress of this field. Because the RCA itself contains cement, the ratios of the mix have to be adjusted to achieve desired structural requirements such as workability, strength and water absorption. Soil Stabilization, with the incorporation of recycled aggregate, lime, or fly ash into marginal quality subgrade material used to enhance the load bearing capacity of that subgrade. Pipe bedding: serving as a stable bed or firm foundation in which to lay underground utilities. Some countries' regulations prohibit the use of RCA and other construction and demolition wastes in filtration and drainage beds due to potential contamination with chromium and pH-value impacts. Landscape Materials: to promote green architecture. To date, recycled concrete aggregate has been used as boulder/stacked rock walls, underpass abutment structures, erosion structures, water features, retaining walls, and more. Cradle-to-cradle challenges The applications developed for RCA so far are not exhaustive, and many more uses are to be developed as regulations, institutions and norms find ways to accommodate construction and demolition waste as secondary raw materials in a safe and economic way. However, considering the purpose of having a circularity of resources in the concrete life cycle, the only application of RCA that could be considered true recycling of concrete is the replacement of natural aggregates in concrete mixes. All the other applications would fall under the category of downcycling. It is estimated that even near-complete recovery of concrete from construction and demolition waste will only supply about 20% of total aggregate needs in the developed world. The path towards circularity goes beyond concrete technology itself, depending on multilateral advances in the cement industry, research and development of alternative materials, building design and management, and demolition as well as conscious use of spaces in urban areas to reduce consumption. World records The world record for the largest concrete pour in a single project is held by the Three Gorges Dam in Hubei Province, China, built by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record was 12.3 million cubic meters, held by the Itaipu hydropower station in Brazil. 
The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped through a vertical height of . On 6 January 2019, the Polavaram dam works in Andhra Pradesh entered the Guinness World Records by pouring 32,100 cubic metres of concrete in 24 hours. The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by contracting firm Al Habtoor-CCC Joint Venture, with the concrete supplied by Unibeton Ready Mix. The pour (part of the foundation for Abu Dhabi's Landmark Tower) was 16,000 cubic meters of concrete poured within a two-day period. The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm requiring the site to be covered with tarpaulins to allow work to continue, was achieved in 1992 by joint Japanese and South Korean consortiums Hazama Corporation and the Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia. The world record for the largest continuously poured concrete floor was completed 8 November 1997, in Louisville, Kentucky by design-build firm EXXCEL Project Management. The monolithic placement consisted of of concrete placed in 30 hours, finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area. The record for the largest continuously placed underwater concrete pour was completed 18 October 2010, in New Orleans, Louisiana by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in 58.5 hours using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allows the cofferdam to be dewatered approximately below sea level so that the construction of the Inner Harbor Navigation Canal Sill & Monolith Project can be completed in the dry. See also Further reading References External links Advantage and Disadvantage of Concrete Release of ultrafine particles from three simulated building processes Concrete: The Quest for Greener Alternatives Building materials Masonry Pavements Sculpture materials Composite materials Heterogeneous chemical mixtures Roofing materials
5373
https://en.wikipedia.org/wiki/Coitus%20interruptus
Coitus interruptus
Coitus interruptus, also known as withdrawal, pulling out or the pull-out method, is a method of birth control during penetrative sexual intercourse, whereby the penis is withdrawn from a vagina prior to ejaculation so that the ejaculate (semen) may be directed away from the vagina in an effort to avoid insemination. This method was used by an estimated 38 million couples worldwide in 1991. Coitus interruptus does not protect against sexually transmitted infections (STIs/STDs). History Perhaps the oldest description of the use of the withdrawal method to avoid pregnancy is the story of Onan in the Torah and the Bible. This text is believed to have been written down over 2,500 years ago. Societies in the ancient civilizations of Greece and Rome preferred small families and are known to have practiced a variety of birth control methods. There are references that have led historians to believe withdrawal was sometimes used as birth control. However, these societies viewed birth control as a woman's responsibility, and the only well-documented contraception methods were female-controlled devices (both possibly effective, such as pessaries, and ineffective, such as amulets). After the decline of the Roman Empire in the 5th century AD, contraceptive practices fell out of use in Europe; the use of contraceptive pessaries, for example, is not documented again until the 15th century. If withdrawal was used during the Roman Empire, knowledge of the practice may have been lost during its decline. From the 18th century until the development of modern methods, withdrawal was one of the most popular methods of birth-control in Europe, North America, and elsewhere. Effects Like many methods of birth control, reliable effect is achieved only by correct and consistent use. Observed failure rates of withdrawal vary depending on the population being studied: American studies have found actual failure rates of 15–28% per year. One U.S. study, based on self-reported data from the 2006-2010 cycle of the National Survey of Family Growth, found significant differences in failure rate based on parity status. Women with 0 previous births had a 12-month failure rate of only 8.4%, which then increased to 20.4% for those with 1 prior birth and again to 27.7% for those with 2 or more. An analysis of Demographic and Health Surveys in 43 developing countries between 1990 and 2013 found a median 12-month failure rate across subregions of 13.4%, with a range of 7.8-17.1%. Individual countries within the subregions were even more varied. A large scale study of women in England and Scotland during 1968–1974 to determine the efficacy of various contraceptive methods found a failure rate of 6.7 per 100 woman-years of use. This was a “typical use” failure rate, including user failure to use the method correctly. In comparison, the combined oral contraceptive pill has an actual use failure rate of 2–8%, while intrauterine devices (IUDs) have an actual use failure rate of 0.1–0.8%. Condoms have an actual use failure rate of 10–18%. However, some authors suggest that actual effectiveness of withdrawal could be similar to the effectiveness of condoms; this area needs further research. (See Comparison of birth control methods.) For couples that use coitus interruptus consistently and correctly at every act of intercourse, the failure rate is 4% per year. This rate is derived from an educated guess based on a modest chance of sperm in the pre-ejaculate. 
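A worked example may make the per-year figures above easier to compare. The sketch below treats each quoted annual failure rate as an independent probability of pregnancy in each year of use and computes the chance of at least one pregnancy over five years; the independence assumption is a simplification, and the specific rates are representative values picked from the ranges quoted above rather than results of any single study.

```python
# Representative annual failure rates taken from the ranges quoted above.
# Multi-year figures assume each year is independent, which is a simplification.
annual_failure_rates = {
    "withdrawal, typical use (mid-range of 15-28%)": 0.22,
    "withdrawal, perfect use": 0.04,
    "condoms, typical use (mid-range of 10-18%)": 0.14,
    "combined pill, typical use (mid-range of 2-8%)": 0.05,
    "IUD, typical use (upper end of 0.1-0.8%)": 0.008,
}

def prob_at_least_one_pregnancy(annual_rate: float, years: int) -> float:
    """Probability of at least one pregnancy over `years` of continuous use."""
    return 1.0 - (1.0 - annual_rate) ** years

for method, rate in annual_failure_rates.items():
    print(f"{method}: {rate:.1%} per year -> {prob_at_least_one_pregnancy(rate, 5):.0%} over 5 years")
```

For example, a 4% perfect-use rate compounds to roughly an 18% chance of pregnancy over five years, while a 22% typical-use rate compounds to roughly 71%.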
In comparison, the pill has a perfect-use failure rate of 0.3%, IUDs a rate of 0.1-0.6%, and internal condoms a rate of 2%. It has been suggested that the pre-ejaculate ("Cowper's fluid") emitted by the penis prior to ejaculation may contain spermatozoa (sperm cells), which would compromise the effectiveness of the method. However, several small studies have failed to find any viable sperm in the fluid. While no large conclusive studies have been done, it is believed by some that the cause of method (correct-use) failure is the pre-ejaculate fluid picking up sperm from a previous ejaculation. For this reason, it is recommended that the male partner urinate between ejaculations, to clear the urethra of sperm, and wash any ejaculate from objects that might come near the woman's vulva (e.g. hands and penis). However, recent research suggests that this might not be accurate. A contrary, though not generalizable, study that found mixed evidence, including individual cases of high sperm concentration, was published in March 2011. A noted limitation of the previous studies' findings is that pre-ejaculate samples were analyzed after the critical two-minute point. That is, looking for motile sperm in small amounts of pre-ejaculate via microscope after two minutes – when the sample has most likely dried – makes examination and evaluation "extremely difficult". Thus, in March 2011 a team of researchers assembled 27 male volunteers and analyzed their pre-ejaculate samples within two minutes of production. The researchers found that 11 of the 27 men (41%) produced pre-ejaculatory samples that contained sperm, and 10 of these samples (37%) contained a "fair amount" of motile sperm (i.e. as few as 1 million to as many as 35 million). This study therefore recommends, in order to minimize unintended pregnancy and disease transmission, the use of condoms from the first moment of genital contact. As a point of reference, a study showed that, of couples who conceived within a year of trying, only 2.5% included a male partner with a total sperm count (per ejaculate) of 23 million sperm or less. However, across a wide range of observed values, total sperm count (as with other identified semen and sperm characteristics) has weak power to predict which couples are at risk of pregnancy. Regardless, this study introduced the concept that some men may consistently have sperm in their pre-ejaculate due to "leakage", while others may not. Similarly, another robust study performed in 2016 found motile sperm in the pre-ejaculate of 16.7% (7/42) of healthy men. What is more, this study attempted to exclude contamination with sperm from the ejaculate by drying the pre-ejaculate specimens to reveal a fern-like pattern, characteristic of true pre-ejaculate. All pre-ejaculate specimens were examined within an hour of production and then dried; all were found to be true pre-ejaculate. It is widely believed that urinating after an ejaculation will flush the urethra of remaining sperm. However, some of the subjects in the March 2011 study who produced sperm in their pre-ejaculate did urinate (sometimes more than once) before producing their sample. Therefore, some males can release pre-ejaculate fluid containing sperm without a previous ejaculation. Advantages The advantage of coitus interruptus is that it can be used by people who have objections to, or do not have access to, other forms of contraception. 
Some people prefer it so they can avoid possible adverse effects of hormonal contraceptives or so that they can have a full experience and be able to "feel" their partner. Other reasons for the popularity of this method are that it has no direct monetary cost, requires no artificial devices, has no physical side effects, can be practiced without a prescription or medical consultation, and provides no barriers to stimulation. Disadvantages Compared to the other common reversible methods of contraception such as IUDs, hormonal contraceptives, and male condoms, coitus interruptus is less effective at preventing pregnancy. As a result, it is also less cost-effective than many more effective methods: although the method itself has no direct cost, users have a greater chance of incurring the risks and expenses of either childbirth or abortion. Only models that assume all couples practice perfect use of the method find cost savings associated with the choice of withdrawal as a birth control method. The method is largely ineffective in the prevention of sexually transmitted infections (STIs/STDs), like HIV, since pre-ejaculate may carry viral particles or bacteria which may infect the partner if this fluid comes in contact with mucous membranes. However, a reduction in the volume of bodily fluids exchanged during intercourse may reduce the likelihood of disease transmission compared to using no method, due to the smaller number of pathogens present. Prevalence Based on data from surveys conducted during the late 1990s, 3% of women of childbearing age worldwide rely on withdrawal as their primary method of contraception. Regional popularity of the method varies widely, from a low of 1% in Africa to 16% in Western Asia. In the United States, according to the National Survey of Family Growth (NSFG) in 2014, 8.1% of reproductive-aged women reported using withdrawal as a primary contraceptive method. This was a significant increase from 2012, when 4.8% of women reported the use of withdrawal as their most effective method. However, when withdrawal is used in addition to or in rotation with another contraceptive method, the percentage of women using withdrawal jumps from 5% for sole use to 11% for any withdrawal use in 2002, and for adolescents from 7.1% for sole withdrawal use to 14.6% for any withdrawal use in 2006–2008. When women were asked whether withdrawal had been used at least once in the past month, reported use increased from 13% as the sole method to 33% for any use in the past month. These increases are even more pronounced for adolescents 15 to 19 years old and young women 20 to 24 years old. Similarly, the NSFG reports that 9.8% of unmarried men who had had sexual intercourse in the last three months in 2002 used withdrawal, which then increased to 14.5% in 2006–2010, and then to 18.8% in 2011–2015. The use of withdrawal varied by the unmarried man's age and cohabiting status, but not by ethnicity or race. The use of withdrawal decreased significantly with increasing age groups, ranging from 26.2% among men aged 15–19 to 12% among men aged 35–44. The use of withdrawal was significantly higher for never-married men (23.0%) compared with formerly married (16.3%) and cohabiting (13.0%) men. In 1998, about 18% of married men in Turkey reported using withdrawal as a contraceptive method. See also Coitus reservatus Coitus saxonicus Masturbation References External links Contraception and abortion in Islam Withdrawal Methods of birth control Contraception for males Latin words and phrases
5374
https://en.wikipedia.org/wiki/Condom
Condom
A condom is a sheath-shaped barrier device used during sexual intercourse to reduce the probability of pregnancy or a sexually transmitted infection (STI). There are both male and female condoms. The male condom is rolled onto an erect penis before intercourse and works by forming a physical barrier which blocks semen from entering the body of a sexual partner. Male condoms are typically made from latex and, less commonly, from polyurethane, polyisoprene, or lamb intestine. Male condoms have the advantages of ease of use, ease of access, and few side effects. Individuals with latex allergy should use condoms made from a material other than latex, such as polyurethane. Female condoms are typically made from polyurethane and may be used multiple times. With proper use—and use at every act of intercourse—women whose partners use male condoms experience a 2% per-year pregnancy rate. With typical use, the rate of pregnancy is 18% per-year. Their use greatly decreases the risk of gonorrhea, chlamydia, trichomoniasis, hepatitis B, and HIV/AIDS. To a lesser extent, they also protect against genital herpes, human papillomavirus (HPV), and syphilis. Condoms as a method of preventing STIs have been used since at least 1564. Rubber condoms became available in 1855, followed by latex condoms in the 1920s. It is on the World Health Organization's List of Essential Medicines. As of 2019, globally around 21% of those using birth control use the condom, making it the second-most common method after female sterilization (24%). Rates of condom use are highest in East and Southeast Asia, Europe and North America. About six to nine billion are sold a year. Medical uses Birth control The effectiveness of condoms, as of most forms of contraception, can be assessed two ways. Perfect use or method effectiveness rates only include people who use condoms properly and consistently. Actual use, or typical use effectiveness rates are of all condom users, including those who use condoms incorrectly or do not use condoms at every act of intercourse. Rates are generally presented for the first year of use. Most commonly the Pearl Index is used to calculate effectiveness rates, but some studies use decrement tables. The typical use pregnancy rate among condom users varies depending on the population being studied, ranging from 10 to 18% per year. The perfect use pregnancy rate of condoms is 2% per year. Condoms may be combined with other forms of contraception (such as spermicide) for greater protection. Sexually transmitted infections Condoms are widely recommended for the prevention of sexually transmitted infections (STIs). They have been shown to be effective in reducing infection rates in both men and women. While not perfect, the condom is effective at reducing the transmission of organisms that cause AIDS, genital herpes, cervical cancer, genital warts, syphilis, chlamydia, gonorrhea, and other diseases. Condoms are often recommended as an adjunct to more effective birth control methods (such as IUD) in situations where STD protection is also desired. For this reason, condoms are frequently used by those in the swinging (sexual practice) community. According to a 2000 report by the National Institutes of Health (NIH), consistent use of latex condoms reduces the risk of HIV transmission by approximately 85% relative to risk when unprotected, putting the seroconversion rate (infection rate) at 0.9 per 100 person-years with condom, down from 6.7 per 100 person-years. 
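The Pearl Index mentioned above has a simple definition: the number of pregnancies per 100 woman-years of exposure. The sketch below computes it from raw counts and, separately, derives the relative risk reduction implied by the seroconversion rates quoted above; the pregnancy counts used here are hypothetical inputs for illustration, not data from any cited study.

```python
def pearl_index(pregnancies: int, woman_months: float) -> float:
    """Pregnancies per 100 woman-years of exposure: (pregnancies / woman-months) * 1200."""
    return pregnancies / woman_months * 1200.0

# Hypothetical example: 1,000 women each followed for 12 months, 20 pregnancies observed.
print(f"Pearl Index: {pearl_index(20, 1000 * 12):.1f} per 100 woman-years")  # 2.0

# Relative reduction implied by the HIV seroconversion rates quoted above
# (0.9 vs 6.7 per 100 person-years), broadly consistent with the ~85% figure.
with_condom, without_condom = 0.9, 6.7
print(f"Relative risk reduction: {1 - with_condom / without_condom:.0%}")  # ~87%
```

The Pearl Index is sensitive to how long couples are followed, which is one reason the article notes that some studies use decrement tables instead.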
Analysis published in 2007 from the University of Texas Medical Branch and the World Health Organization found similar risk reductions of 80–95%. The 2000 NIH review concluded that condom use significantly reduces the risk of gonorrhea for men. A 2006 study reports that proper condom use decreases the risk of transmission of human papillomavirus (HPV) to women by approximately 70%. Another study in the same year found consistent condom use was effective at reducing transmission of herpes simplex virus-2, also known as genital herpes, in both men and women. Although a condom is effective in limiting exposure, some disease transmission may occur even with a condom. Infectious areas of the genitals, especially when symptoms are present, may not be covered by a condom, and as a result, some diseases like HPV and herpes may be transmitted by direct contact. The primary effectiveness issue with using condoms to prevent STDs, however, is inconsistent use. Condoms may also be useful in treating potentially precancerous cervical changes. Exposure to human papillomavirus, even in individuals already infected with the virus, appears to increase the risk of precancerous changes. The use of condoms helps promote regression of these changes. In addition, researchers in the UK suggest that a hormone in semen can aggravate existing cervical cancer; condom use during sex can prevent exposure to the hormone. Causes of failure Condoms may slip off the penis after ejaculation, break due to improper application or physical damage (such as tears caused when opening the package), or break or slip due to latex degradation (typically from usage past the expiration date, improper storage, or exposure to oils). The rate of breakage is between 0.4% and 2.3%, while the rate of slippage is between 0.6% and 1.3%. Even if no breakage or slippage is observed, 1–3% of women will test positive for semen residue after intercourse with a condom. Failure rates are higher for anal sex, and until 2022, condoms were only approved by the FDA for vaginal sex. The One Male Condom received FDA approval for anal sex on February 23, 2022. "Double bagging", using two condoms at once, is often believed to cause a higher rate of failure due to the friction of rubber on rubber. This claim is not supported by research. The limited studies that have been done found that the simultaneous use of multiple condoms decreases the risk of condom breakage. Different modes of condom failure result in different levels of semen exposure. If a failure occurs during application, the damaged condom may be disposed of and a new condom applied before intercourse begins – such failures generally pose no risk to the user. One study found that semen exposure from a broken condom was about half that of unprotected intercourse; semen exposure from a slipped condom was about one-fifth that of unprotected intercourse. Standard condoms will fit almost any penis, with varying degrees of comfort or risk of slippage. Many condom manufacturers offer "snug" or "magnum" sizes. Some manufacturers also offer custom sized-to-fit condoms, with claims that they are more reliable and offer improved sensation/comfort. Some studies have associated larger penises and smaller condoms with increased breakage and decreased slippage rates (and vice versa), but other studies have been inconclusive. It is recommended that condom manufacturers avoid very thick and very thin condoms, because both are considered less effective. 
Some authors encourage users to choose thinner condoms "for greater durability, sensation, and comfort", but others warn that "the thinner the condom, the smaller the force required to break it". Experienced condom users are significantly less likely to have a condom slip or break compared to first-time users, although users who experience one slippage or breakage are more likely to suffer a second such failure. An article in Population Reports suggests that education on condom use reduces behaviors that increase the risk of breakage and slippage. A Family Health International publication also offers the view that education can reduce the risk of breakage and slippage, but emphasizes that more research needs to be done to determine all of the causes of breakage and slippage. Among people who intend condoms to be their form of birth control, pregnancy may occur when the user has sex without a condom. The person may have run out of condoms, or be traveling and not have a condom with them, or dislike the feel of condoms and decide to "take a chance". This behavior is the primary cause of typical use failure (as opposed to method or perfect use failure). Another possible cause of condom failure is sabotage. One motive is to have a child against a partner's wishes or consent. Some commercial sex workers from Nigeria reported clients sabotaging condoms in retaliation for being coerced into condom use. Using a fine needle to make several pinholes at the tip of the condom is believed to significantly impact on their effectiveness. Cases of such condom sabotage have occurred. Side effects The use of latex condoms by people with an allergy to latex can cause allergic symptoms, such as skin irritation. In people with severe latex allergies, using a latex condom can potentially be life-threatening. Repeated use of latex condoms can also cause the development of a latex allergy in some people. Irritation may also occur due to spermicides that may be present. Use Male condoms are usually packaged inside a foil or plastic wrapper, in a rolled-up form, and are designed to be applied to the tip of the penis and then unrolled over the erect penis. It is important that some space be left in the tip of the condom so that semen has a place to collect; otherwise it may be forced out of the base of the device. Most condoms have a teat end for this purpose. After use, it is recommended the condom be wrapped in tissue or tied in a knot, then disposed of in a trash receptacle. Condoms are used to reduce the likelihood of pregnancy during intercourse and to reduce the likelihood of contracting sexually transmitted infections (STIs). Condoms are also used during fellatio to reduce the likelihood of contracting STIs. Some couples find that putting on a condom interrupts sex, although others incorporate condom application as part of their foreplay. Some men and women find the physical barrier of a condom dulls sensation. Advantages of dulled sensation can include prolonged erection and delayed ejaculation; disadvantages might include a loss of some sexual excitement. Advocates of condom use also cite their advantages of being inexpensive, easy to use, and having few side effects. Adult film industry In 2012 proponents gathered 372,000 voter signatures through a citizens' initiative in Los Angeles County to put Measure B on the 2012 ballot. As a result, Measure B, a law requiring the use of condoms in the production of pornographic films, was passed. 
This requirement has received much criticism and is said by some to be counter-productive, merely forcing companies that make pornographic films to relocate to other places without this requirement. Producers claim that condom use depresses sales. Sex education Condoms are often used in sex education programs, because they have the capability to reduce the chances of pregnancy and the spread of some sexually transmitted diseases when used correctly. A recent American Psychological Association (APA) press release supported the inclusion of information about condoms in sex education, saying "comprehensive sexuality education programs ... discuss the appropriate use of condoms", and "promote condom use for those who are sexually active." In the United States, teaching about condoms in public schools is opposed by some religious organizations. Planned Parenthood, which advocates family planning and sex education, argues that no studies have shown abstinence-only programs to result in delayed intercourse, and cites surveys showing that 76% of American parents want their children to receive comprehensive sexuality education including condom use. Infertility treatment Common procedures in infertility treatment such as semen analysis and intrauterine insemination (IUI) require collection of semen samples. These are most commonly obtained through masturbation, but an alternative to masturbation is use of a special collection condom to collect semen during sexual intercourse. Collection condoms are made from silicone or polyurethane, as latex is somewhat harmful to sperm. Some religions prohibit masturbation entirely. Also, compared with samples obtained from masturbation, semen samples from collection condoms have higher total sperm counts, sperm motility, and percentage of sperm with normal morphology. For this reason, they are believed to give more accurate results when used for semen analysis, and to improve the chances of pregnancy when used in procedures such as intracervical or intrauterine insemination. Adherents of religions that prohibit contraception, such as Catholicism, may use collection condoms with holes pricked in them. For fertility treatments, a collection condom may be used to collect semen during sexual intercourse where the semen is provided by the woman's partner. Private sperm donors may also use a collection condom to obtain samples through masturbation or by sexual intercourse with a partner and will transfer the ejaculate from the collection condom to a specially designed container. The sperm is transported in such containers, in the case of a donor, to a recipient woman to be used for insemination, and in the case of a woman's partner, to a fertility clinic for processing and use. However, transportation may reduce the fecundity of the sperm. Collection condoms may also be used where semen is produced at a sperm bank or fertility clinic. Condom therapy is sometimes prescribed to infertile couples when the female has high levels of antisperm antibodies. The theory is that preventing exposure to her partner's semen will lower her level of antisperm antibodies, and thus increase her chances of pregnancy when condom therapy is discontinued. However, condom therapy has not been shown to increase subsequent pregnancy rates. Other uses Condoms excel as multipurpose containers and barriers because they are waterproof, elastic, durable, and (for military and espionage uses) will not arouse suspicion if found. 
Ongoing military utilization began during World War II, and includes covering the muzzles of rifle barrels to prevent fouling, the waterproofing of firing assemblies in underwater demolitions, and storage of corrosive materials and garrotes by paramilitary agencies. Condoms have also been used to smuggle alcohol, cocaine, heroin, and other drugs across borders and into prisons by filling the condom with drugs, tying it in a knot and then either swallowing it or inserting it into the rectum. These methods are very dangerous and potentially lethal; if the condom breaks, the drugs inside become absorbed into the bloodstream and can cause an overdose. Medically, condoms can be used to cover endovaginal ultrasound probes, or in field chest needle decompressions they can be used to make a one-way valve. Condoms have also been used to protect scientific samples from the environment, and to waterproof microphones for underwater recording. Types Most condoms have a reservoir tip or teat end, making it easier to accommodate the man's ejaculate. Condoms come in different sizes and shapes. They also come in a variety of surfaces intended to stimulate the user's partner. Condoms are usually supplied with a lubricant coating to facilitate penetration, while flavored condoms are principally used for oral sex. As mentioned above, most condoms are made of latex, but polyurethane and lambskin condoms also exist. Female condom Male condoms have a tight ring to form a seal around the penis, while female condoms usually have a large stiff ring to prevent them from slipping into the body orifice. The Female Health Company produced a female condom that was initially made of polyurethane, but newer versions are made of nitrile rubber. Medtech Products produces a female condom made of latex. Materials Natural latex Latex has outstanding elastic properties: its tensile strength exceeds 30 MPa, and latex condoms may be stretched in excess of 800% before breaking. In 1990 the ISO set standards for condom production (ISO 4074, Natural latex rubber condoms), and the EU followed suit with its CEN standard (Directive 93/42/EEC concerning medical devices). Every latex condom is tested for holes with an electric current. If the condom passes, it is rolled and packaged. In addition, a portion of each batch of condoms is subject to water leak and air burst testing. While the advantages of latex have made it the most popular condom material, it does have some drawbacks. Latex condoms are damaged when used with oil-based substances as lubricants, such as petroleum jelly, cooking oil, baby oil, mineral oil, skin lotions, suntan lotions, cold creams, butter or margarine. Contact with oil makes latex condoms more likely to break or slip off due to loss of elasticity caused by the oils. Additionally, latex allergy precludes use of latex condoms and is one of the principal reasons for the use of other materials. In May 2009, the U.S. Food and Drug Administration (FDA) granted approval for the production of condoms composed of Vytex, latex that has been treated to remove 90% of the proteins responsible for allergic reactions. An allergen-free condom made of synthetic latex (polyisoprene) is also available. Synthetic The most common non-latex condoms are made from polyurethane. Condoms may also be made from other synthetic materials, such as AT-10 resin and, most recently, polyisoprene. Polyurethane condoms tend to be the same width and thickness as latex condoms, with most polyurethane condoms between 0.04 mm and 0.07 mm thick. 
Polyurethane can be considered better than latex in several ways: it conducts heat better than latex, is not as sensitive to temperature and ultraviolet light (and so has less rigid storage requirements and a longer shelf life), can be used with oil-based lubricants, is less allergenic than latex, and does not have an odor. Polyurethane condoms have gained FDA approval for sale in the United States as an effective method of contraception and HIV prevention, and under laboratory conditions have been shown to be just as effective as latex for these purposes. However, polyurethane condoms are less elastic than latex ones, and may be more likely to slip or break than latex, lose their shape or bunch up more than latex, and are more expensive. Polyisoprene is a synthetic version of natural rubber latex. While significantly more expensive, it has the advantages of latex (such as being softer and more elastic than polyurethane condoms) without the protein which is responsible for latex allergies. Unlike polyurethane condoms, they cannot be used with an oil-based lubricant. Lambskin Condoms made from sheep intestines, labeled "lambskin", are also available. Although they are generally effective as a contraceptive by blocking sperm, it is presumed that they are less effective than latex in preventing the transmission of sexually transmitted infections because of pores in the material. This is based on the idea that intestines, by their nature, are porous, permeable membranes, and while sperm are too large to pass through the pores, viruses — such as HIV, herpes, and genital warts — are small enough to pass. However, there are to date no clinical data confirming or denying this theory. As a result of laboratory data on condom porosity, in 1989, the FDA began requiring lambskin condom manufacturers to indicate that the products were not to be used for the prevention of sexually transmitted infections. This was based on the presumption that lambskin condoms would be less effective than latex in preventing HIV transmission, rather than a conclusion that lambskin condoms lack efficacy in STI prevention altogether. An FDA publication in 1992 states that lambskin condoms "provide good birth control and a varying degree of protection against some, but not all, sexually transmitted diseases" and that the labelling requirement was decided upon because the FDA "cannot expect people to know which STDs they need to be protected against", and since "the reality is that you don't know what your partner has, we wanted natural-membrane condoms to have labels that don't allow the user to assume they're effective against the small viral STDs." Some believe that lambskin condoms provide a more "natural" sensation and lack the allergens inherent to latex. Still, because of their lesser protection against infection, other hypoallergenic materials such as polyurethane are recommended for latex-allergic users and partners. Lambskin condoms are also significantly more expensive than different types, and as slaughter by-products, they are also not vegetarian. Spermicide Some latex condoms are lubricated at the manufacturer with a small amount of a nonoxynol-9, a spermicidal chemical. According to Consumer Reports, condoms lubricated with spermicide have no additional benefit in preventing pregnancy, have a shorter shelf life, and may cause urinary tract infections in women. In contrast, application of separately packaged spermicide is believed to increase the contraceptive efficacy of condoms. 
Nonoxynol-9 was once believed to offer additional protection against STDs (including HIV) but recent studies have shown that, with frequent use, nonoxynol-9 may increase the risk of HIV transmission. The World Health Organization says that spermicidally lubricated condoms should no longer be promoted. However, it recommends using a nonoxynol-9 lubricated condom over no condom at all. Nine condom manufacturers have stopped manufacturing condoms with nonoxynol-9, and Planned Parenthood has discontinued the distribution of condoms so lubricated. Ribbed and studded Textured condoms include studded and ribbed condoms which can provide extra sensations to both partners. The studs or ribs can be located on the inside, outside, or both; alternatively, they are located in specific sections to provide directed stimulation to either the G-spot or frenulum. Many textured condoms which advertise "mutual pleasure" also are bulb-shaped at the top, to provide extra stimulation to the penis. Some women experience irritation during vaginal intercourse with studded condoms. Other A Swiss company (Lamprecht AG) produces extra small condoms aimed at the teenage market. Designed to be used by boys as young as fourteen, Ceylor 'Hotshot' condoms are aimed at reducing teenage pregnancies. The anti-rape condom is another variation designed to be worn by women. It is designed to cause pain to the attacker, hopefully allowing the victim a chance to escape. A collection condom is used to collect semen for fertility treatments or sperm analysis. These condoms are designed to maximize sperm life and may be coated on the inside with a sperm-friendly lubricant. Some condom-like devices are intended for entertainment only, such as glow-in-the-dark condoms. These novelty condoms may not provide protection against pregnancy and STDs. In February 2022, the U.S. Food and Drug Administration (FDA) approved the first condoms specifically indicated to help reduce transmission of sexually transmitted infections (STIs) during anal intercourse. Prevalence The prevalence of condom use varies greatly between countries. Most surveys of contraceptive use are among married women, or women in informal unions. Japan has the highest rate of condom usage in the world: in that country, condoms account for almost 80% of contraceptive use by married women. On average, in developed countries, condoms are the most popular method of birth control: 28% of married contraceptive users rely on condoms. In the average less-developed country, condoms are less common: only 6–8% of married contraceptive users choose condoms. History Before the 19th century Whether condoms were used in ancient civilizations is debated by archaeologists and historians. In ancient Egypt, Greece, and Rome, pregnancy prevention was generally seen as a woman's responsibility, and the only well-documented contraception methods were female-controlled devices. In Asia before the 15th century, some use of glans condoms (devices covering only the head of the penis) is recorded. Condoms seem to have been used for contraception, and to have been known only by members of the upper classes. In China, glans condoms may have been made of oiled silk paper, or of lamb intestines. In Japan, condoms called Kabuto-gata (甲形) were made of tortoise shell or animal horn. In 16th-century Italy, anatomist and physician Gabriele Falloppio wrote a treatise on syphilis. 
The earliest documented strain of syphilis, first appearing in Europe in a 1490s outbreak, caused severe symptoms and often death within a few months of contracting the disease. Falloppio's treatise is the earliest uncontested description of condom use: it describes linen sheaths soaked in a chemical solution and allowed to dry before use. The cloths he described were sized to cover the glans of the penis, and were held on with a ribbon. Falloppio claimed that an experimental trial of the linen sheath demonstrated protection against syphilis. After this, the use of penis coverings to protect from disease is described in a wide variety of literature throughout Europe. The first indication that these devices were used for birth control, rather than disease prevention, is the 1605 theological publication De iustitia et iure (On justice and law) by Catholic theologian Leonardus Lessius, who condemned them as immoral. In 1666, the English Birth Rate Commission attributed a recent downward fertility rate to use of "condons", the first documented use of that word or any similar spelling. Other early spellings include "condam" and "quondam", from which the Italian derivation guantone has been suggested, from guanto, "a glove". In addition to linen, condoms during the Renaissance were made out of intestines and bladder. In the late 16th century, Dutch traders introduced condoms made from "fine leather" to Japan. Unlike the horn condoms used previously, these leather condoms covered the entire penis. Casanova in the 18th century was one of the first reported to have used "assurance caps" to prevent impregnating his mistresses. From at least the 18th century, condom use was opposed in some legal, religious, and medical circles for essentially the same reasons that are given today: condoms reduce the likelihood of pregnancy, which some thought immoral or undesirable for the nation; they do not provide full protection against sexually transmitted infections, while belief in their protective powers was thought to encourage sexual promiscuity; and they are not used consistently due to inconvenience, expense, or loss of sensation. Despite some opposition, the condom market grew rapidly. In the 18th century, condoms were available in a variety of qualities and sizes, made from either linen treated with chemicals, or "skin" (bladder or intestine softened by treatment with sulfur and lye). They were sold at pubs, barbershops, chemist shops, open-air markets, and at the theater throughout Europe and Russia. They later spread to America, although in every place they were generally used only by the middle and upper classes, due to both expense and lack of sex education. 
In spite of these restrictions, condoms were promoted by traveling lecturers and in newspaper advertisements, using euphemisms in places where such ads were illegal. Instructions on how to make condoms at home were distributed in the United States and Europe. Despite social and legal opposition, at the end of the 19th century the condom was the Western world's most popular birth control method. Beginning in the second half of the 19th century, American rates of sexually transmitted diseases skyrocketed. Causes cited by historians include the effects of the American Civil War and the ignorance of prevention methods promoted by the Comstock laws. To fight the growing epidemic, sex education classes were introduced to public schools for the first time, teaching about venereal diseases and how they were transmitted. They generally taught abstinence was the only way to avoid sexually transmitted diseases. Condoms were not promoted for disease prevention because the medical community and moral watchdogs considered STDs to be punishment for sexual misbehavior. The stigma against people with these diseases was so significant that many hospitals refused to treat people with syphilis. The German military was the first to promote condom use among its soldiers in the later 19th century. Early 20th century experiments by the American military concluded that providing condoms to soldiers significantly lowered rates of sexually transmitted diseases. During World War I, the United States and (at the beginning of the war only) Britain were the only countries with soldiers in Europe who did not provide condoms and promote their use. In the decades after World War I, there remained social and legal obstacles to condom use throughout the U.S. and Europe. Founder of psychoanalysis Sigmund Freud opposed all methods of birth control because their failure rates were too high. Freud was especially opposed to the condom because he thought it cut down on sexual pleasure. Some feminists continued to oppose male-controlled contraceptives such as condoms. In 1920 the Church of England's Lambeth Conference condemned all "unnatural means of conception avoidance". The Bishop of London, Arthur Winnington-Ingram, complained of the huge number of condoms discarded in alleyways and parks, especially after weekends and holidays. However, European militaries continued to provide condoms to their members for disease protection, even in countries where they were illegal for the general population. Through the 1920s, catchy names and slick packaging became an increasingly important marketing technique for many consumer items, including condoms and cigarettes. Quality testing became more common, involving filling each condom with air followed by one of several methods intended to detect loss of pressure. Worldwide, condom sales doubled in the 1920s. Rubber and manufacturing advances In 1839, Charles Goodyear discovered a way of processing natural rubber, which is too stiff when cold and too soft when warm, in such a way as to make it elastic. This proved to have advantages for the manufacture of condoms; unlike the sheep's gut condoms, they could stretch and did not tear quickly when used. The rubber vulcanization process was patented by Goodyear in 1844. The first rubber condom was produced in 1855. The earliest rubber condoms had a seam and were as thick as a bicycle inner tube. Besides this type, small rubber condoms covering only the glans were often used in England and the United States. 
There was more risk of losing them and if the rubber ring was too tight, it would constrict the penis. This type of condom was the original "capote" (French for condom), perhaps because of its resemblance to a woman's bonnet worn at that time, also called a capote. For many decades, rubber condoms were manufactured by wrapping strips of raw rubber around penis-shaped molds, then dipping the wrapped molds in a chemical solution to cure the rubber. In 1912, Polish-born inventor Julius Fromm developed a new, improved manufacturing technique for condoms: dipping glass molds into a raw rubber solution. Called cement dipping, this method required adding gasoline or benzene to the rubber to make it liquid. Around 1920 patent lawyer and vice-president of the United States Rubber Company Ernest Hopkinson invented a new technique of converting latex into rubber without a coagulant (demulsifier), which featured using water as a solvent and warm air to dry the solution, as well as optionally preserving liquid latex with ammonia. Condoms made this way, commonly called "latex" ones, required less labor to produce than cement-dipped rubber condoms, which had to be smoothed by rubbing and trimming. The use of water to suspend the rubber instead of gasoline and benzene eliminated the fire hazard previously associated with all condom factories. Latex condoms also performed better for the consumer: they were stronger and thinner than rubber condoms, and had a shelf life of five years (compared to three months for rubber). Until the twenties, all condoms were individually hand-dipped by semi-skilled workers. Throughout the decade of the 1920s, advances in the automation of the condom assembly line were made. The first fully automated line was patented in 1930. Major condom manufacturers bought or leased conveyor systems, and small manufacturers were driven out of business. The skin condom, now significantly more expensive than the latex variety, became restricted to a niche high-end market. 1930 to present In 1930 the Anglican Church's Lambeth Conference sanctioned the use of birth control by married couples. In 1931 the Federal Council of Churches in the U.S. issued a similar statement. The Roman Catholic Church responded by issuing the encyclical Casti connubii affirming its opposition to all contraceptives, a stance it has never reversed. In the 1930s, legal restrictions on condoms began to be relaxed. But during this period Fascist Italy and Nazi Germany increased restrictions on condoms (limited sales as disease preventatives were still allowed). During the Depression, condom lines by Schmid gained in popularity. Schmid still used the cement-dipping method of manufacture which had two advantages over the latex variety. Firstly, cement-dipped condoms could be safely used with oil-based lubricants. Secondly, while less comfortable, these older-style rubber condoms could be reused and so were more economical, a valued feature in hard times. More attention was brought to quality issues in the 1930s, and the U.S. Food and Drug Administration began to regulate the quality of condoms sold in the United States. Throughout World War II, condoms were not only distributed to male U.S. military members, but also heavily promoted with films, posters, and lectures. European and Asian militaries on both sides of the conflict also provided condoms to their troops throughout the war, even Germany which outlawed all civilian use of condoms in 1941. 
In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to this day. After the war, condom sales continued to grow. From 1955 to 1965, 42% of Americans of reproductive age relied on condoms for birth control. In Britain from 1950 to 1960, 60% of married couples used condoms. The birth control pill became the world's most popular method of birth control in the years after its 1960 début, but condoms remained a strong second. The U.S. Agency for International Development pushed condom use in developing countries to help solve the "world population crises": by 1970 hundreds of millions of condoms were being used each year in India alone.(This number has grown in recent decades: in 2004, the government of India purchased 1.9 billion condoms for distribution at family planning clinics.) In the 1960s and 1970s quality regulations tightened, and more legal barriers to condom use were removed. In Ireland, legal condom sales were allowed for the first time in 1978. Advertising, however was one area that continued to have legal restrictions. In the late 1950s, the American National Association of Broadcasters banned condom advertisements from national television; this policy remained in place until 1979. After it was discovered in the early 1980s that AIDS can be a sexually transmitted infection, the use of condoms was encouraged to prevent transmission of HIV. Despite opposition by some political, religious, and other figures, national condom promotion campaigns occurred in the U.S. and Europe. These campaigns increased condom use significantly. Due to increased demand and greater social acceptance, condoms began to be sold in a wider variety of retail outlets, including in supermarkets and in discount department stores such as Walmart. Condom sales increased every year until 1994, when media attention to the AIDS pandemic began to decline. The phenomenon of decreasing use of condoms as disease preventatives has been called prevention fatigue or condom fatigue. Observers have cited condom fatigue in both Europe and North America. As one response, manufacturers have changed the tone of their advertisements from scary to humorous. New developments continued to occur in the condom market, with the first polyurethane condom—branded Avanti and produced by the manufacturer of Durex—introduced in the 1990s. Worldwide condom use is expected to continue to grow: one study predicted that developing nations would need 18.6 billion condoms by 2015. , condoms are available inside prisons in Canada, most of the European Union, Australia, Brazil, Indonesia, South Africa, and the US states of Vermont (on September 17, 2013, the Californian Senate approved a bill for condom distribution inside the state's prisons, but the bill was not yet law at the time of approval). The global condom market was estimated at US$9.2 billion in 2020. Etymology and other terms The term condom first appears in the early 18th century: early forms include condum (1706 and 1717), condon (1708) and cundum (1744). The word's etymology is unknown. In popular tradition, the invention and naming of the condom came to be attributed to an associate of England's King Charles II, one "Dr. Condom" or "Earl of Condom". There is however no evidence of the existence of such a person, and condoms had been used for over one hundred years before King Charles II acceded to the throne in 1660. 
A variety of unproven Latin etymologies have been proposed, including words for "receptacle", "house", and "scabbard or case". It has also been speculated to be from the Italian word guantone, derived from guanto, meaning glove. William E. Kruck wrote an article in 1981 concluding that, "As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology." Modern dictionaries may also list the etymology as "unknown".

Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics, or rubbers. In Britain they may be called French letters or rubber johnnies. Additionally, condoms may be referred to using the manufacturer's name.

Society and culture
Some moral and scientific criticism of condoms exists despite the many benefits of condoms agreed on by scientific consensus and sexual health experts. Condom usage is typically recommended for new couples who have yet to develop full trust in their partner with regard to STDs. Established couples, on the other hand, have few concerns about STDs, and can use other methods of birth control such as the pill, which does not act as a barrier to intimate sexual contact. How polarized the debate over condom usage becomes also depends on the target group at which a given argument is directed: age and the question of a stable partner are notable factors, as is the distinction between heterosexuals and homosexuals, who have different kinds of sex and face different risk consequences and factors.

Among the prime objections to condom usage is the blocking of erotic sensation, or the intimacy that barrier-free sex provides. As the condom is held tightly to the skin of the penis, it diminishes the delivery of stimulation through rubbing and friction. Condom proponents claim this has the benefit of making sex last longer, by diminishing sensation and delaying male ejaculation. Those who promote condom-free heterosexual sex (slang: "bareback") claim that the condom puts a barrier between partners, diminishing what is normally a highly sensual, intimate, and spiritual connection between partners.

Religious
The United Church of Christ (UCC), a Reformed denomination of the Congregationalist tradition, promotes the distribution of condoms in churches and faith-based educational settings. Michael Shuenemeyer, a UCC minister, has stated that "The practice of safer sex is a matter of life and death. People of faith make condoms available because we have chosen life so that we and our children may live."

On the other hand, the Roman Catholic Church opposes all kinds of sexual acts outside of marriage, as well as any sexual act in which the chance of successful conception has been reduced by direct and intentional acts (for example, surgery to prevent conception) or foreign objects (for example, condoms). The use of condoms to prevent STI transmission is not specifically addressed by Catholic doctrine, and is currently a topic of debate among theologians and high-ranking Catholic authorities. A few, such as Belgian Cardinal Godfried Danneels, believe the Catholic Church should actively support condoms used to prevent disease, especially serious diseases such as AIDS. However, the majority view, including all statements from the Vatican, is that condom-promotion programs encourage promiscuity, thereby actually increasing STI transmission. This view was most recently reiterated in 2009 by Pope Benedict XVI.
The Roman Catholic Church is the largest organized body of any world religion. The church has hundreds of programs dedicated to fighting the AIDS epidemic in Africa, but its opposition to condom use in these programs has been highly controversial.

In a November 2010 interview, Pope Benedict XVI discussed for the first time the use of condoms to prevent STI transmission. He said that the use of a condom can be justified in a few individual cases if the purpose is to reduce the risk of an HIV infection, giving male prostitutes as an example. There was some confusion at first about whether the statement applied only to homosexual prostitutes and thus not to heterosexual intercourse at all. However, Federico Lombardi, spokesman for the Vatican, clarified that it applied to heterosexual and transsexual prostitutes, whether male or female, as well. He did, however, also clarify that the Vatican's principles on sexuality and contraception had not been changed.

Scientific and environmental
More generally, some scientific researchers have expressed objective concern over certain ingredients sometimes added to condoms, notably talc and nitrosamines. Dry dusting powders are applied to latex condoms before packaging to prevent the condom from sticking to itself when rolled up. Previously, talc was used by most manufacturers, but cornstarch is currently the most popular dusting powder. Although problems are rare during normal use, talc is known to be potentially irritating to mucous membranes (such as in the vagina). Cornstarch is generally believed to be safe; however, some researchers have raised concerns over its use as well. Nitrosamines, which are potentially carcinogenic in humans, are believed to be present in a substance used to improve elasticity in latex condoms. A 2001 review stated that humans regularly receive 1,000 to 10,000 times greater nitrosamine exposure from food and tobacco than from condom use and concluded that the risk of cancer from condom use is very low. However, a 2004 study in Germany detected nitrosamines in 29 out of 32 condom brands tested, and concluded that exposure from condoms might exceed the exposure from food by 1.5- to 3-fold.

In addition, the large-scale use of disposable condoms has resulted in concerns over their environmental impact via littering and in landfills, where they can eventually wind up in wildlife environments if not incinerated or otherwise permanently disposed of first. Polyurethane condoms in particular, given that they are a form of plastic, are not biodegradable, and latex condoms take a very long time to break down. Experts, such as AVERT, recommend condoms be disposed of in a garbage receptacle, as flushing them down the toilet (which some people do) may cause plumbing blockages and other problems. Furthermore, the plastic and foil wrappers condoms are packaged in are also not biodegradable. However, the benefits condoms offer are widely considered to offset their small landfill mass. Frequent condom or wrapper disposal in public areas such as parks has been seen as a persistent litter problem. Although latex condoms are biodegradable, they damage the environment when disposed of improperly. According to the Ocean Conservancy, condoms, along with certain other types of trash, cover the coral reefs and smother sea grass and other bottom dwellers. The United States Environmental Protection Agency has also expressed concern that many animals might mistake the litter for food.
Cultural barriers to use In much of the Western world, the introduction of the pill in the 1960s was associated with a decline in condom use. In Japan, oral contraceptives were not approved for use until September 1999, and even then access was more restricted than in other industrialized nations. Perhaps because of this restricted access to hormonal contraception, Japan has the highest rate of condom usage in the world: in 2008, 80% of contraceptive users relied on condoms. Cultural attitudes toward gender roles, contraception, and sexual activity vary greatly around the world, and range from extremely conservative to extremely liberal. But in places where condoms are misunderstood, mischaracterised, demonised, or looked upon with overall cultural disapproval, the prevalence of condom use is directly affected. In less-developed countries and among less-educated populations, misperceptions about how disease transmission and conception work negatively affect the use of condoms; additionally, in cultures with more traditional gender roles, women may feel uncomfortable demanding that their partners use condoms. As an example, Latino immigrants in the United States often face cultural barriers to condom use. A study on female HIV prevention published in the Journal of Sex Health Research asserts that Latino women often lack the attitudes needed to negotiate safe sex due to traditional gender-role norms in the Latino community, and may be afraid to bring up the subject of condom use with their partners. Women who participated in the study often reported that because of the general machismo subtly encouraged in Latino culture, their male partners would be angry or possibly violent at the woman's suggestion that they use condoms. A similar phenomenon has been noted in a survey of low-income American black women; the women in this study also reported a fear of violence at the suggestion to their male partners that condoms be used. A telephone survey conducted by Rand Corporation and Oregon State University, and published in the Journal of Acquired Immune Deficiency Syndromes showed that belief in AIDS conspiracy theories among United States black men is linked to rates of condom use. As conspiracy beliefs about AIDS grow in a given sector of these black men, consistent condom use drops in that same sector. Female use of condoms was not similarly affected. In the African continent, condom promotion in some areas has been impeded by anti-condom campaigns by some Muslim and Catholic clerics. Among the Maasai in Tanzania, condom use is hampered by an aversion to "wasting" sperm, which is given sociocultural importance beyond reproduction. Sperm is believed to be an "elixir" to women and to have beneficial health effects. Maasai women believe that, after conceiving a child, they must have sexual intercourse repeatedly so that the additional sperm aids the child's development. Frequent condom use is also considered by some Maasai to cause impotence. Some women in Africa believe that condoms are "for prostitutes" and that respectable women should not use them. A few clerics even promote the lie that condoms are deliberately laced with HIV. In the United States, possession of many condoms has been used by police to accuse women of engaging in prostitution. The Presidential Advisory Council on HIV/AIDS has condemned this practice and there are efforts to end it. 
Because of the strong desire and social pressure to establish fertility as soon as possible within marriage, Middle Eastern couples who have not yet had children rarely use condoms.

In 2017, India restricted TV advertisements for condoms to the hours between 10 pm and 6 am. Family planning advocates were against this, saying it was liable to "undo decades of progress on sexual and reproductive health".

Major manufacturers
One analyst described the size of the condom market as something that "boggles the mind". Numerous small manufacturers, nonprofit groups, and government-run manufacturing plants exist around the world. Within the condom market, there are several major contributors, among them both for-profit businesses and philanthropic organizations. Most large manufacturers have ties to the business that reach back to the end of the 19th century.

Economics
In the United States condoms usually cost less than US$1.00.

Research
A spray-on condom made of latex is intended to be easier to apply and more successful in preventing the transmission of diseases. The spray-on condom has not reached the market because the drying time could not be reduced below two to three minutes.

The Invisible Condom, developed at Université Laval in Quebec, Canada, is a gel that hardens upon increased temperature after insertion into the vagina or rectum. In the lab, it has been shown to effectively block HIV and herpes simplex virus. The barrier breaks down and liquefies after several hours. The invisible condom remains in the clinical trial phase and has not yet been approved for use.

Also developed in 2005 is a condom treated with an erectogenic compound. The drug-treated condom is intended to help the wearer maintain his erection, which should also help reduce slippage. If approved, the condom would be marketed under the Durex brand; it was still in clinical trials. In 2009, Ansell Healthcare, the makers of Lifestyle condoms, introduced the X2 condom, lubricated with "Excite Gel", which contains the amino acid L-arginine and is intended to improve the strength of the erectile response.

In March 2013, philanthropist Bill Gates offered US$100,000 grants through his foundation for a condom design that "significantly preserves or enhances pleasure" to encourage more males to adopt the use of condoms for safer sex. The grant information stated: "The primary drawback from the male perspective is that condoms decrease pleasure as compared to no condom, creating a trade-off that many men find unacceptable, particularly given that the decisions about use must be made just prior to intercourse. Is it possible to develop a product without this stigma, or better, one that is felt to enhance pleasure?" In November of the same year, 11 research teams were selected to receive the grant money.

External links
"Sheathing Cupid's Arrow: the Oldest Artificial Contraceptive May Be Ripe for a Makeover", The Economist, February 2014.
5375
https://en.wikipedia.org/wiki/Country%20code
Country code
A country code is a short alphanumeric identification code for countries and dependent areas. Its primary use is in data processing and communications. Several identification systems have been developed. The term country code frequently refers to ISO 3166-1 alpha-2, as well as the telephone country code, which is embodied in the E.164 recommendation by the International Telecommunication Union (ITU).

ISO 3166-1
The standard ISO 3166-1 defines short identification codes for most countries and dependent areas:
ISO 3166-1 alpha-2: two-letter code
ISO 3166-1 alpha-3: three-letter code
ISO 3166-1 numeric: three-digit code
The two-letter codes are used as the basis for other codes and applications, for example, for ISO 4217 currency codes (with deviations) and for country code top-level domain names (ccTLDs) on the Internet: list of Internet TLDs. Other applications are defined in ISO 3166-1 alpha-2.

ITU country codes
In telecommunication, a country code, or international subscriber dialing (ISD) code, is a telephone number prefix used in international direct dialing (IDD) and for destination routing of telephone calls to a country other than the caller's. A country or region with an autonomous telephone administration must apply for membership in the International Telecommunication Union (ITU) to participate in the international public switched telephone network (PSTN). Country codes are defined by the ITU-T section of the ITU in standards E.123 and E.164. Country codes constitute the international telephone numbering plan, and are dialed only when calling a telephone number in another country. They are dialed before the national telephone number. International calls require at least one additional prefix to be dialed before the country code, to connect the call to international circuits: the international call prefix. When telephone numbers are printed, the international form is indicated by a plus sign (+) in front of the complete international telephone number, per ITU recommendation E.164 (a short worked example of this format is given at the end of this article).

Other country codes
European Union: Before the 2004 EU enlargement the EU used the UN Road Traffic Conventions license plate codes. Since then, it has used the ISO 3166-1 alpha-2 code, but with two modifications:
  EL for Greece (instead of GR)
  (formerly) UK for United Kingdom (instead of GB)
The Nomenclature des unités territoriales statistiques (Nomenclature of territorial units for statistics, NUTS) of the European Union, mostly focusing on subdivisions of the EU member states
FIFA (Fédération Internationale de Football Association) assigns a three-letter code (dubbed FIFA Trigramme) to each of its member and non-member countries: List of FIFA country codes
Federal Information Processing Standard (FIPS) 10-4 defined two-letter codes used by the U.S. government and in the CIA World Factbook: list of FIPS country codes. On September 2, 2008, FIPS 10-4 was one of ten standards withdrawn by NIST as a Federal Information Processing Standard.
The Bureau of Transportation Statistics, part of the United States Department of Transportation (US DOT), maintains its own list of codes, so-called World Area Codes (WAC), for state and country codes.
GOST 7.67: country codes in Cyrillic from the GOST standards committee
From the International Civil Aviation Organization (ICAO):
  The national prefixes used in aircraft registration numbers
  Location prefixes in four-character ICAO airport codes
International Olympic Committee (IOC) three-letter codes used in sporting events: list of IOC country codes
From the International Telecommunication Union (ITU):
  the E.212 mobile country codes (MCC), for mobile/wireless phone addresses
  the first few characters of call signs of radio stations (maritime, aeronautical, amateur radio, broadcasting, and so on), which define the country: the ITU prefix
  ITU letter codes for member countries
  ITU prefixes for amateur and experimental stations: the ITU assigns national telecommunication prefixes for amateur and experimental radio use, so that operators can be identified by their country of origin. These prefixes are legally administered by the national entity to which prefix ranges are assigned.
  Three-digit codes used to identify countries in maritime mobile radio transmissions, known as maritime identification digits
License plates for automobiles:
  Under the 1949 and 1968 United Nations Road Traffic Conventions (distinguishing signs of vehicles in international traffic): List of international license plate codes.
  Diplomatic license plates in the United States, assigned by the U.S. State Department.
North Atlantic Treaty Organization (NATO) used two-letter codes of its own: list of NATO country codes. They were largely borrowed from the FIPS 10-4 codes mentioned above. In 2003 the eighth edition of the Standardisation Agreement (STANAG) adopted the ISO 3166 three-letter codes with one exception (the code for Macedonia). With the ninth edition, NATO is transitioning to four- and six-letter codes based on ISO 3166 with a few exceptions and additions.
United Nations Development Programme (UNDP) also has its own list of trigram country codes
World Intellectual Property Organization (WIPO): WIPO ST.3 gives two-letter codes to countries and regional intellectual property organizations
World Meteorological Organization (WMO) maintains a list of country codes, used in reporting meteorological observations
UIC (the International Union of Railways): UIC Country Codes
The developers of ISO 3166 intended that in time it would replace other coding systems.

Other codings
Country identities may be encoded in the following coding systems:
The initial digits of International Standard Book Numbers (ISBN) are group identifiers for countries, areas, or language regions.
The first three digits of GS1 Company Prefixes, used to identify products (for example, in barcodes), designate (national) numbering agencies.

Lists of country codes by country
A - B - C - D–E - F - G - H–I - J–K - L - M - N - O–Q - R - S - T - U–Z

See also
List of ISO 3166 country codes
ISO 639 language codes
Language code
Numbering scheme

External links
Comparison of various systems
Another comparison: A comparison with ISO, IFS and others with notes
United Nations Region Codes
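To illustrate the printed international format described under "ITU country codes" above, here is a minimal Python sketch. The short table pairing ISO 3166-1 alpha-2 codes with E.164 telephone country codes is an illustrative subset only, and the helper function is a hypothetical example rather than part of any standard library.

# A few ISO 3166-1 alpha-2 codes paired with their E.164 telephone
# country codes (illustrative subset only).
E164_COUNTRY_CODES = {
    "US": "1",    # United States (North American Numbering Plan)
    "GB": "44",   # United Kingdom
    "DE": "49",   # Germany
    "IN": "91",   # India
}

def format_international(iso_alpha2: str, national_number: str) -> str:
    """Printed form: plus sign, country code, then the national number."""
    country_code = E164_COUNTRY_CODES[iso_alpha2.upper()]
    return f"+{country_code} {national_number}"

# 20 7946 0000 is a fictional London number commonly used in examples.
print(format_international("GB", "20 7946 0000"))   # -> +44 20 7946 0000

When actually dialing, the leading plus sign stands for whatever international call prefix the caller's own network requires.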
5376
https://en.wikipedia.org/wiki/Cladistics
Cladistics
Cladistics is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Many of these terms are normally used paraphyletically outside of cladistics, e.g. as a 'grade'; such paraphyletic groups are fruitless to delineate precisely, especially when extinct species are included. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings.

As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group.) Upon finding that the group is paraphyletic in this way, either such excluded groups should be included in the clade, or the group should be abolished.

Branches down to the divergence to the next significant (e.g. extant) sister are considered stem-groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to a tree also adds an additional (named) clade, and a new level on that branch. In particular, extinct groups are always placed on a side branch, without distinguishing whether an actual ancestor of other groupings was found.

The techniques and nomenclature of cladistics have been applied to disciplines other than biology. (See phylogenetic nomenclature.) Cladistic findings pose a difficulty for taxonomy, where the rank and (genus-)naming of established groupings may turn out to be inconsistent. Cladistics is now the most commonly used method to classify organisms.

History
The original methods used in cladistic analysis and the school of taxonomy derived from the work of the German entomologist Willi Hennig, who referred to it as phylogenetic systematics (also the title of his 1966 book); the terms "cladistics" and "clade" were popularized by other researchers. Cladistics in the original sense refers to a particular set of methods used in phylogenetic analysis, although it is now sometimes used to refer to the whole field. What is now called the cladistic method appeared as early as 1901 with a work by Peter Chalmers Mitchell for birds and subsequently by Robert John Tillyard (for insects) in 1921, and W. Zimmermann (for plants) in 1943. The term "clade" was introduced in 1958 by Julian Huxley after having been coined by Lucien Cuénot in 1940, "cladogenesis" in 1958, "cladistic" by Arthur Cain and Harrison in 1960, "cladist" (for an adherent of Hennig's school) by Ernst Mayr in 1965, and "cladistics" in 1966. Hennig referred to his own approach as "phylogenetic systematics".
From the time of his original formulation until the end of the 1970s, cladistics competed as an analytical and philosophical approach to systematics with phenetics and so-called evolutionary taxonomy. Phenetics was championed at this time by the numerical taxonomists Peter Sneath and Robert Sokal, and evolutionary taxonomy by Ernst Mayr. Originally conceived, if only in essence, by Willi Hennig in a book published in 1950, cladistics did not flourish until its translation into English in 1966 (Lewin 1997). Today, cladistics is the most popular method for inferring phylogenetic trees from morphological data.

In the 1990s, the development of effective polymerase chain reaction techniques allowed the application of cladistic methods to biochemical and molecular genetic traits of organisms, vastly expanding the amount of data available for phylogenetics. At the same time, cladistics rapidly became popular in evolutionary biology, because computers made it possible to process large quantities of data about organisms and their characteristics.

Methodology
The cladistic method interprets each shared character state transformation as a potential piece of evidence for grouping. Synapomorphies (shared, derived character states) are viewed as evidence of grouping, while symplesiomorphies (shared ancestral character states) are not. The outcome of a cladistic analysis is a cladogram – a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characters and originally calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used in phylogenetic analyses, and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" but less parsimonious evolutionary models of character state transformation. Cladists contend that these models are unjustified because there is no evidence that they recover more "true" or "correct" results from actual empirical data sets.

Every cladogram is based on a particular dataset analyzed with a particular method. Datasets are tables consisting of molecular, morphological, ethological and/or other characters and a list of operational taxonomic units (OTUs), which may be genes, individuals, populations, species, or larger taxa that are presumed to be monophyletic and therefore to form, all together, one large clade; phylogenetic analysis infers the branching pattern within that clade. Different datasets and different methods, not to mention violations of the mentioned assumptions, often result in different cladograms. Only scientific investigation can show which is more likely to be correct.

Until recently, for example, cladograms in which the turtles branch off before the common ancestor of lizards, crocodilians, and birds were generally accepted as accurate representations of the ancestral relations among these groups. If this phylogenetic hypothesis is correct, then the last common ancestor of turtles and birds lived earlier than the last common ancestor of lizards and birds. Most molecular evidence, however, produces cladograms that group the turtles with the crocodilians and birds. If this is accurate, then the last common ancestor of turtles and birds lived later than the last common ancestor of lizards and birds. Since the two cladograms represent mutually exclusive hypotheses about the evolutionary history, at most one of them is correct.
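To make the parsimony criterion mentioned above concrete, the following is a minimal Python sketch of a small-parsimony count in the style of Fitch's algorithm: it scores one fixed, fully bifurcated tree against a tiny made-up character matrix by counting the minimum number of character-state changes the tree requires. The taxa, characters, and topology are hypothetical illustrations rather than data from any published analysis, and a real analysis compares many candidate trees rather than scoring a single one.

# A small, hypothetical character matrix: rows are OTUs, columns are
# binary character states (0 = one state, 1 = the alternative state).
character_matrix = {
    "taxonA": [0, 1, 0],
    "taxonB": [0, 1, 1],
    "taxonC": [1, 1, 1],
    "taxonD": [1, 1, 1],
}

# One candidate tree, written as nested tuples; leaves are OTU names.
tree = (("taxonA", "taxonB"), ("taxonC", "taxonD"))

def fitch(node, column):
    """Return (possible state set, change count) for one character column."""
    if isinstance(node, str):                      # leaf: its observed state
        return {character_matrix[node][column]}, 0
    left_states, left_changes = fitch(node[0], column)
    right_states, right_changes = fitch(node[1], column)
    shared = left_states & right_states
    if shared:                                     # children can agree: no change here
        return shared, left_changes + right_changes
    return left_states | right_states, left_changes + right_changes + 1

n_characters = len(next(iter(character_matrix.values())))
score = sum(fitch(tree, col)[1] for col in range(n_characters))
print("Parsimony score (total state changes):", score)   # lower = more parsimonious

Scoring every candidate topology in this way and keeping the lowest-scoring tree is, in essence, what parsimony-based cladistic software automates.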
The cladogram to the right represents the current universally accepted hypothesis that all primates, including strepsirrhines like the lemurs and lorises, had a common ancestor all of whose descendants are or were primates, and so form a clade; the name Primates is therefore recognized for this clade. Within the primates, all anthropoids (monkeys, apes, and humans) are hypothesized to have had a common ancestor all of whose descendants are or were anthropoids, so they form the clade called Anthropoidea. The "prosimians", on the other hand, form a paraphyletic taxon. The name Prosimii is not used in phylogenetic nomenclature, which names only clades; the "prosimians" are instead divided between the clades Strepsirhini and Haplorhini, where the latter contains Tarsiiformes and Anthropoidea. Lemurs and tarsiers may have looked closely related to humans, in the sense of being close on the evolutionary tree to humans. However, from the perspective of a tarsier, humans and lemurs would have looked close, in the exact same sense. Cladistics forces a neutral perspective, treating all branches (extant or extinct) in the same manner. It also forces one to try to make statements, and honestly take into account findings, about the exact historic relationships between the groups. Terminology for character states The following terms, coined by Hennig, are used to identify shared or distinct character states among groups: A plesiomorphy ("close form") or ancestral state is a character state that a taxon has retained from its ancestors. When two or more taxa that are not nested within each other share a plesiomorphy, it is a symplesiomorphy (from syn-, "together"). Symplesiomorphies do not mean that the taxa that exhibit that character state are necessarily closely related. For example, Reptilia is traditionally characterized by (among other things) being cold-blooded (i.e., not maintaining a constant high body temperature), whereas birds are warm-blooded. Since cold-bloodedness is a plesiomorphy, inherited from the common ancestor of traditional reptiles and birds, and thus a symplesiomorphy of turtles, snakes and crocodiles (among others), it does not mean that turtles, snakes and crocodiles form a clade that excludes the birds. An apomorphy ("separate form") or derived state is an innovation. It can thus be used to diagnose a clade – or even to help define a clade name in phylogenetic nomenclature. Features that are derived in individual taxa (a single species or a group that is represented by a single terminal in a given phylogenetic analysis) are called autapomorphies (from auto-, "self"). Autapomorphies express nothing about relationships among groups; clades are identified (or defined) by synapomorphies (from syn-, "together"). For example, the possession of digits that are homologous with those of Homo sapiens is a synapomorphy within the vertebrates. The tetrapods can be singled out as consisting of the first vertebrate with such digits homologous to those of Homo sapiens together with all descendants of this vertebrate (an apomorphy-based phylogenetic definition). Importantly, snakes and other tetrapods that do not have digits are nonetheless tetrapods: other characters, such as amniotic eggs and diapsid skulls, indicate that they descended from ancestors that possessed digits which are homologous with ours. 
A character state is homoplastic or "an instance of homoplasy" if it is shared by two or more organisms but is absent from their common ancestor or from a later ancestor in the lineage leading to one of the organisms. It is therefore inferred to have evolved by convergence or reversal. Both mammals and birds are able to maintain a high constant body temperature (i.e., they are warm-blooded). However, the accepted cladogram explaining their significant features indicates that their common ancestor is in a group lacking this character state, so the state must have evolved independently in the two clades. Warm-bloodedness is separately a synapomorphy of mammals (or a larger clade) and of birds (or a larger clade), but it is not a synapomorphy of any group including both these clades. Hennig's Auxiliary Principle states that shared character states should be considered evidence of grouping unless they are contradicted by the weight of other evidence; thus, homoplasy of some feature among members of a group may only be inferred after a phylogenetic hypothesis for that group has been established. The terms plesiomorphy and apomorphy are relative; their application depends on the position of a group within a tree. For example, when trying to decide whether the tetrapods form a clade, an important question is whether having four limbs is a synapomorphy of the earliest taxa to be included within Tetrapoda: did all the earliest members of the Tetrapoda inherit four limbs from a common ancestor, whereas all other vertebrates did not, or at least not homologously? By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features. It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree. Phylogenetics uses various forms of parsimony to decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence. Terminology for taxa Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states. These are compared in the table below. Criticism Cladistics, either generally or in specific applications, has been criticized from its beginnings. Decisions as to whether particular character states are homologous, a precondition of their being synapomorphies, have been challenged as involving circular reasoning and subjective judgements. Of course, the potential unreliability of evidence is a problem for any systematic method, or for that matter, for any empirical scientific endeavor at all. 
Transformed cladistics arose in the late 1970s in an attempt to resolve some of these problems by removing a priori assumptions about phylogeny from cladistic analysis, but it has remained unpopular.

Issues

Ancestors
The cladistic method does not identify fossil species as actual ancestors of a clade. Instead, fossil taxa are identified as belonging to separate extinct branches. While a fossil species could be the actual ancestor of a clade, there is no way to know that. Therefore, a more conservative hypothesis is that the fossil taxon is related to other fossil and extant taxa, as implied by the pattern of shared apomorphic features.

Extinction status
An otherwise extinct group with any extant descendants is not considered (literally) extinct, and for instance does not have a date of extinction.

Hybridization, interbreeding
Anything having to do with biology and sex is complicated and messy, and cladistics is no exception. Many species reproduce sexually, and are capable of interbreeding for millions of years. Worse, during such a period, many branches may have radiated, and it may take hundreds of millions of years for them to have whittled down to just two. Only then can one theoretically assign proper last common ancestors of groupings which do not inadvertently include earlier branches. The process of true cladistic bifurcation can thus take much longer than is usually appreciated. In practice, for recent radiations, cladistically guided findings only give a coarse impression of the complexity; a more detailed account will give details about fractions of introgression between groupings, and even geographic variations thereof. This has been used as an argument for the use of paraphyletic groupings, but typically other reasons are quoted.

Horizontal gene transfer
Horizontal gene transfer is the mobility of genetic information between different organisms that can have immediate or delayed effects for the recipient host. There are several processes in nature which can cause horizontal gene transfer. This typically does not directly interfere with the ancestry of the organism, but it can complicate the determination of that ancestry. On another level, one can map the horizontal gene transfer processes themselves, by determining the phylogeny of the individual genes using cladistics.

Naming stability
If mutual relationships are unclear, there are many possible trees. Assigning names to each possible clade may not be prudent. Furthermore, established names are discarded in cladistics, or alternatively carry connotations which may no longer hold, such as when additional groups are found to have emerged in them. Naming changes are the direct result of changes in the recognition of mutual relationships, which are often still in flux, especially for extinct species. Hanging on to older naming and/or connotations is counter-productive, as they typically do not reflect actual mutual relationships precisely. For example, Archaea, Asgard archaea, protists, slime molds, worms, invertebrata, fishes, reptilia, monkeys, Ardipithecus, Australopithecus, and Homo erectus all contain Homo sapiens cladistically, in their sensu lato meaning. For originally extinct stem groups, sensu lato generally means generously keeping previously included groups, which then may come to include even living species. A pruned sensu stricto meaning is often adopted instead, but the group would then need to be restricted to a single branch on the stem. Other branches then get their own name and level.
This is commensurate to the fact that more senior stem branches are in fact closer related to the resulting group than the more basal stem branches; that those stem branches only may have lived for a short time does not affect that assessment in cladistics. In disciplines other than biology The comparisons used to acquire data on which cladograms can be based are not limited to the field of biology. Any group of individuals or classes that are hypothesized to have a common ancestor, and to which a set of common characteristics may or may not apply, can be compared pairwise. Cladograms can be used to depict the hypothetical descent relationships within groups of items in many different academic realms. The only requirement is that the items have characteristics that can be identified and measured. Anthropology and archaeology: Cladistic methods have been used to reconstruct the development of cultures or artifacts using groups of cultural traits or artifact features. Comparative mythology and folktale use cladistic methods to reconstruct the protoversion of many myths. Mythological phylogenies constructed with mythemes clearly support low horizontal transmissions (borrowings), historical (sometimes Palaeolithic) diffusions and punctuated evolution. They also are a powerful way to test hypotheses about cross-cultural relationships among folktales. Literature: Cladistic methods have been used in the classification of the surviving manuscripts of the Canterbury Tales, and the manuscripts of the Sanskrit Charaka Samhita. Historical linguistics: Cladistic methods have been used to reconstruct the phylogeny of languages using linguistic features. This is similar to the traditional comparative method of historical linguistics, but is more explicit in its use of parsimony and allows much faster analysis of large datasets (computational phylogenetics). Textual criticism or stemmatics: Cladistic methods have been used to reconstruct the phylogeny of manuscripts of the same work (and reconstruct the lost original) using distinctive copying errors as apomorphies. This differs from traditional historical-comparative linguistics in enabling the editor to evaluate and place in genetic relationship large groups of manuscripts with large numbers of variants that would be impossible to handle manually. It also enables parsimony analysis of contaminated traditions of transmission that would be impossible to evaluate manually in a reasonable period of time. Astrophysics infers the history of relationships between galaxies to create branching diagram hypotheses of galaxy diversification. See also Bioinformatics Biomathematics Coalescent theory Common descent Glossary of scientific naming Language family Patrocladogram Phylogenetic network Scientific classification Stratocladistics Subclade Systematics Three-taxon analysis Tree model Tree structure Notes and references Bibliography Available free online at Gallica (No direct URL). This is the paper credited by for the first use of the term 'clade'. responding to . Translated from manuscript in German eventually published in 1982 (Phylogenetische Systematik, Verlag Paul Parey, Berlin). d'Huy, Julien (2012b), "Le motif de Pygmalion : origine afrasienne et diffusion en Afrique". Sahara, 23: 49-59 . d'Huy, Julien (2013a), "Polyphemus (Aa. Th. 1137)." "A phylogenetic reconstruction of a prehistoric tale". Nouvelle Mythologie Comparée / New Comparative Mythology 1, d'Huy, Julien (2013c) "Les mythes évolueraient par ponctuations". 
Mythologie française, 252, 2013c: 8-12. d'Huy, Julien (2013d) "A Cosmic Hunt in the Berber sky : a phylogenetic reconstruction of Palaeolithic mythology". Les Cahiers de l'AARS, 15, 2013d: 93-106. Reissued 1997 in paperback. Includes a reprint of Mayr's 1974 anti-cladistics paper at pp. 433–476, "Cladistic analysis or cladistic classification." This is the paper to which is a response. . Tehrani, Jamshid J., 2013, "The Phylogeny of Little Red Riding Hood", PLOS ONE, 13 November. External links OneZoom: Tree of Life – all living species as intuitive and zoomable fractal explorer (responsive design) Willi Hennig Society Cladistics (scholarly journal of the Willi Hennig Society) Phylogenetics Evolutionary biology Zoology Philosophy of biology
5377
https://en.wikipedia.org/wiki/Calendar
Calendar
A calendar is a system of organizing days. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single and specific day within such a system. A calendar is also a physical record (often paper) of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills.

Periods in a calendar (such as years and months) are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term.

Etymology
The term calendar is taken from Latin kalendae, the term for the first day of the month in the Roman calendar, related to the verb calare, 'to call out', referring to the "calling" of the new moon when it was first seen. Latin calendarium meant 'account book, register' (as accounts were settled and debts were collected on the calends of each month). The Latin term was adopted into Old French and from there into Middle English by the 13th century (the spelling calendar is early modern).

History
The courses of the sun and the moon are the most salient regularly recurring natural events useful for timekeeping, and in pre-modern societies around the world lunation and the year were most commonly used as time units. Nevertheless, the Roman calendar contained remnants of a very ancient pre-Etruscan 10-month solar year. The first recorded physical calendars, dependent on the development of writing in the Ancient Near East, are the Bronze Age Egyptian and Sumerian calendars.

During the Vedic period, India developed a sophisticated timekeeping methodology and calendars for Vedic rituals. According to Yukio Ohashi, the Vedanga calendar in ancient India was based on astronomical studies during the Vedic Period and was not derived from other cultures.

A large number of calendar systems in the Ancient Near East were based on the Babylonian calendar dating from the Iron Age, among them the calendar system of the Persian Empire, which in turn gave rise to the Zoroastrian calendar and the Hebrew calendar. A great number of Hellenic calendars were developed in Classical Greece, and during the Hellenistic period they gave rise to the ancient Roman calendar and to various Hindu calendars.

Calendars in antiquity were lunisolar, depending on the introduction of intercalary months to align the solar and the lunar years. This was mostly based on observation, but there may have been early attempts to model the pattern of intercalation algorithmically, as evidenced in the fragmentary 2nd-century Coligny calendar. The Roman calendar was reformed by Julius Caesar in 46 BC. His "Julian" calendar was no longer dependent on the observation of the new moon, but followed an algorithm of introducing a leap day every four years. This created a dissociation of the calendar month from lunation. The Gregorian calendar, introduced in 1582, corrected most of the remaining difference between the Julian calendar and the solar year.

The Islamic calendar is based on the prohibition of intercalation (nasi') by Muhammad, in Islamic tradition dated to a sermon given on 9 Dhu al-Hijjah AH 10 (Julian date: 6 March 632). This resulted in an observation-based lunar calendar that shifts relative to the seasons of the solar year.
There have been several modern proposals for reform of the modern calendar, such as the World Calendar, the International Fixed Calendar, the Holocene calendar, and the Hanke-Henry Permanent Calendar. Such ideas are mooted from time to time, but have failed to gain traction because of the loss of continuity and the massive upheaval that implementing them would involve, as well as their effect on cycles of religious activity. Systems A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a full calendar system; neither is a system to name the days within a year without a system for identifying the years. The simplest calendar system just counts time periods from a reference date. This applies for the Julian day or Unix Time. Virtually the only possible variation is using a different reference date, in particular, one less distant in the past to make the numbers smaller. Computations in these systems are just a matter of addition and subtraction. Other calendars have one (or multiple) larger units of time. Calendars that contain one level of cycles: week and weekday – this system (without year, the week number keeps on increasing) is not very common year and ordinal date within the year, e.g., the ISO 8601 ordinal date system Calendars with two levels of cycles: year, month, and day – most systems, including the Gregorian calendar (and its very similar predecessor, the Julian calendar), the Islamic calendar, the Solar Hijri calendar and the Hebrew calendar year, week, and weekday – e.g., the ISO week date Cycles can be synchronized with periodic phenomena: Lunar calendars are synchronized to the motion of the Moon (lunar phases); an example is the Islamic calendar. Solar calendars are based on perceived seasonal changes synchronized to the apparent motion of the Sun; an example is the Persian calendar. Lunisolar calendars are based on a combination of both solar and lunar reckonings; examples include the traditional calendar of China, the Hindu calendar in India and Nepal, and the Hebrew calendar. The week cycle is an example of one that is not synchronized to any external phenomenon (although it may have been derived from lunar phases, beginning anew every month). Very commonly a calendar includes more than one type of cycle or has both cyclic and non-cyclic elements. Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia. Solar Solar calendars assign a date to each solar day. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may be allowed to vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day. Lunar Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar. 
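As a brief illustration of the reference-date systems described under Systems above (such as the Julian day and Unix time), the following Python sketch, using only the standard library, shows that computations in such systems really do reduce to addition and subtraction; the sample dates are arbitrary.

import datetime

# Unix time counts seconds from the reference date 1970-01-01 00:00 UTC.
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
moment = datetime.datetime(2000, 1, 1, tzinfo=datetime.timezone.utc)
print(int((moment - epoch).total_seconds()))   # 946684800 seconds since the epoch

# Day counts work the same way: subtracting two day numbers gives the
# number of days between the dates (9497 days here).
d1 = datetime.date(2000, 1, 1).toordinal()     # proleptic Gregorian day number
d2 = datetime.date(2026, 1, 1).toordinal()
print(d2 - d1)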
Alexander Marshack, in a controversial reading, believed that marks on a bone baton () represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar. Lunisolar A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendar are Hindu calendar and Buddhist calendar that are popular in South Asia and Southeast Asia. Another example is the Hebrew calendar, which uses a 19-year cycle. Subdivisions Nearly all calendar systems group consecutive days into "months" and also into "years". In a solar calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week. Because the number of days in the tropical year is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even if a calendar is solar, but not lunar, the year cannot be divided entirely into months that never vary in length. Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, a calendar based on the reign of their current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito. Other types Arithmetical and astronomical An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult. An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years. After then, the rules would need to be modified from observations made since the invention of the calendar. Complete and incomplete Calendars may be either complete or incomplete. Complete calendars provide a way of naming each consecutive day, while incomplete calendars do not. The early Roman calendar, which had no way of designating the days of the winter months other than to lump them together as "winter", is an example of an incomplete calendar, while the Gregorian calendar is an example of a complete calendar. 
Usage The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious, or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. Also, a calendar may, by identifying a day, provide other useful information about the day such as its season. Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. In the modern world, timekeepers can show time, date, and weekday. Some may also show the lunar phase. Gregorian The Gregorian calendar is the de facto international standard and is used almost everywhere in the world for civil purposes. The widely used solar aspect is a cycle of leap days in a 400-year cycle designed to keep the duration of the year aligned with the solar year. There is a lunar aspect which approximates the position of the moon during the year, and is used in the calculation of the date of Easter. Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days). The calendar was introduced in 1582 as a refinement to the Julian calendar, which had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period, its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted for the sake of convenience in international trade. The last European country to adopt it was Greece, in 1923. The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for Anno Domini) or CE (for Common Era or Christian Era). Religious The most important use of pre-modern calendars is keeping track of the liturgical year and the observation of religious feast days. While the Gregorian calendar is itself historically motivated to the calculation of the Easter date, it is now in worldwide secular use as the de facto standard. Alongside the use of the Gregorian calendar for secular matters, there remain several calendars in use for religious purposes. Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church and generally include the liturgical seasons of Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost). Some Christian calendars do not include Ordinary Time and every day falls into a denominated season. Eastern Christians, including the Orthodox Church, use the Julian calendar. The Islamic calendar or Hijri calendar is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar) and used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. 
Its epoch is the Hijra (corresponding to AD 622). With an annual drift of 11 or 12 days, the seasonal relation repeats approximately every 33 Islamic years.

Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendars, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states. The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar. Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the Sūrya Siddhānta and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century).

The Hebrew calendar is used by Jews worldwide for religious and cultural affairs. It also influences civil matters in Israel (such as national holidays) and can be used in business dealings (such as for the dating of cheques).

Followers of the Baháʼí Faith use the Baháʼí calendar. The Baháʼí calendar, also known as the Badi calendar, was first established by the Bab in the Kitab-i-Asma. It is a purely solar calendar and comprises 19 months of nineteen days each.

National
The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes. The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey, and Iran) and the diaspora; the first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar. The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar.

Fiscal
A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts, and taxation. It is a set of 12 months that may start on any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on the Diwali festival and end it the day before the next year's Diwali festival.

In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year: January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, and so on. Such a calendar normally needs to add a 53rd week every 5th or 6th year, which might be added to December or might not be, depending on how the organization uses those dates. There is an international standard way to do this (the ISO week). The ISO week starts on a Monday and ends on a Sunday, and week 1 is always the week that contains 4 January in the Gregorian calendar.
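The 400-year Gregorian leap-day cycle described under Usage above, and the ISO week convention just mentioned, can both be checked with a short Python sketch using only the standard library; the sample dates are arbitrary.

import datetime

def is_gregorian_leap(year: int) -> bool:
    # A year is a leap year if divisible by 4, except century years,
    # which are leap years only when also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

leap_days = sum(is_gregorian_leap(y) for y in range(2000, 2400))
print(leap_days)                 # 97 leap days per 400-year cycle
print(365 + leap_days / 400)     # 365.2425, the average Gregorian year length

# ISO 8601 week date: weeks run Monday to Sunday, and week 1 is the week
# that contains 4 January.
iso_year, iso_week, iso_weekday = datetime.date(2026, 1, 4).isocalendar()
print(iso_year, iso_week, iso_weekday)   # 4 January always falls in week 1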
Formats The term calendar applies not only to a given scheme of timekeeping but also to a specific record or device displaying such a scheme, for example, an appointment book in the form of a pocket calendar (or personal organizer), a desktop calendar, a wall calendar, etc. In a paper calendar, one or two sheets can show a single day, a week, a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is for multiple days it shows a conversion table to convert from weekday to date and back. With a special pointing device, or by crossing out past days, it may indicate the current date and weekday. This is the most common usage of the word. In the US, Sunday is considered the first day of the week and so appears on the far left, with Saturday, the last day of the week, appearing on the far right. In Britain, the weekend may appear at the end of the week, so the first day is Monday and the last day is Sunday. The US calendar display is also used in Britain. It is common to display the Gregorian calendar in separate monthly grids of seven columns (from Monday to Sunday, or Sunday to Saturday depending on which day is considered to start the week – this varies according to country) and five to six rows (or rarely, four rows when the month of February contains 28 days in common years beginning on the first day of the week), with the day of the month numbered in each cell, beginning with 1. The sixth row is sometimes eliminated by marking 23/30 and 24/31 together as necessary. When working with weeks rather than months, a continuous format is sometimes more convenient, where no blank cells are inserted to ensure that the first day of a new month begins on a fresh row. Software Calendaring software provides users with an electronic version of a calendar, and may additionally provide an appointment book, address book, or contact list. Calendaring is a standard feature of many PDAs, EDAs, and smartphones. The software may be a local package designed for individual use (e.g., Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or may be a networked package that allows for the sharing of information between users (e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server). See also General Roman Calendar List of calendars Advent calendar Calendar reform Calendrical calculation Docket (court) History of calendars Horology List of international common standards List of unofficial observances by date Real-time clock (RTC), which underlies the Calendar software on modern computers. Unit of time References Citations Sources Further reading External links Calendar converter, including all major civil, religious and technical calendars. Units of time
5378
https://en.wikipedia.org/wiki/Physical%20cosmology
Physical cosmology
Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood. Physical cosmology, as it is now understood, began with the development in 1915 of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations. Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations. Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quantum mechanics, and plasma physics. Subject history Modern cosmology developed along tandem tracks of theory and observation. In 1916, Albert Einstein published his theory of general relativity, which provided a unified description of gravity as a geometric property of space and time. At the time, Einstein believed in a static universe, but found that his original formulation of the theory did not permit it. This is because masses distributed throughout the universe gravitationally attract, and move toward each other over time. However, he realized that his equations permitted the introduction of a constant term which could counteract the attractive force of gravity on the cosmic scale. Einstein published his first paper on relativistic cosmology in 1917, in which he added this cosmological constant to his field equations in order to force them to model a static universe. The Einstein model describes a static universe; space is finite and unbounded (analogous to the surface of a sphere, which has a finite area but no edges). However, this so-called Einstein model is unstable to small perturbations—it will eventually start to expand or contract. It was later realized that Einstein's model was just one of a larger set of possibilities, all of which were consistent with general relativity and the cosmological principle. The cosmological solutions of general relativity were found by Alexander Friedmann in the early 1920s. 
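For reference, the Friedmann equation governing these solutions can be written, in a standard modern form (the notation here is supplied for orientation rather than quoted from the text), as

\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},

where a(t) is the scale factor, ρ the total energy density, k the spatial curvature parameter, and Λ the cosmological constant.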
His equations describe the Friedmann–Lemaître–Robertson–Walker universe, which may expand or contract, and whose geometry may be open, flat, or closed. In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse-square law. Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann–Lemaître–Robertson–Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds proportional to their distance. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing about the types of Cepheid variables. Given the cosmological principle, Hubble's law suggested that the universe was expanding. Two primary explanations were proposed for the expansion. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other explanation was Fred Hoyle's steady state model in which new matter is created as the galaxies move away from each other. In this model, the universe is roughly the same at any point in time. For a number of years, support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Roger Penrose and Stephen Hawking in the 1960s. An alternative view to extend the Big Bang model, suggesting the universe had no beginning or singularity and the age of the universe is infinite, has been presented. In September 2023, astrophysicists questioned the overall current view of the universe, in the form of the Standard Model of Cosmology, based on the latest James Webb Space Telescope studies. Energy of the cosmos The lightest chemical elements, primarily hydrogen and helium, were created during the Big Bang through the process of nucleosynthesis. 
In a sequence of stellar nucleosynthesis reactions, smaller atomic nuclei are then combined into larger atomic nuclei, ultimately forming stable iron group elements such as iron and nickel, which have the highest nuclear binding energies. The net process results in a later release of energy, that is, one occurring after the Big Bang. Such reactions of nuclear particles can lead to sudden energy releases from cataclysmic variable stars such as novae. Gravitational collapse of matter into black holes also powers the most energetic processes, generally seen in the nuclear regions of galaxies, forming quasars and active galaxies. Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle. There is no clear way to define the total energy in the universe using the most widely accepted theory of gravity, general relativity. Therefore, it remains controversial whether the total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not transferred to any other system, so it seems to be permanently lost. On the other hand, some cosmologists argue that energy is conserved in some sense, in keeping with the law of conservation of energy. Different forms of energy may dominate the cosmos—relativistic particles which are referred to as radiation, or non-relativistic particles referred to as matter. Relativistic particles are particles whose rest mass is zero or negligible compared to their kinetic energy, and so move at the speed of light or very close to it; non-relativistic particles have rest mass-energy much higher than their kinetic energy and so move much slower than the speed of light. As the universe expands, both matter and radiation become diluted. However, the energy densities of radiation and matter dilute at different rates. As a particular volume expands, mass-energy density is changed only by the increase in volume, but the energy density of radiation is changed both by the increase in volume and by the increase in the wavelength of the photons that make it up. Thus the energy of radiation becomes a smaller part of the universe's total energy than that of matter as the universe expands (see the scaling relations summarized below). The very early universe is said to have been 'radiation dominated' and radiation controlled the deceleration of expansion. Later, as the average energy per photon becomes roughly 10 eV and lower, matter dictates the rate of deceleration and the universe is said to be 'matter dominated'. The intermediate case is not treated well analytically. As the expansion of the universe continues, matter dilutes even further and the cosmological constant becomes dominant, leading to an acceleration in the universe's expansion. History of the universe The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the Lambda-CDM model. 
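The dilution behaviour described above is conventionally summarized by simple scaling laws in the scale factor a; these are standard textbook relations stated here for reference rather than taken from the text:

\rho_{\mathrm{matter}} \propto a^{-3}, \qquad \rho_{\mathrm{radiation}} \propto a^{-4}, \qquad \rho_{\Lambda} = \text{constant}.

The extra factor of a^{-1} for radiation comes from the redshifting of each photon's wavelength, which is why radiation dominates at early times while matter and, eventually, the cosmological constant dominate later.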
Equations of motion Within the standard cosmological model, the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion, the radiation and matter in the universe cool down and become diluted. At first, the expansion is slowed down by the gravitational attraction of the radiation and matter in the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago. Particle physics in cosmology During the earliest moments of the universe, the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period. As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. The time scale that describes the expansion of the universe is 1/H, with H being the Hubble parameter, which varies with time. The expansion timescale is roughly equal to the age of the universe at each point in time. Timeline of the Big Bang Observations suggest that the universe began around 13.8 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses. Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever. Areas of study Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang. Very early universe The early, hot universe appears to be well explained by the Big Bang from roughly 10^−33 seconds onwards, but there are several problems. One is that there is no compelling reason, using current particle physics, for the universe to be flat, homogeneous, and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. 
The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation. Another major problem in cosmology is what caused the universe to contain far more matter than antimatter. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called baryogenesis. Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967, and they require a violation of the particle physics symmetry, called CP-symmetry, between matter and antimatter. However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry. Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe. Big Bang Theory Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4, and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher, and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and to test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino. Standard model of Big Bang cosmology The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology. Cosmic microwave background The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10^5. 
Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses. Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background. On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way. Formation and evolution of large-scale structure Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy. 
Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include: The Lyman-alpha forest, which allows cosmologists to measure the distribution of neutral atomic hydrogen gas in the early universe, by measuring the absorption of light from distant quasars by the gas. The 21-centimeter absorption line of neutral atomic hydrogen also provides a sensitive test of cosmology. Weak lensing, the distortion of a distant image by gravitational lensing due to dark matter. These will help cosmologists settle the question of when and how structure formed in the universe. Dark matter Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. TeVeS is a version of MOND that can explain gravitational lensing. Dark energy If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate. Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant which allows for life to exist) it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between: Only one universe will ever exist and there is some underlying principle that constrains the CC to the value we observe. Only one universe will ever exist and although there is no underlying principle fixing the CC, we got lucky. 
Lots of universes exist (simultaneously or serially) with a range of CC values, and of course ours is one of the life-supporting ones. Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology. A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, or whether it will eventually reverse, lead to a Big Freeze, or follow some other scenario. Gravitational waves Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs, neutron stars, and black holes; and events such as supernovae, and the formation of the early universe shortly after the Big Bang. In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves, originating from a pair of merging black holes using the Advanced LIGO detectors. On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. Besides LIGO, many other gravitational-wave observatories (detectors) are under construction. Other areas of inquiry Cosmologists also study: Whether primordial black holes were formed in our universe, and what happened to them. Detection of cosmic rays with energies above the GZK cutoff, and whether it signals a failure of special relativity at high energies. The equivalence principle, whether or not Einstein's general theory of relativity is the correct theory of gravitation, and if the fundamental laws of physics are the same everywhere in the universe. See also Accretion Hubble's law Illustris project List of cosmologists Physical ontology Quantum cosmology String cosmology Universal Rotation Curve References Further reading Popular Textbooks Introductory cosmology and general relativity without the full tensor apparatus, deferred until the last part of the book. Modern introduction to cosmology covering the homogeneous and inhomogeneous universe as well as inflation and the CMB. An introductory text, released slightly before the WMAP results. For undergraduates; mathematically gentle with a strong historical focus. An introductory astronomy text. The classic reference for researchers. Cosmology without general relativity. An introduction to cosmology with a thorough discussion of inflation. Discusses the formation of large-scale structures in detail. An introduction including more on general relativity and quantum field theory than most. Strong historical focus. The classic work on large-scale structure and correlation functions. A standard reference for the mathematical formalism. 
External links From groups Cambridge Cosmology – from Cambridge University (public home page) Cosmology 101 – from the NASA WMAP group Center for Cosmological Physics. University of Chicago, Chicago. Origins, Nova Online – Provided by PBS. From individuals Gale, George, "Cosmology: Methodological Debates in the 1930s and 1940s", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.) Madore, Barry F., "Level 5 : A Knowledgebase for Extragalactic Astronomy and Cosmology". Caltech and Carnegie. Pasadena, California. Tyler, Pat, and Phil Newman "Beyond Einstein". Laboratory for High Energy Astrophysics (LHEA) NASA Goddard Space Flight Center. Wright, Ned. "Cosmology tutorial and FAQ". Division of Astronomy & Astrophysics, UCLA. Philosophy of physics Philosophy of time Astronomical sub-disciplines Astrophysics
5382
https://en.wikipedia.org/wiki/Inflation%20%28cosmology%29
Inflation (cosmology)
In physical cosmology, cosmic inflation, cosmological inflation, or just inflation, is a theory of exponential expansion of space in the early universe. The inflationary epoch is believed to have lasted from about 10^−36 seconds to between 10^−33 and 10^−32 seconds after the Big Bang. Following the inflationary period, the universe continued to expand, but at a slower rate. The acceleration of this expansion due to dark energy began after the universe was already over 7.7 billion years old (5.4 billion years ago). Inflation theory was developed in the late 1970s and early 1980s, with notable contributions by several theoretical physicists, including Alexei Starobinsky at Landau Institute for Theoretical Physics, Alan Guth at Cornell University, and Andrei Linde at Lebedev Physical Institute. Alexei Starobinsky, Alan Guth, and Andrei Linde won the 2014 Kavli Prize "for pioneering the theory of cosmic inflation". It was developed further in the early 1980s. It explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed. The detailed particle physics mechanism responsible for inflation is unknown. The basic inflationary paradigm is accepted by most physicists, as a number of inflation model predictions have been confirmed by observation; however, a substantial minority of scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton. In 2002 three of the original architects of the theory were recognized for their major contributions; physicists Alan Guth of M.I.T., Andrei Linde of Stanford, and Paul Steinhardt of Princeton shared the prestigious Dirac Prize "for development of the concept of inflation in cosmology". In 2012 Guth and Linde were awarded the Breakthrough Prize in Fundamental Physics for their invention and development of inflationary cosmology. Overview Around 1930, Edwin Hubble discovered that light from remote galaxies was redshifted; the more remote, the more shifted. This implies that the galaxies are receding from the Earth, with more distant galaxies receding more rapidly, such that galaxies also recede from each other. This expansion of the universe was previously predicted by Alexander Friedmann and Georges Lemaître from the theory of general relativity. It can be understood as a consequence of an initial impulse, which sent the contents of the universe flying apart at such a rate that their mutual gravitational attraction has not reversed their separation. Inflation may provide this initial impulse. According to the Friedmann equations that describe the dynamics of an expanding universe, a fluid with sufficiently negative pressure exerts gravitational repulsion in the cosmological context. A field in a positive-energy false vacuum state could represent such a fluid, and the resulting repulsion would set the universe into exponential expansion. This inflation phase was originally proposed by Alan Guth in 1979 because the exponential expansion could dilute exotic relics, such as magnetic monopoles, that were predicted by grand unified theories at the time. 
This would explain why such relics were not seen. It was quickly realized that such accelerated expansion would resolve the horizon problem and the flatness problem. These problems arise from the notion that to look like it does today, the Universe must have started from very finely tuned, or "special", initial conditions at the Big Bang. Theory An expanding universe generally has a cosmological horizon, which, by analogy with the more familiar horizon caused by the curvature of Earth's surface, marks the boundary of the part of the Universe that an observer can see. Light (or other radiation) emitted by objects beyond the cosmological horizon in an accelerating universe never reaches the observer, because the space in between the observer and the object is expanding too rapidly. The observable universe is one causal patch of a much larger unobservable universe; other parts of the Universe cannot communicate with Earth yet. These parts of the Universe are outside our current cosmological horizon. In the standard hot big bang model, without inflation, the cosmological horizon moves out, bringing new regions into view. Yet as a local observer sees such a region for the first time, it looks no different from any other region of space the local observer has already seen: its background radiation is at nearly the same temperature as the background radiation of other regions, and its space-time curvature is evolving lock-step with the others. This presents a mystery: how did these new regions know what temperature and curvature they were supposed to have? They couldn't have learned it by getting signals, because they were not previously in communication with our past light cone. Inflation answers this question by postulating that all the regions come from an earlier era with a big vacuum energy, or cosmological constant. A space with a cosmological constant is qualitatively different: instead of moving outward, the cosmological horizon stays put. For any one observer, the distance to the cosmological horizon is constant. With exponentially expanding space, two nearby observers are separated very quickly; so much so, that the distance between them quickly exceeds the limits of communications. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous. As the inflationary field slowly relaxes to the vacuum, the cosmological constant goes to zero and space begins to expand normally. The new regions that come into view during the normal expansion phase are exactly the same regions that were pushed out of the horizon during inflation, and so they are at nearly the same temperature and curvature, because they come from the same originally small patch of space. The theory of inflation thus explains why the temperatures and curvatures of different regions are so nearly equal. It also predicts that the total curvature of a space-slice at constant global time is zero. This prediction implies that the total ordinary matter, dark matter and residual vacuum energy in the Universe have to add up to the critical density, and the evidence supports this. More strikingly, inflation allows physicists to calculate the minute differences in temperature of different regions from quantum fluctuations during the inflationary era, and many of these quantitative predictions have been confirmed. 
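The critical density mentioned above follows directly from the Friedmann equation. In standard notation (supplied here for orientation, not quoted from the text),

\rho_{\mathrm{crit}} = \frac{3H^2}{8\pi G}, \qquad \Omega \equiv \frac{\rho}{\rho_{\mathrm{crit}}},

so the inflationary prediction of a spatially flat universe is equivalent to the statement that the total Ω contributed by ordinary matter, dark matter and vacuum energy is very close to 1.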
Space expands In a space that expands exponentially (or nearly exponentially) with time, any pair of free-floating objects that are initially at rest will move apart from each other at an accelerating rate, at least as long as they are not bound together by any force. From the point of view of one such object, the spacetime is something like an inside-out Schwarzschild black hole—each object is surrounded by a spherical event horizon. Once the other object has fallen through this horizon it can never return, and even light signals it sends will never reach the first object (at least so long as the space continues to expand exponentially). In the approximation that the expansion is exactly exponential, the horizon is static and remains a fixed physical distance away. This patch of an inflating universe can be described by the following static metric (written in units where c = 1): ds^2 = −(1 − Λr^2/3) dt^2 + (1 − Λr^2/3)^−1 dr^2 + r^2 dΩ^2. This exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy density that is constant in space and time and proportional to Λ in the above metric. For the case of exactly exponential expansion, the vacuum energy has a negative pressure p equal in magnitude to its energy density ρ; the equation of state is p=−ρ. Inflation is typically not an exactly exponential expansion, but rather quasi- or near-exponential. In such a universe the horizon will slowly grow with time as the vacuum energy density gradually decreases. Few inhomogeneities remain Because the accelerating expansion of space stretches out any initial variations in density or temperature to very large length scales, an essential feature of inflation is that it smooths out inhomogeneities and anisotropies, and reduces the curvature of space. This pushes the Universe into a very simple state in which it is completely dominated by the inflaton field and the only significant inhomogeneities are tiny quantum fluctuations. Inflation also dilutes exotic heavy particles, such as the magnetic monopoles predicted by many extensions to the Standard Model of particle physics. If the Universe were only hot enough to form such particles before a period of inflation, they would not be observed in nature, as they would be so rare that it is quite likely that there are none in the observable universe. Together, these effects are called the inflationary "no-hair theorem" by analogy with the no hair theorem for black holes. The "no-hair" theorem works essentially because the cosmological horizon is no different from a black-hole horizon, except for untestable disagreements about what is on the other side. The interpretation of the no-hair theorem is that the Universe (observable and unobservable) expands by an enormous factor during inflation. In an expanding universe, energy densities generally fall, or get diluted, as the volume of the Universe increases. For example, the density of ordinary "cold" matter (dust) goes down as the inverse of the volume: when linear dimensions double, the energy density goes down by a factor of eight; the radiation energy density goes down even more rapidly as the Universe expands since the wavelength of each photon is stretched (redshifted), in addition to the photons being dispersed by the expansion. When linear dimensions are doubled, the energy density in radiation falls by a factor of sixteen (see the solution of the energy density continuity equation for an ultra-relativistic fluid). During inflation, the energy density in the inflaton field is roughly constant. 
However, the energy density in everything else, including inhomogeneities, curvature, anisotropies, exotic particles, and standard-model particles, is falling, and through sufficient inflation these all become negligible. This leaves the Universe flat and symmetric, and (apart from the homogeneous inflaton field) mostly empty, at the moment inflation ends and reheating begins. Duration A key requirement is that inflation must continue long enough to produce the present observable universe from a single, small inflationary Hubble volume. This is necessary to ensure that the Universe appears flat, homogeneous and isotropic at the largest observable scales. This requirement is generally thought to be satisfied if the Universe expanded by a factor of at least 10^26 during inflation. Reheating Inflation is a period of supercooled expansion, when the temperature drops by a factor of 100,000 or so. (The exact drop is model-dependent, but in the first models it was typically from 10^27 K down to 10^22 K.) This relatively low temperature is maintained during the inflationary phase. When inflation ends the temperature returns to the pre-inflationary temperature; this is called reheating or thermalization because the large potential energy of the inflaton field decays into particles and fills the Universe with Standard Model particles, including electromagnetic radiation, starting the radiation dominated phase of the Universe. Because the nature of the inflaton field is not known, this process is still poorly understood, although it is believed to take place through a parametric resonance. Motivations Inflation resolves several problems in Big Bang cosmology that were discovered in the 1970s. Inflation was first proposed by Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today; he found that a positive-energy false vacuum would, according to general relativity, generate an exponential expansion of space. It was very quickly realised that such an expansion would resolve many other long-standing problems. These problems arise from the observation that to look like it does today, the Universe would have to have started from very finely tuned, or "special" initial conditions at the Big Bang. Inflation attempts to resolve these problems by providing a dynamical mechanism that drives the Universe to this special state, thus making a universe like ours much more likely in the context of the Big Bang theory. Horizon problem The horizon problem is the problem of determining why the Universe appears statistically homogeneous and isotropic in accordance with the cosmological principle. For example, molecules in a canister of gas are distributed homogeneously and isotropically because they are in thermal equilibrium: gas throughout the canister has had enough time to interact to dissipate inhomogeneities and anisotropies. The situation is quite different in the big bang model without inflation, because gravitational expansion does not give the early universe enough time to equilibrate. In a big bang with only the matter and radiation known in the Standard Model, two widely separated regions of the observable universe cannot have equilibrated because they move apart from each other faster than the speed of light and thus have never come into causal contact. In the early Universe, it was not possible to send a light signal between the two regions. Because they have had no interaction, it is difficult to explain why they have the same temperature (are thermally equilibrated). 
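A rough feel for the size of the problem can be obtained with a back-of-the-envelope estimate. The short Python sketch below is an illustration under the simplifying assumption of a flat, matter-dominated universe (it is not a calculation taken from the text); it estimates the angle on today's sky subtended by a region that was causally connected at recombination.

import math

# In a matter-dominated universe the comoving particle horizon grows as sqrt(a),
# so a causally connected patch at recombination, seen from today, subtends an
# angle of roughly sqrt(a_rec) radians, where a_rec is the scale factor at
# recombination (redshift z of about 1100).
a_rec = 1.0 / 1100.0
theta = math.sqrt(a_rec)                           # angle in radians
print(f"about {math.degrees(theta):.1f} degrees")  # roughly 1.7 degrees

Regions of the cosmic microwave background separated by more than a degree or two would therefore never have been in causal contact in a non-inflationary Big Bang, yet they are observed to have almost exactly the same temperature.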
Historically, proposed solutions included the Phoenix universe of Georges Lemaître, the related oscillatory universe of Richard Chase Tolman, and the Mixmaster universe of Charles Misner. Lemaître and Tolman proposed that a universe undergoing a number of cycles of contraction and expansion could come into thermal equilibrium. Their models failed, however, because of the buildup of entropy over several cycles. Misner made the (ultimately incorrect) conjecture that the Mixmaster mechanism, which made the Universe more chaotic, could lead to statistical homogeneity and isotropy. Flatness problem The flatness problem is sometimes called one of the Dicke coincidences (along with the cosmological constant problem). It became known in the 1960s that the density of matter in the Universe was comparable to the critical density necessary for a flat universe (that is, a universe whose large scale geometry is the usual Euclidean geometry, rather than a non-Euclidean hyperbolic or spherical geometry). Therefore, regardless of the shape of the universe the contribution of spatial curvature to the expansion of the Universe could not be much greater than the contribution of matter. But as the Universe expands, the curvature redshifts away more slowly than matter and radiation. Extrapolated into the past, this presents a fine-tuning problem because the contribution of curvature to the Universe must be exponentially small (sixteen orders of magnitude less than the density of radiation at Big Bang nucleosynthesis, for example). This problem is exacerbated by recent observations of the cosmic microwave background that have demonstrated that the Universe is flat to within a few percent. Magnetic-monopole problem The magnetic monopole problem, sometimes called "the exotic-relics problem", says that if the early universe were very hot, a large number of very heavy, stable magnetic monopoles would have been produced. Stable magnetic monopoles are a problem for Grand Unified Theories, which propose that at high temperatures (such as in the early universe) the electromagnetic force, strong, and weak nuclear forces are not actually fundamental forces but arise due to spontaneous symmetry breaking from a single gauge theory. These theories predict a number of heavy, stable particles that have not been observed in nature. The most notorious is the magnetic monopole, a kind of stable, heavy "charge" of magnetic field. Monopoles are predicted to be copiously produced following Grand Unified Theories at high temperature, and they should have persisted to the present day, to such an extent that they would become the primary constituent of the Universe. Not only is that not the case, but all searches for them have failed, placing stringent limits on the density of relic magnetic monopoles in the Universe. A period of inflation that occurs below the temperature where magnetic monopoles can be produced would offer a possible resolution of this problem: Monopoles would be separated from each other as the Universe around them expands, potentially lowering their observed density by many orders of magnitude. Though, as cosmologist Martin Rees has written, "Skeptics about exotic physics might not be hugely impressed by a theoretical argument to explain the absence of particles that are themselves only hypothetical. Preventive medicine can readily seem 100 percent effective against a disease that doesn't exist!" 
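The fine-tuning involved in the flatness problem discussed above can be made explicit using the curvature term of the Friedmann equation; these are standard textbook relations given here for orientation, not quoted from the text:

\Omega - 1 = \frac{kc^2}{a^2 H^2}, \qquad |\Omega - 1| \propto a^2 \ \text{(radiation era)}, \qquad |\Omega - 1| \propto a \ \text{(matter era)}.

Because any departure from Ω = 1 grows with time in a decelerating universe, the near-flatness observed today requires |Ω − 1| to have been extraordinarily small at early epochs, which is precisely the fine-tuning that inflation is invoked to explain.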
History Precursors In the early days of General Relativity, Albert Einstein introduced the cosmological constant to allow a static solution, which was a three-dimensional sphere with a uniform density of matter. Later, Willem de Sitter found a highly symmetric inflating universe, which described a universe with a cosmological constant that is otherwise empty. It was discovered that Einstein's universe is unstable, and that small fluctuations cause it to collapse or turn into a de Sitter universe. In the early 1970s, Zeldovich noticed the flatness and horizon problems of Big Bang cosmology; before his work, cosmology was presumed to be symmetrical on purely philosophical grounds. In the Soviet Union, this and other considerations led Belinski and Khalatnikov to analyze the chaotic BKL singularity in General Relativity. Misner's Mixmaster universe attempted to use this chaotic behavior to solve the cosmological problems, with limited success. False vacuum In the late 1970s, Sidney Coleman applied the instanton techniques developed by Alexander Polyakov and collaborators to study the fate of the false vacuum in quantum field theory. Like a metastable phase in statistical mechanics—water below the freezing temperature or above the boiling point—a quantum field would need to nucleate a large enough bubble of the new vacuum, the new phase, in order to make a transition. Coleman found the most likely decay pathway for vacuum decay and calculated the inverse lifetime per unit volume. He eventually noted that gravitational effects would be significant, but he did not calculate these effects and did not apply the results to cosmology. The universe could have been spontaneously created from nothing (no space, time, nor matter) by quantum fluctuations of metastable false vacuum causing an expanding bubble of true vacuum. Starobinsky inflation In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. These generically lead to curvature-squared corrections to the Einstein–Hilbert action and a form of f(R) modified gravity. The solution to Einstein's equations in the presence of curvature squared terms, when the curvatures are large, leads to an effective cosmological constant. Therefore, he proposed that the early universe went through an inflationary de Sitter era. This resolved the cosmology problems and led to specific predictions for the corrections to the microwave background radiation, corrections that were then calculated in detail. Starobinsky used an action containing a curvature-squared term, R + R^2/(6M^2), which corresponds in the Einstein frame to a plateau-shaped potential of the form V(φ) ∝ (1 − e^(−√(2/3) φ/M_Pl))^2. This results in the observables n_s ≈ 1 − 2/N and r ≈ 12/N^2, where N is the number of e-folds of inflation. Monopole problem In 1978, Zeldovich noted the magnetic monopole problem, which was an unambiguous quantitative version of the horizon problem, this time in a subfield of particle physics, which led to several speculative attempts to resolve it. In 1980 Alan Guth realized that false vacuum decay in the early universe would solve the problem, leading him to propose a scalar-driven inflation. Starobinsky's and Guth's scenarios both predicted an initial de Sitter phase, differing only in mechanistic details. Early inflationary models Guth proposed inflation in January 1981 to explain the nonexistence of magnetic monopoles; it was Guth who coined the term "inflation". At the same time, Starobinsky argued that quantum corrections to gravity would replace the supposed initial singularity of the Universe with an exponentially expanding de Sitter phase. 
In October 1980, Demosthenes Kazanas suggested that exponential expansion could eliminate the particle horizon and perhaps solve the horizon problem, while Sato suggested that an exponential expansion could eliminate domain walls (another kind of exotic relic). In 1981 Einhorn and Sato published a model similar to Guth's and showed that it would resolve the puzzle of the magnetic monopole abundance in Grand Unified Theories. Like Guth, they concluded that such a model not only required fine tuning of the cosmological constant, but also would likely lead to a much too granular universe, i.e., to large density variations resulting from bubble wall collisions. Guth proposed that as the early universe cooled, it was trapped in a false vacuum with a high energy density, which is much like a cosmological constant. As the very early universe cooled it was trapped in a metastable state (it was supercooled), which it could only decay out of through the process of bubble nucleation via quantum tunneling. Bubbles of true vacuum spontaneously form in the sea of false vacuum and rapidly begin expanding at the speed of light. Guth recognized that this model was problematic because it did not reheat properly: when the bubbles nucleated, they did not generate any radiation. Radiation could only be generated in collisions between bubble walls. But if inflation lasted long enough to solve the initial conditions problems, collisions between bubbles became exceedingly rare. In any one causal patch it is likely that only one bubble would nucleate. Slow-roll inflation The bubble collision problem was solved by Linde and independently by Andreas Albrecht and Paul Steinhardt in a model named new inflation or slow-roll inflation (Guth's model then became known as old inflation). In this model, instead of tunneling out of a false vacuum state, inflation occurred by a scalar field rolling down a potential energy hill. When the field rolls very slowly compared to the expansion of the Universe, inflation occurs. However, when the hill becomes steeper, inflation ends and reheating can occur. Effects of asymmetries Eventually, it was shown that new inflation does not produce a perfectly symmetric universe, but that quantum fluctuations in the inflaton are created. These fluctuations form the primordial seeds for all structure created in the later universe. These fluctuations were first calculated by Viatcheslav Mukhanov and G. V. Chibisov in analyzing Starobinsky's similar model. In the context of inflation, they were worked out independently of the work of Mukhanov and Chibisov at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University. The fluctuations were calculated by four groups working separately over the course of the workshop: Stephen Hawking; Starobinsky; Guth and So-Young Pi; and Bardeen, Steinhardt and Turner. Observational status Inflation is a mechanism for realizing the cosmological principle, which is the basis of the standard model of physical cosmology: it accounts for the homogeneity and isotropy of the observable universe. In addition, it accounts for the observed flatness and absence of magnetic monopoles. Since Guth's early work, each of these observations has received further confirmation, most impressively by the detailed observations of the cosmic microwave background made by the Planck spacecraft. This analysis shows that the Universe is flat to within about half a percent, and that it is homogeneous and isotropic to one part in 100,000. 
Inflation predicts that the structures visible in the Universe today formed through the gravitational collapse of perturbations that were formed as quantum mechanical fluctuations in the inflationary epoch. The detailed form of the spectrum of perturbations, called a nearly-scale-invariant Gaussian random field, is very specific and has only two free parameters: the amplitude of the spectrum and the spectral index n_s, which measures the slight deviation from scale invariance predicted by inflation (perfect scale invariance corresponds to the idealized de Sitter universe). A further observable is the tensor-to-scalar ratio r. The simplest inflation models, those without fine-tuning, predict a tensor-to-scalar ratio near 0.1. Inflation predicts that the observed perturbations should be in thermal equilibrium with each other (these are called adiabatic or isentropic perturbations). This structure for the perturbations has been confirmed by the Planck spacecraft, WMAP spacecraft and other cosmic microwave background (CMB) experiments, and galaxy surveys, especially the ongoing Sloan Digital Sky Survey. These experiments have shown that the one part in 100,000 inhomogeneities observed have exactly the form predicted by theory. There is evidence for a slight deviation from scale invariance. The spectral index n_s is one for a scale-invariant Harrison–Zel'dovich spectrum. The simplest inflation models predict that n_s is between 0.92 and 0.98. This is the range that is possible without fine-tuning of the parameters related to energy. From Planck data it can be inferred that n_s = 0.968 ± 0.006, and that the tensor-to-scalar ratio is less than 0.11. These are considered an important confirmation of the theory of inflation. Various inflation theories have been proposed that make radically different predictions, but they generally have much more fine-tuning than should be necessary. As a physical model, however, inflation is most valuable in that it robustly predicts the initial conditions of the Universe based on only two adjustable parameters: the spectral index (that can only change in a small range) and the amplitude of the perturbations. Except in contrived models, this is true regardless of how inflation is realized in particle physics. Occasionally, effects are observed that appear to contradict the simplest models of inflation. The first-year WMAP data suggested that the spectrum might not be nearly scale-invariant, but might instead have a slight curvature. However, the third-year data revealed that the effect was a statistical anomaly. Another effect remarked upon since the first cosmic microwave background satellite, the Cosmic Background Explorer, is that the amplitude of the quadrupole moment of the CMB is unexpectedly low and the other low multipoles appear to be preferentially aligned with the ecliptic plane. Some have claimed that this is a signature of non-Gaussianity and thus contradicts the simplest models of inflation. Others have suggested that the effect may be due to other new physics, foreground contamination, or even publication bias. An experimental program is underway to further test inflation with more precise CMB measurements. In particular, high precision measurements of the so-called "B-modes" of the polarization of the background radiation could provide evidence of the gravitational radiation produced by inflation, and could also show whether the energy scale of inflation predicted by the simplest models (~10^16 GeV) is correct. 
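For orientation, the spectral index and tensor-to-scalar ratio discussed above are usually expressed in terms of the slow-roll parameters ε and η of the inflaton potential; these are standard slow-roll relations, supplied here for reference rather than derived in the text:

n_s \simeq 1 - 6\epsilon + 2\eta, \qquad r \simeq 16\epsilon.

In the simplest single-field models both parameters are small, which is why n_s is predicted to lie just below 1 and r is expected to be modest.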
In March 2014, the BICEP2 team announced the detection of B-mode CMB polarization, which was presented as a confirmation of inflation. The team announced that the tensor-to-scalar power ratio r was between 0.15 and 0.27 (rejecting the null hypothesis; r is expected to be 0 in the absence of inflation). However, on 19 June 2014, lowered confidence in confirming the findings was reported; on 19 September 2014, a further reduction in confidence was reported; and, on 30 January 2015, confidence was lowered still further. By 2018, additional data suggested, with 95% confidence, that r is 0.06 or lower: consistent with the null hypothesis, but still also consistent with many remaining models of inflation. Other potentially corroborating measurements are expected from the Planck spacecraft, although it is unclear if the signal will be visible, or if contamination from foreground sources will interfere. Other forthcoming measurements, such as those of 21 centimeter radiation (radiation emitted and absorbed from neutral hydrogen before the first stars formed), may measure the power spectrum with even greater resolution than the CMB and galaxy surveys, although it is not known if these measurements will be possible or if interference from radio sources on Earth and in the galaxy will be too great. Theoretical status In Guth's early proposal, it was thought that the inflaton was the Higgs field, the field that explains the mass of the elementary particles. It is now believed by some that the inflaton cannot be the Higgs field, although the recent discovery of the Higgs boson has increased the number of works considering the Higgs field as the inflaton. One problem of this identification is the current tension with experimental data at the electroweak scale, which is currently under study at the Large Hadron Collider (LHC). Other models of inflation relied on the properties of Grand Unified Theories. Since the simplest models of grand unification have failed, it is now thought by many physicists that inflation will be included in a supersymmetric theory such as string theory or a supersymmetric grand unified theory. At present, while inflation is understood principally by its detailed predictions of the initial conditions for the hot early universe, the particle physics is largely ad hoc modelling. As such, although predictions of inflation have been consistent with the results of observational tests, many open questions remain. Fine-tuning problem One of the most severe challenges for inflation arises from the need for fine-tuning. In new inflation, the slow-roll conditions must be satisfied for inflation to occur. The slow-roll conditions say that the inflaton potential must be flat (compared to the large vacuum energy) and that the inflaton particles must have a small mass. New inflation requires the Universe to have a scalar field with an especially flat potential and special initial conditions. However, explanations for these fine-tunings have been proposed. For example, classically scale-invariant field theories, where scale invariance is broken by quantum effects, provide an explanation of the flatness of inflationary potentials, as long as the theory can be studied through perturbation theory. Linde proposed a theory known as chaotic inflation in which he suggested that the conditions for inflation were actually satisfied quite generically. Inflation will occur in virtually any universe that begins in a chaotic, high energy state that has a scalar field with unbounded potential energy.
However, in his model the inflaton field necessarily takes values larger than one Planck unit: for this reason, these are often called large field models, and the competing new inflation models are called small field models. In this situation, the predictions of effective field theory are thought to be invalid, as renormalization should cause large corrections that could prevent inflation. This problem has not yet been resolved and some cosmologists argue that the small field models, in which inflation can occur at a much lower energy scale, are better models. While inflation depends on quantum field theory (and the semiclassical approximation to quantum gravity) in an important way, it has not been completely reconciled with these theories. Brandenberger commented on fine-tuning in another situation. The amplitude of the primordial inhomogeneities produced in inflation is directly tied to the energy scale of inflation. This scale is suggested to be around 10^16 GeV, or roughly 10^-3 times the Planck energy. The natural scale is naïvely the Planck scale, so this small value could be seen as another form of fine-tuning (called a hierarchy problem): the energy density given by the scalar potential is down by a factor of order 10^-12 compared to the Planck density. This is not usually considered to be a critical problem, however, because the scale of inflation corresponds naturally to the scale of gauge unification. Eternal inflation In many models, the inflationary phase of the Universe's expansion lasts forever in at least some regions of the Universe. This occurs because inflating regions expand very rapidly, reproducing themselves. Unless the rate of decay to the non-inflating phase is sufficiently fast, new inflating regions are produced more rapidly than non-inflating regions. In such models, most of the volume of the Universe is continuously inflating at any given time. All models of eternal inflation produce an infinite, hypothetical multiverse, typically a fractal. The multiverse theory has created significant dissension in the scientific community about the viability of the inflationary model. Paul Steinhardt, one of the original architects of the inflationary model, introduced the first example of eternal inflation in 1983. He showed that inflation could proceed forever by producing bubbles of non-inflating space filled with hot matter and radiation surrounded by empty space that continues to inflate. The bubbles could not grow fast enough to keep up with the inflation. Later that same year, Alexander Vilenkin showed that eternal inflation is generic. Although new inflation is classically rolling down the potential, quantum fluctuations can sometimes lift it to previous levels. These regions in which the inflaton fluctuates upwards expand much faster than regions in which the inflaton has a lower potential energy, and tend to dominate in terms of physical volume. It has been shown that any inflationary theory with an unbounded potential is eternal. There are well-known theorems showing that this steady state cannot continue forever into the past. Inflationary spacetime, which is similar to de Sitter space, is incomplete without a contracting region. However, unlike de Sitter space, fluctuations in a contracting inflationary space collapse to form a gravitational singularity, a point where densities become infinite. Therefore, it is necessary to have a theory for the Universe's initial conditions. In eternal inflation, regions with inflation have an exponentially growing volume, while regions that are not inflating don't.
This suggests that the volume of the inflating part of the Universe in the global picture is always unimaginably larger than the part that has stopped inflating, even though inflation eventually ends as seen by any single pre-inflationary observer. Scientists disagree about how to assign a probability distribution to this hypothetical anthropic landscape. If the probability of different regions is weighted by volume, one should expect that inflation will never end, or, applying the boundary condition that a local observer exists to observe it, that inflation will end as late as possible. Some physicists believe this paradox can be resolved by weighting observers by their pre-inflationary volume. Others believe that there is no resolution to the paradox and that the multiverse is a critical flaw in the inflationary paradigm. Paul Steinhardt, who first introduced the eternal inflationary model, later became one of its most vocal critics for this reason. Initial conditions Some physicists have tried to avoid the initial conditions problem by proposing models for an eternally inflating universe with no origin. These models propose that while the Universe, on the largest scales, expands exponentially, it was, is and always will be spatially infinite, and has existed, and will exist, forever. Other proposals attempt to describe the ex nihilo creation of the Universe based on quantum cosmology and the following inflation. Vilenkin put forth one such scenario. Hartle and Hawking offered the no-boundary proposal for the initial creation of the Universe in which inflation comes about naturally. Guth described the inflationary universe as the "ultimate free lunch": new universes, similar to our own, are continually produced in a vast inflating background. Gravitational interactions, in this case, circumvent (but do not violate) the first law of thermodynamics (energy conservation) and the second law of thermodynamics (entropy and the arrow of time problem). However, while many accept that this solves the initial conditions problem, some have disputed it, arguing that it is much more likely that the Universe came about by a quantum fluctuation. Don Page was an outspoken critic of inflation because of this anomaly. He stressed that the thermodynamic arrow of time necessitates low entropy initial conditions, which would be highly unlikely. In this view, rather than solving this problem, the inflation theory aggravates it – the reheating at the end of the inflation era increases entropy, making it necessary for the initial state of the Universe to be even more orderly than in other Big Bang theories with no inflation phase. Hawking and Page later found ambiguous results when they attempted to compute the probability of inflation in the Hartle-Hawking initial state. Other authors have argued that, since inflation is eternal, the probability doesn't matter as long as it is not precisely zero: once it starts, inflation perpetuates itself and quickly dominates the Universe. However, Albrecht and Lorenzo Sorbo argued that the probability of an inflationary cosmos, consistent with today's observations, emerging by a random fluctuation from some pre-existent state is much higher than that of a non-inflationary cosmos. This is because the "seed" amount of non-gravitational energy required for the inflationary cosmos is so much less than that for a non-inflationary alternative, which outweighs any entropic considerations.
Another problem that has occasionally been mentioned is the trans-Planckian problem, or trans-Planckian effects. Since the energy scale of inflation and the Planck scale are relatively close, some of the quantum fluctuations that have made up the structure in our universe were smaller than the Planck length before inflation. Therefore, there ought to be corrections from Planck-scale physics, in particular the unknown quantum theory of gravity. Some disagreement remains about the magnitude of this effect: about whether it is just on the threshold of detectability or completely undetectable. Hybrid inflation Another kind of inflation, called hybrid inflation, is an extension of new inflation. It introduces additional scalar fields, so that while one of the scalar fields is responsible for normal slow roll inflation, another triggers the end of inflation: when inflation has continued for sufficiently long, it becomes favorable for the second field to decay into a much lower energy state. In hybrid inflation, one scalar field is responsible for most of the energy density (thus determining the rate of expansion), while another is responsible for the slow roll (thus determining the period of inflation and its termination). Thus fluctuations in the former inflaton would not affect inflation termination, while fluctuations in the latter would not affect the rate of expansion. Therefore, hybrid inflation is not eternal. When the second (slow-rolling) inflaton reaches the bottom of its potential, it changes the location of the minimum of the first inflaton's potential, which leads to a fast roll of the inflaton down its potential, leading to termination of inflation. Relation to dark energy Dark energy is broadly similar to inflation and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, around 10^-12 GeV, roughly 27 orders of magnitude less than the scale of inflation. Inflation and string cosmology The discovery of flux compactifications opened the way for reconciling inflation and string theory. Brane inflation suggests that inflation arises from the motion of D-branes in the compactified geometry, usually towards a stack of anti-D-branes. This theory, governed by the Dirac-Born-Infeld action, is different from ordinary inflation. The dynamics are not completely understood. It appears that special conditions are necessary since inflation occurs in tunneling between two vacua in the string landscape. The process of tunneling between two vacua is a form of old inflation, but new inflation must then occur by some other mechanism. Inflation and loop quantum gravity When investigating the effects the theory of loop quantum gravity would have on cosmology, a loop quantum cosmology model has evolved that provides a possible mechanism for cosmological inflation. Loop quantum gravity assumes a quantized spacetime. If the energy density is larger than can be held by the quantized spacetime, it is thought to bounce back. Alternatives and adjuncts Other models have been advanced that are claimed to explain some or all of the observations addressed by inflation. Big bounce The big bounce hypothesis attempts to replace the cosmic singularity with a cosmic contraction and bounce, thereby explaining the initial conditions that led to the big bang. The flatness and horizon problems are naturally solved in the Einstein-Cartan-Sciama-Kibble theory of gravity, without needing an exotic form of matter or free parameters.
This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical Big Bang singularity, replacing it with a cusp-like bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the Big Bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era. Ekpyrotic and cyclic models The ekpyrotic and cyclic models are also considered adjuncts to inflation. These models solve the horizon problem through an expanding epoch well before the Big Bang, and then generate the required spectrum of primordial density perturbations during a contracting phase leading to a Big Crunch. The Universe passes through the Big Crunch and emerges in a hot Big Bang phase. In this sense they are reminiscent of Richard Chace Tolman's oscillatory universe; in Tolman's model, however, the total age of the Universe is necessarily finite, while in these models this is not necessarily so. Whether the correct spectrum of density fluctuations can be produced, and whether the Universe can successfully navigate the Big Bang/Big Crunch transition, remains a topic of controversy and current research. Ekpyrotic models avoid the magnetic monopole problem as long as the temperature at the Big Crunch/Big Bang transition remains below the Grand Unified Scale, as this is the temperature required to produce magnetic monopoles in the first place. As things stand, there is no evidence of any 'slowing down' of the expansion, but this is not surprising as each cycle is expected to last on the order of a trillion years. String gas cosmology String theory requires that, in addition to the three observable spatial dimensions, additional dimensions exist that are curled up or compactified (see also Kaluza–Klein theory). Extra dimensions appear as a frequent component of supergravity models and other approaches to quantum gravity. This raised the contingent question of why four space-time dimensions became large and the rest became unobservably small. An attempt to address this question, called string gas cosmology, was proposed by Robert Brandenberger and Cumrun Vafa. This model focuses on the dynamics of the early universe considered as a hot gas of strings. Brandenberger and Vafa show that a dimension of spacetime can only expand if the strings that wind around it can efficiently annihilate each other. Each string is a one-dimensional object, and the largest number of dimensions in which two strings will generically intersect (and, presumably, annihilate) is three. Therefore, the most likely number of non-compact (large) spatial dimensions is three. Current work on this model centers on whether it can succeed in stabilizing the size of the compactified dimensions and produce the correct spectrum of primordial density perturbations. 
The original model did not "solve the entropy and flatness problems of standard cosmology", although Brandenberger and coauthors later argued that these problems can be eliminated by implementing string gas cosmology in the context of a bouncing-universe scenario. Varying c Cosmological models employing a variable speed of light (VSL) have been proposed to resolve the horizon problem and provide an alternative to cosmic inflation. In the VSL models, the fundamental constant c, denoting the speed of light in vacuum, is greater in the early universe than its present value, effectively increasing the particle horizon at the time of decoupling sufficiently to account for the observed isotropy of the CMB. Criticisms Since its introduction by Alan Guth in 1980, the inflationary paradigm has become widely accepted. Nevertheless, many physicists, mathematicians, and philosophers of science have voiced criticisms, claiming untestable predictions and a lack of serious empirical support. In 1999, John Earman and Jesús Mosterín published a thorough critical review of inflationary cosmology, concluding, "we do not think that there are, as yet, good grounds for admitting any of the models of inflation into the standard core of cosmology." As pointed out by Roger Penrose from 1986 on, in order to work, inflation requires extremely specific initial conditions of its own, so that the problem (or pseudo-problem) of initial conditions is not solved: "There is something fundamentally misconceived about trying to explain the uniformity of the early universe as resulting from a thermalization process. ... For, if the thermalization is actually doing anything ... then it represents a definite increasing of the entropy. Thus, the universe would have been even more special before the thermalization than after." The problem of specific or "fine-tuned" initial conditions would not have been solved; it would have gotten worse. At a conference in 2015, Penrose said that "inflation isn't falsifiable, it's falsified. ... BICEP did a wonderful service by bringing all the Inflation-ists out of their shell, and giving them a black eye." A recurrent criticism of inflation is that the invoked inflaton field does not correspond to any known physical field, and that its potential energy curve seems to be an ad hoc contrivance to accommodate almost any data obtainable. Paul Steinhardt, one of the founding fathers of inflationary cosmology, has recently become one of its sharpest critics. He calls 'bad inflation' a period of accelerated expansion whose outcome conflicts with observations, and 'good inflation' one compatible with them: "Not only is bad inflation more likely than good inflation, but no inflation is more likely than either ... Roger Penrose considered all the possible configurations of the inflaton and gravitational fields. Some of these configurations lead to inflation ... Other configurations lead to a uniform, flat universe directly – without inflation. Obtaining a flat universe is unlikely overall. Penrose's shocking conclusion, though, was that obtaining a flat universe without inflation is much more likely than with inflation – by a factor of 10 to the googol power!" Together with Anna Ijjas and Abraham Loeb, he wrote articles claiming that the inflationary paradigm is in trouble in view of the data from the Planck satellite. Counter-arguments were presented by Alan Guth, David Kaiser, and Yasunori Nomura and by Andrei Linde, saying that "cosmic inflation is on a stronger footing than ever before".
See also Notes References Sources External links Was Cosmic Inflation The 'Bang' Of The Big Bang?, by Alan Guth, 1997 update 2004 by Andrew Liddle The Growth of Inflation Symmetry, December 2004 Guth's logbook showing the original idea WMAP Bolsters Case for Cosmic Inflation, March 2006 NASA March 2006 WMAP press release Max Tegmark's Our Mathematical Universe (2014), "Chapter 5: Inflation" Physical cosmology Concepts in astronomy Astronomical events 1980 in science
5385
https://en.wikipedia.org/wiki/Candela
Candela
The candela (symbol: cd) is the unit of luminous intensity in the International System of Units (SI). It measures luminous power per unit solid angle emitted by a light source in a particular direction. Luminous intensity is analogous to radiant intensity, but instead of simply adding up the contributions of every wavelength of light in the source's spectrum, the contribution of each wavelength is weighted by the luminous efficiency function, the model of the sensitivity of the human eye to different wavelengths, standardized by the CIE and ISO. A common wax candle emits light with a luminous intensity of roughly one candela. If emission in some directions is blocked by an opaque barrier, the emission would still be approximately one candela in the directions that are not obscured. The word candela is Latin for candle. The old name "candle" is still sometimes used, as in foot-candle and the modern definition of candlepower. Definition The 26th General Conference on Weights and Measures (CGPM) redefined the candela in 2018. The new definition, which took effect on 20 May 2019, is: The candela [...] is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10^12 Hz, Kcd, to be 683 when expressed in the unit lm W−1, which is equal to cd sr W−1, or cd sr kg−1 m−2 s3, where the kilogram, metre and second are defined in terms of h, c and ΔνCs. Explanation The frequency chosen is in the visible spectrum near green, corresponding to a wavelength of about 555 nanometres. The human eye, when adapted for bright conditions, is most sensitive near this frequency. Under these conditions, photopic vision dominates the visual perception of our eyes over the scotopic vision. At other frequencies, more radiant intensity is required to achieve the same luminous intensity, according to the frequency response of the human eye. The luminous intensity for light of a particular wavelength λ is given by Iv(λ) = 683 lm/W · ȳ(λ) · Ie(λ), where Iv(λ) is the luminous intensity, Ie(λ) is the radiant intensity and ȳ(λ) is the photopic luminous efficiency function. If more than one wavelength is present (as is usually the case), one must integrate over the spectrum of wavelengths to get the total luminous intensity. Examples A common candle emits light with roughly 1 cd luminous intensity. A 25 W compact fluorescent light bulb puts out around 1700 lumens; if that light is radiated equally in all directions (i.e. over 4π steradians), it will have an intensity of about 135 cd. Focused into a 20° beam (0.095 steradians), the same light bulb would have an intensity of around 18,000 cd within the beam. History Prior to 1948, various standards for luminous intensity were in use in a number of countries. These were typically based on the brightness of the flame from a "standard candle" of defined composition, or the brightness of an incandescent filament of specific design. One of the best-known of these was the English standard of candlepower. One candlepower was the light produced by a pure spermaceti candle weighing one sixth of a pound and burning at a rate of 120 grains per hour. Germany, Austria and Scandinavia used the Hefnerkerze, a unit based on the output of a Hefner lamp. A better standard for luminous intensity was needed. In 1884, Jules Violle had proposed a standard based on the light emitted by 1 cm2 of platinum at its melting point (or freezing point). The resulting unit of intensity, called the "violle", was roughly equal to 60 English candlepower.
Platinum was convenient for this purpose because it had a high enough melting point, was not prone to oxidation, and could be obtained in pure form. Violle showed that the intensity emitted by pure platinum was strictly dependent on its temperature, and so platinum at its melting point should have a consistent luminous intensity. In practice, realizing a standard based on Violle's proposal turned out to be more difficult than expected. Impurities on the surface of the platinum could directly affect its emissivity, and in addition impurities could affect the luminous intensity by altering the melting point. Over the following half century various scientists tried to make a practical intensity standard based on incandescent platinum. The successful approach was to suspend a hollow shell of thorium dioxide with a small hole in it in a bath of molten platinum. The shell (cavity) serves as a black body, producing black-body radiation that depends on the temperature and is not sensitive to details of how the device is constructed. In 1937, the Commission Internationale de l'Éclairage (International Commission on Illumination) and the CIPM proposed a "new candle" based on this concept, with value chosen to make it similar to the earlier unit candlepower. The decision was promulgated by the CIPM in 1946: The value of the new candle is such that the brightness of the full radiator at the temperature of solidification of platinum is 60 new candles per square centimetre. It was then ratified in 1948 by the 9th CGPM which adopted a new name for this unit, the candela. In 1967 the 13th CGPM removed the term "new candle" and gave an amended version of the candela definition, specifying the atmospheric pressure applied to the freezing platinum: The candela is the luminous intensity, in the perpendicular direction, of a surface of 1/600,000 square metre of a black body at the temperature of freezing platinum under a pressure of 101,325 newtons per square metre. In 1979, because of the difficulties in realizing a Planck radiator at high temperatures and the new possibilities offered by radiometry, the 16th CGPM adopted a new definition of the candela: The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 10^12 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian. The definition describes how to produce a light source that (by definition) emits one candela, but does not specify the luminous efficiency function for weighting radiation at other frequencies. Such a source could then be used to calibrate instruments designed to measure luminous intensity with reference to a specified luminous efficiency function. An appendix to the SI Brochure makes it clear that the luminous efficiency function is not uniquely specified, but must be selected to fully define the candela. The arbitrary (1/683) term was chosen so that the new definition would precisely match the old definition. Although the candela is now defined in terms of the second (an SI base unit) and the watt (a derived SI unit), the candela remains a base unit of the SI system, by definition. The 26th CGPM approved the modern definition of the candela in 2018 as part of the 2019 redefinition of SI base units, which redefined the SI base units in terms of fundamental physical constants.
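To illustrate numerically how the 683 lm/W weighting in these definitions behaves, here is a minimal Python sketch. The Gaussian stand-in for the CIE photopic efficiency function (peak at 555 nm, assumed width 45 nm) is a crude approximation introduced only for this example, so values away from the peak are rough.

```python
import math

# Minimal sketch of the 683 lm/W weighting (an illustration, not a metrology-
# grade calculation). K_CD is fixed by the SI definition; the Gaussian below is
# only a crude stand-in for the tabulated CIE photopic function y_bar(lambda).
K_CD = 683.0  # lm/W at 540 THz (~555 nm)

def y_bar_approx(wavelength_nm, peak_nm=555.0, width_nm=45.0):
    """Very rough Gaussian approximation to the photopic efficiency function."""
    return math.exp(-0.5 * ((wavelength_nm - peak_nm) / width_nm) ** 2)

def luminous_intensity_cd(radiant_intensity_w_per_sr, wavelength_nm):
    """Luminous intensity (cd) of monochromatic light at the given wavelength."""
    return K_CD * y_bar_approx(wavelength_nm) * radiant_intensity_w_per_sr

# By definition, 1/683 W/sr at the peak wavelength is about one candela:
print(luminous_intensity_cd(1.0 / 683.0, 555.0))  # ~1.0 cd
# The same radiant intensity in the deep red counts for far fewer candelas:
print(luminous_intensity_cd(1.0 / 683.0, 650.0))  # ~0.1 cd
```

At the peak wavelength the sketch reproduces the defining relationship of the 1979 and 2019 texts: a radiant intensity of 1/683 W/sr corresponds to about one candela.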
SI photometric light units Relationships between luminous intensity, luminous flux, and illuminance If a source emits a known luminous intensity Iv (in candelas) in a well-defined cone, the total luminous flux Φv in lumens is given by Φv = 2π Iv [1 − cos(A/2)], where A is the radiation angle of the lamp—the full vertex angle of the emission cone. For example, a lamp that emits 590 cd with a radiation angle of 40° emits about 224 lumens. See MR16 for emission angles of some common lamps. If the source emits light uniformly in all directions, the flux can be found by multiplying the intensity by 4π: a uniform 1 candela source emits about 12.6 lumens. For the purpose of measuring illumination, the candela is not a practical unit, as it only applies to idealized point light sources, each approximated by a source small compared to the distance from which its luminous radiation is measured, also assuming that it is done so in the absence of other light sources. What gets directly measured by a light meter is incident light on a sensor of finite area, i.e. illuminance in lm/m2 (lux). However, if designing illumination from many point light sources, like light bulbs, of known approximate omnidirectionally uniform intensities, the contributions to illuminance from incoherent light being additive, it can be estimated mathematically as follows. If ri is the position of the ith source of uniform intensity Iv,i, measured from the illuminated elemental opaque area, and n is the unit vector normal to that area, and provided that all light sources lie in the same half-space divided by the plane of this area, the illuminance is Ev = Σi Iv,i (n · ri) / |ri|³. In the case of a single point light source of intensity Iv, at a distance r and normally incident, this reduces to Ev = Iv / r². SI multiples Like other SI units, the candela can also be modified by adding a metric prefix that multiplies it by a power of 10, for example millicandela (mcd) for 10−3 candela. References SI base units Units of luminous intensity
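The following Python sketch is a minimal illustration of the relations just described, using the cone formula Φv = 2π·Iv·(1 − cos(A/2)) and the inverse-square law; the lamp values reuse the worked examples in the text, while the 2 m viewing distance in the last step is an assumption made only for illustration.

```python
import math

# Minimal sketch of the flux/intensity/illuminance relations described above.

def flux_lm(intensity_cd, beam_angle_deg):
    """Luminous flux (lm) of intensity_cd radiated into a cone whose full
    vertex angle is beam_angle_deg."""
    half_angle = math.radians(beam_angle_deg) / 2.0
    solid_angle_sr = 2.0 * math.pi * (1.0 - math.cos(half_angle))
    return intensity_cd * solid_angle_sr

print(round(flux_lm(590.0, 40.0)))    # 590 cd over a 40 degree cone -> ~224 lm
print(round(1.0 * 4.0 * math.pi, 1))  # uniform 1 cd over the full sphere -> ~12.6 lm

def illuminance_lux(intensity_cd, distance_m):
    """Illuminance (lux) from a normally incident point source."""
    return intensity_cd / distance_m ** 2

# The ~135 cd isotropic bulb from the Examples section, seen from 2 m away:
print(round(illuminance_lux(135.0, 2.0), 1))  # ~33.8 lux
```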
5387
https://en.wikipedia.org/wiki/Condensed%20matter%20physics
Condensed matter physics
Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases which arise from electromagnetic forces between atoms. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperature, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, and the Bose–Einstein condensate found in ultracold atomic systems. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models. The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division at the American Physical Society. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. According to founding director of the Max Planck Institute for Solid State Research, physics professor Manuel Cardona, it was Albert Einstein who created the modern field of condensed matter physics starting with his seminal 1905 article on the photoelectric effect and photoluminescence which opened the fields of photoelectron spectroscopy and photoluminescence spectroscopy, and later his 1907 article on the specific heat of solids which introduced, for the first time, the effect of lattice vibrations on the thermodynamic properties of crystals, in particular the specific heat. Deputy Director of the Yale Quantum Institute A. Douglas Stone makes a similar priority case for Einstein in his work on the synthetic history of quantum mechanics. Etymology According to physicist Philip Warren Anderson, the use of the term "condensed matter" to designate a field of study was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge from Solid state theory to Theory of Condensed Matter in 1967, as they felt it better included their interest in liquids, nuclear matter, and so on. Although Anderson and Heine helped popularize the name "condensed matter", it had been used in Europe for some years, most prominently in the Springer-Verlag journal Physics of Condensed Matter, launched in 1963. 
The name "condensed matter physics" emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, whereas "solid state physics" was often associated with restricted industrial applications of metals and semiconductors. In the 1960s and 70s, some physicists felt the more comprehensive name better fit the funding environment and Cold War politics of the time. References to "condensed" states can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids, Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'". History Classical physics One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen could be liquefied under the right conditions and would then behave as metals. In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements, except for nitrogen, hydrogen, and oxygen. Shortly after, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures. By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and then newly discovered helium, respectively. Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law. However, despite the success of Drude's free electron model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures. In 1911, three years after helium was first liquefied, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value. The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that "with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas." Advent of quantum mechanics Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. 
Pauli realized that the free electrons in metals must obey the Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model and made it better able to explain the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice. The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935. Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics. In 1879, Edwin Herbert Hall working at the Johns Hopkins University discovered a voltage developed across conductors transverse to an electric current in the conductor and magnetic field perpendicular to the current. This phenomenon, arising from the nature of the charge carriers in the conductor, came to be termed the Hall effect, but it was not properly explained at the time, since the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for the theoretical explanation of the quantum Hall effect discovered half a century later. Magnetism as a property of matter has been known in China since 4000 BC. However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets. The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model that described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization. The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research, such as by Bloch on spin waves and Néel on antiferromagnetism, led to the development of new magnetic materials with applications to magnetic storage devices. Modern many-body physics The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect. After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle. Russian physicist Lev Landau used the idea for the Fermi liquid theory wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau-quasiparticles.
Landau also developed a mean-field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases. Eventually in 1957, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons of opposite spin mediated by phonons in the lattice can give rise to a bound state called a Cooper pair. The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s. Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory. The quantum Hall effect was discovered by Klaus von Klitzing, Dorda and Pepper in 1980 when they observed the Hall conductance to be integer multiples of a fundamental constant, e²/h. The effect was observed to be independent of parameters such as system size and impurities. In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integral plateau. It also implied that the Hall conductance is proportional to a topological invariant, called the Chern number, whose relevance for the band structure of solids was formulated by David J. Thouless and collaborators. Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect where the conductance was now a rational multiple of the constant e²/h. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction. The study of topological properties of the fractional Hall effect remains an active field of research. Decades later, the aforementioned topological band theory advanced by David J. Thouless and collaborators was further expanded, leading to the discovery of topological insulators. In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 kelvins. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role. A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic. In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases. This has more recently expanded to form the research area of spontelectrics. In 2012, several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator in accord with the earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, it is expected that the existence of a topological Dirac surface state in this material would lead to a topological insulator with strong electronic correlations. Theoretical Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter.
These include models to study the electronic properties of solids, such as the Drude model, the band structure and the density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries. Emergence Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents. For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known. Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon. Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two band-insulators are joined to create conductivity and superconductivity. Electronic theory of solids The metallic state has historically been an important building block for studying properties of solids. The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of then-newly discovered electrons. He was able to derive the empirical Wiedemann-Franz law and get results in close agreement with the experiments. This classical model was then improved by Arnold Sommerfeld who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law. In 1912, The structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms. In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, known as Bloch's theorem. Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence, approximation methods are needed to obtain meaningful predictions. The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation. Only the free electron gas case can be solved exactly. Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory (DFT) which gave realistic descriptions for bulk and surface properties of metals. 
The density functional theory has been widely used since the 1970s for band structure calculations of a variety of solids. Symmetry breaking Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some form of symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, that breaks U(1) phase rotational symmetry. Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations. Phase transition Phase transition refers to the change of phase of a system, which is brought about by change in an external parameter such as temperature, pressure, or molar composition. In a single-component system, a classical phase transition occurs at a temperature (at a specific pressure) where there is an abrupt change in the order of the system. For example, when ice melts and becomes water, the ordered hexagonal crystal structure of ice is modified to a hydrogen bonded, mobile arrangement of water molecules. In quantum phase transitions, the temperature is set to absolute zero, and the non-thermal control parameter, such as pressure or magnetic field, causes the phase transitions when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian matrix. Understanding the behavior of quantum phase transitions is important in the difficult tasks of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances. Two classes of phase transitions occur: first-order transitions and second-order or continuous transitions. For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, wherein several of their properties, such as the correlation length, specific heat, and magnetic susceptibility, diverge as power laws. These critical phenomena present serious challenges to physicists because normal macroscopic laws are no longer valid in the region, and novel ideas and methods must be invented to find the new laws that can describe the system. The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean-field approximation. However, it can only roughly explain continuous phase transitions for ferroelectrics and type I superconductors, which involve long-range microscopic interactions. For other types of systems that involve short-range interactions near the critical point, a better theory is needed. Near the critical point, fluctuations occur over a broad range of size scales, while the whole system is scale invariant. Renormalization group methods successively average out the shortest wavelength fluctuations in stages while retaining their effects into the next stage. Thus, the changes of a physical system as viewed at different size scales can be investigated systematically.
The methods, together with powerful computer simulation, contribute greatly to the explanation of the critical phenomena associated with continuous phase transitions. Experimental Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include effects of electric and magnetic fields, measuring response functions, transport properties and thermometry. Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; studies of thermal response, such as specific heat; and measurements of transport via thermal conduction. Scattering Several condensed matter experiments involve scattering of an experimental probe, such as X-rays, optical photons, neutrons, etc., on constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as the dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density and crystal structure. Neutrons can also probe atomic length scales and are used to study scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes. Similarly, positron annihilation can be used as an indirect measurement of local electron density. Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy. External magnetic fields In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems. Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, thus giving information about the atomic, molecular, and bond structure of their environment. NMR experiments can be made in magnetic fields with strengths up to 60 tesla. Higher magnetic fields can improve the quality of NMR measurement data. Quantum oscillation measurements are another experimental method in which high magnetic fields are used to study material properties such as the geometry of the Fermi surface. High magnetic fields are useful for experimentally testing various theoretical predictions such as the quantized magnetoelectric effect, the image magnetic monopole, and the half-integer quantum Hall effect. Nuclear spectroscopy The local structure of condensed matter, that is, the structure of the nearest neighbour atoms, can be investigated with methods of nuclear spectroscopy, which are very sensitive to small changes. Using specific radioactive nuclei, the nucleus becomes the probe that interacts with its surrounding electric and magnetic fields (hyperfine interactions). The methods are suitable for studying defects, diffusion, phase changes, and magnetism. Common methods include NMR, Mössbauer spectroscopy, and perturbed angular correlation (PAC). PAC is especially well suited to the study of phase changes at extreme temperatures above 2000 °C, because the method itself has no temperature dependence.
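To make the probe energy scales quoted above more concrete, here is a short Python sketch (an illustration, not from the source) that converts photon energy to wavelength using λ = hc/E with hc ≈ 1239.84 eV·nm; the 2 eV and 10 keV example energies are chosen only for illustration.

```python
# Minimal sketch relating probe photon energy to wavelength via lambda = h*c/E.
# The example energies are illustrative, not tied to a specific experiment.
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_wavelength_nm(energy_ev):
    """Wavelength (nm) of a photon with the given energy (eV)."""
    return HC_EV_NM / energy_ev

# An optical probe of ~2 eV has a wavelength of hundreds of nm, far larger than
# interatomic spacings, so it mainly senses bulk optical response:
print(photon_wavelength_nm(2.0))       # ~620 nm
# A 10 keV X-ray has a wavelength of about 0.12 nm (~1.2 angstroms), comparable
# to atomic spacings, so it resolves crystal structure:
print(photon_wavelength_nm(10_000.0))  # ~0.12 nm
```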
Cold atomic gases Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a lattice, in which ions or atoms can be placed at very low temperatures. Cold atoms in optical lattices are used as quantum simulators, that is, they act as controllable systems that can model behavior of more complicated systems, such as frustrated magnets. In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering. In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state. Applications Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor, laser technology, and several phenomena studied in the context of nanotechnology. Methods such as scanning-tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication. Molecular machines were developed, for example, by Nobel laureates in chemistry Ben Feringa, Jean-Pierre Sauvage and Fraser Stoddart. Feringa and his team developed multiple molecular machines, such as a molecular car, a molecular windmill, and many more. In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere quickly before useful computation is completed. This serious problem must be solved before quantum computing may be realized. To solve this problem, several promising approaches have been proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, and topological non-Abelian anyons from fractional quantum Hall effect states. Condensed matter physics also has important uses for biomedicine, for example, the experimental method of magnetic resonance imaging, which is widely used in medical diagnosis. See also Notes References Further reading Anderson, Philip W. (2018). Basic Notions of Condensed Matter Physics. CRC Press. Girvin, Steven M.; Yang, Kun (2019). Modern Condensed Matter Physics. Cambridge University Press. Coleman, Piers (2015). Introduction to Many-Body Physics. Cambridge University Press. P. M. Chaikin and T. C. Lubensky (2000). Principles of Condensed Matter Physics. Cambridge University Press, 1st edition. Alexander Altland and Ben Simons (2006). Condensed Matter Field Theory. Cambridge University Press. Michael P. Marder (2010). Condensed Matter Physics, second edition. John Wiley and Sons. Lillian Hoddeson, Ernest Braun, Jürgen Teichmann and Spencer Weart, eds. (1992). Out of the Crystal Maze: Chapters from the History of Solid State Physics. Oxford University Press. External links Materials science
5388
https://en.wikipedia.org/wiki/Cultural%20anthropology
Cultural anthropology
Cultural anthropology is a branch of anthropology focused on the study of cultural variation among humans. It is in contrast to social anthropology, which perceives cultural variation as a subset of a posited anthropological constant. The term sociocultural anthropology includes both cultural and social anthropology traditions. Anthropologists have pointed out that through culture, people can adapt to their environment in non-genetic ways, so people living in different environments will often have different cultures. Much of anthropological theory has originated in an appreciation of and interest in the tension between the local (particular cultures) and the global (a universal human nature, or the web of connections between people in distinct places/circumstances). Cultural anthropology has a rich methodology, including participant observation (often called fieldwork because it requires the anthropologist spending an extended period of time at the research location), interviews, and surveys. History The rise of cultural anthropology took place within the context of the late 19th century, when questions regarding which cultures were "primitive" and which were "civilized" occupied the mind of not only Freud, but many others. Colonialism and its processes increasingly brought European thinkers into direct or indirect contact with "primitive others". The relative status of various humans, some of whom had modern advanced technologies that included engines and telegraphs, while others lacked anything but face-to-face communication techniques and still lived a Paleolithic lifestyle, was of interest to the first generation of cultural anthropologists. Theoretical foundations The concept of culture One of the earliest articulations of the anthropological meaning of the term "culture" came from Sir Edward Tylor who writes on the first page of his 1871 book: "Culture, or civilization, taken in its broad, ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of society." The term "civilization" later gave way to definitions given by V. Gordon Childe, with culture forming an umbrella term and civilization becoming a particular kind of culture. According to Kay Milton, former director of anthropology research at Queens University Belfast, culture can be general or specific. This means culture can be something applied to all human beings or it can be specific to a certain group of people such as African American culture or Irish American culture. Specific cultures are structured systems which means they are organized very specifically and adding or taking away any element from that system may disrupt it. The critique of evolutionism Anthropology is concerned with the lives of people in different parts of the world, particularly in relation to the discourse of beliefs and practices. In addressing this question, ethnologists in the 19th century divided into two schools of thought. Some, like Grafton Elliot Smith, argued that different groups must have learned from one another somehow, however indirectly; in other words, they argued that cultural traits spread from one place to another, or "diffused". Other ethnologists argued that different groups had the capability of creating similar beliefs and practices independently. 
Some of those who advocated "independent invention", like Lewis Henry Morgan, additionally supposed that similarities meant that different groups had passed through the same stages of cultural evolution (See also classical social evolutionism). Morgan, in particular, acknowledged that certain forms of society and culture could not possibly have arisen before others. For example, industrial farming could not have been invented before simple farming, and metallurgy could not have developed without previous non-smelting processes involving metals (such as simple ground collection or mining). Morgan, like other 19th century social evolutionists, believed there was a more or less orderly progression from the primitive to the civilized. 20th-century anthropologists largely reject the notion that all human societies must pass through the same stages in the same order, on the grounds that such a notion does not fit the empirical facts. Some 20th-century ethnologists, like Julian Steward, have instead argued that such similarities reflected similar adaptations to similar environments. Although 19th-century ethnologists saw "diffusion" and "independent invention" as mutually exclusive and competing theories, most ethnographers quickly reached a consensus that both processes occur, and that both can plausibly account for cross-cultural similarities. But these ethnographers also pointed out the superficiality of many such similarities. They noted that even traits that spread through diffusion often were given different meanings and function from one society to another. Analyses of large human concentrations in big cities, in multidisciplinary studies by Ronald Daus, show how new methods may be applied to the understanding of man living in a global world and how it was caused by the action of extra-European nations, so highlighting the role of Ethics in modern anthropology. Accordingly, most of these anthropologists showed less interest in comparing cultures, generalizing about human nature, or discovering universal laws of cultural development, than in understanding particular cultures in those cultures' own terms. Such ethnographers and their students promoted the idea of "cultural relativism", the view that one can only understand another person's beliefs and behaviors in the context of the culture in which they live or lived. Others, such as Claude Lévi-Strauss (who was influenced both by American cultural anthropology and by French Durkheimian sociology), have argued that apparently similar patterns of development reflect fundamental similarities in the structure of human thought (see structuralism). By the mid-20th century, the number of examples of people skipping stages, such as going from hunter-gatherers to post-industrial service occupations in one generation, were so numerous that 19th-century evolutionism was effectively disproved. Cultural relativism Cultural relativism is a principle that was established as axiomatic in anthropological research by Franz Boas and later popularized by his students. Boas first articulated the idea in 1887: "...civilization is not something absolute, but ... is relative, and ... our ideas and conceptions are true only so far as our civilization goes." Although Boas did not coin the term, it became common among anthropologists after Boas' death in 1942, to express their synthesis of a number of ideas Boas had developed. 
Boas believed that the sweep of cultures, to be found in connection with any sub-species, is so vast and pervasive that there cannot be a relationship between culture and race. Cultural relativism involves specific epistemological and methodological claims. Whether or not these claims require a specific ethical stance is a matter of debate. This principle should not be confused with moral relativism. Cultural relativism was in part a response to Western ethnocentrism. Ethnocentrism may take obvious forms, in which one consciously believes that one's people's arts are the most beautiful, values the most virtuous, and beliefs the most truthful. Boas, originally trained in physics and geography, and heavily influenced by the thought of Kant, Herder, and von Humboldt, argued that one's culture may mediate and thus limit one's perceptions in less obvious ways. This understanding of culture confronts anthropologists with two problems: first, how to escape the unconscious bonds of one's own culture, which inevitably bias our perceptions of and reactions to the world, and second, how to make sense of an unfamiliar culture. The principle of cultural relativism thus forced anthropologists to develop innovative methods and heuristic strategies. Boas and his students realized that if they were to conduct scientific research in other cultures, they would need to employ methods that would help them escape the limits of their own ethnocentrism. One such method is that of ethnography: basically, they advocated living with people of another culture for an extended period of time, so that they could learn the local language and be enculturated, at least partially, into that culture. In this context, cultural relativism is of fundamental methodological importance, because it calls attention to the importance of the local context in understanding the meaning of particular human beliefs and activities. Thus, in 1948 Virginia Heyer wrote, "Cultural relativity, to phrase it in starkest abstraction, states the relativity of the part to the whole. The part gains its cultural significance by its place in the whole, and cannot retain its integrity in a different situation." Theoretical approaches Actor–network theory Cultural materialism Culture theory Feminist anthropology Functionalism Symbolic and interpretive anthropology Political economy in anthropology Practice theory Structuralism Post-structuralism Systems theory in anthropology Comparison with social anthropology The rubric cultural anthropology is generally applied to ethnographic works that are holistic in approach, are oriented to the ways in which culture affects individual experience, or aim to provide a rounded view of the knowledge, customs, and institutions of a people. Social anthropology is a term applied to ethnographic works that attempt to isolate a particular system of social relations such as those that comprise domestic life, economy, law, politics, or religion, give analytical priority to the organizational bases of social life, and attend to cultural phenomena as somewhat secondary to the main issues of social scientific inquiry. Parallel with the rise of cultural anthropology in the United States, social anthropology developed as an academic discipline in Britain and in France. Foundational thinkers Lewis Henry Morgan Lewis Henry Morgan (1818–1881), a lawyer from Rochester, New York, became an advocate for and ethnological scholar of the Iroquois. 
His comparative analyses of religion, government, material culture, and especially kinship patterns proved to be influential contributions to the field of anthropology. Like other scholars of his day (such as Edward Tylor), Morgan argued that human societies could be classified into categories of cultural evolution on a scale of progression that ranged from savagery, to barbarism, to civilization. Generally, Morgan used technology (such as bowmaking or pottery) as an indicator of position on this scale. Franz Boas, founder of the modern discipline Franz Boas (1858–1942) established academic anthropology in the United States in opposition to Morgan's evolutionary perspective. His approach was empirical, skeptical of overgeneralizations, and eschewed attempts to establish universal laws. For example, Boas studied immigrant children to demonstrate that biological race was not immutable, and that human conduct and behavior resulted from nurture, rather than nature. Influenced by the German tradition, Boas argued that the world was full of distinct cultures, rather than societies whose evolution could be measured by how much or how little "civilization" they had. He believed that each culture has to be studied in its particularity, and argued that cross-cultural generalizations, like those made in the natural sciences, were not possible. In doing so, he fought discrimination against immigrants, blacks, and indigenous peoples of the Americas. Many American anthropologists adopted his agenda for social reform, and theories of race continue to be popular subjects for anthropologists today. The so-called "Four Field Approach" has its origins in Boasian Anthropology, dividing the discipline in the four crucial and interrelated fields of sociocultural, biological, linguistic, and archaic anthropology (e.g. archaeology). Anthropology in the United States continues to be deeply influenced by the Boasian tradition, especially its emphasis on culture. Kroeber, Mead, and Benedict Boas used his positions at Columbia University and the American Museum of Natural History (AMNH) to train and develop multiple generations of students. His first generation of students included Alfred Kroeber, Robert Lowie, Edward Sapir, and Ruth Benedict, who each produced richly detailed studies of indigenous North American cultures. They provided a wealth of details used to attack the theory of a single evolutionary process. Kroeber and Sapir's focus on Native American languages helped establish linguistics as a truly general science and free it from its historical focus on Indo-European languages. The publication of Alfred Kroeber's textbook Anthropology (1923) marked a turning point in American anthropology. After three decades of amassing material, Boasians felt a growing urge to generalize. This was most obvious in the 'Culture and Personality' studies carried out by younger Boasians such as Margaret Mead and Ruth Benedict. Influenced by psychoanalytic psychologists including Sigmund Freud and Carl Jung, these authors sought to understand the way that individual personalities were shaped by the wider cultural and social forces in which they grew up. Though such works as Mead's Coming of Age in Samoa (1928) and Benedict's The Chrysanthemum and the Sword (1946) remain popular with the American public, Mead and Benedict never had the impact on the discipline of anthropology that some expected. 
Boas had planned for Ruth Benedict to succeed him as chair of Columbia's anthropology department, but she was sidelined in favor of Ralph Linton, and Mead was limited to her offices at the AMNH. Wolf, Sahlins, Mintz, and political economy In the 1950s and mid-1960s anthropology tended increasingly to model itself after the natural sciences. Some anthropologists, such as Lloyd Fallers and Clifford Geertz, focused on processes of modernization by which newly independent states could develop. Others, such as Julian Steward and Leslie White, focused on how societies evolve and fit their ecological niche—an approach popularized by Marvin Harris. Economic anthropology as influenced by Karl Polanyi and practiced by Marshall Sahlins and George Dalton challenged standard neoclassical economics to take account of cultural and social factors, and employed Marxian analysis into anthropological study. In England, British Social Anthropology's paradigm began to fragment as Max Gluckman and Peter Worsley experimented with Marxism and authors such as Rodney Needham and Edmund Leach incorporated Lévi-Strauss's structuralism into their work. Structuralism also influenced a number of developments in the 1960s and 1970s, including cognitive anthropology and componential analysis. In keeping with the times, much of anthropology became politicized through the Algerian War of Independence and opposition to the Vietnam War; Marxism became an increasingly popular theoretical approach in the discipline. By the 1970s the authors of volumes such as Reinventing Anthropology worried about anthropology's relevance. Since the 1980s issues of power, such as those examined in Eric Wolf's Europe and the People Without History, have been central to the discipline. In the 1980s books like Anthropology and the Colonial Encounter pondered anthropology's ties to colonial inequality, while the immense popularity of theorists such as Antonio Gramsci and Michel Foucault moved issues of power and hegemony into the spotlight. Gender and sexuality became popular topics, as did the relationship between history and anthropology, influenced by Marshall Sahlins, who drew on Lévi-Strauss and Fernand Braudel to examine the relationship between symbolic meaning, sociocultural structure, and individual agency in the processes of historical transformation. Jean and John Comaroff produced a whole generation of anthropologists at the University of Chicago that focused on these themes. Also influential in these issues were Nietzsche, Heidegger, the critical theory of the Frankfurt School, Derrida and Lacan. Geertz, Schneider, and interpretive anthropology Many anthropologists reacted against the renewed emphasis on materialism and scientific modelling derived from Marx by emphasizing the importance of the concept of culture. Authors such as David Schneider, Clifford Geertz, and Marshall Sahlins developed a more fleshed-out concept of culture as a web of meaning or signification, which proved very popular within and beyond the discipline. Geertz was to state: Geertz's interpretive method involved what he called "thick description". The cultural symbols of rituals, political and economic action, and of kinship, are "read" by the anthropologist as if they are a document in a foreign language. The interpretation of those symbols must be re-framed for their anthropological audience, i.e. transformed from the "experience-near" but foreign concepts of the other culture, into the "experience-distant" theoretical concepts of the anthropologist. 
These interpretations must then be reflected back to their originators, and their adequacy as a translation fine-tuned iteratively, a process called the hermeneutic circle. Geertz applied his method in a number of areas, creating programs of study that were very productive. His analysis of "religion as a cultural system" was particularly influential outside of anthropology. David Schneider's cultural analysis of American kinship has proven equally influential. Schneider demonstrated that the American folk-cultural emphasis on "blood connections" had an undue influence on anthropological kinship theories, and that kinship is not a biological characteristic but a cultural relationship established on very different terms in different societies. Prominent British symbolic anthropologists include Victor Turner and Mary Douglas. The post-modern turn In the late 1980s and 1990s authors such as James Clifford pondered ethnographic authority, in particular how and why anthropological knowledge was possible and authoritative. They were reflecting trends in research and discourse initiated by feminists in the academy, although they excused themselves from commenting specifically on those pioneering critics. Nevertheless, key aspects of feminist theory and methods became de rigueur as part of the 'post-modern moment' in anthropology: Ethnographies became more interpretative and reflexive, explicitly addressing the author's methodology; cultural, gendered, and racial positioning; and their influence on the ethnographic analysis. This was part of a more general trend of postmodernism that was popular contemporaneously. Currently, anthropologists pay attention to a wide variety of issues pertaining to the contemporary world, including globalization, medicine and biotechnology, indigenous rights, virtual communities, and the anthropology of industrialized societies. Socio-cultural anthropology subfields Anthropology of art Cognitive anthropology Anthropology of development Disability anthropology Ecological anthropology Economic anthropology Feminist anthropology and anthropology of gender and sexuality Ethnohistory and historical anthropology Kinship and family Legal anthropology Multimodal anthropology Media anthropology Medical anthropology Political anthropology Political economy in anthropology Psychological anthropology Public anthropology Anthropology of religion Cyborg anthropology Transpersonal anthropology Urban anthropology Visual anthropology Methods Modern cultural anthropology has its origins in, and developed in reaction to, 19th century ethnology, which involves the organized comparison of human societies. Scholars like E.B. Tylor and J.G. Frazer in England worked mostly with materials collected by others—usually missionaries, traders, explorers, or colonial officials—earning them the moniker of "arm-chair anthropologists". Participant observation Participant observation is one of the principal research methods of cultural anthropology. It relies on the assumption that the best way to understand a group of people is to interact with them closely over a long period of time. The method originated in the field research of social anthropologists, especially Bronislaw Malinowski in Britain, the students of Franz Boas in the United States, and in the later urban research of the Chicago School of Sociology. Historically, the group of people being studied was a small, non-Western society. However, today it may be a specific corporation, a church group, a sports team, or a small town.
There are no restrictions as to what the subject of participant observation can be, as long as the group of people is studied intimately by the observing anthropologist over a long period of time. This allows the anthropologist to develop trusting relationships with the subjects of study and receive an inside perspective on the culture, which helps him or her to give a richer description when writing about the culture later. Observable details (like daily time allotment) and more hidden details (like taboo behavior) are more easily observed and interpreted over a longer period of time, and researchers can discover discrepancies between what participants say—and often believe—should happen (the formal system) and what actually does happen, or between different aspects of the formal system; in contrast, a one-time survey of people's answers to a set of questions might be quite consistent, but is less likely to show conflicts between different aspects of the social system or between conscious representations and behavior. Interactions between an ethnographer and a cultural informant must go both ways. Just as an ethnographer may be naive or curious about a culture, the members of that culture may be curious about the ethnographer. To establish connections that will eventually lead to a better understanding of the cultural context of a situation, an anthropologist must be open to becoming part of the group, and willing to develop meaningful relationships with its members. One way to do this is to find a small area of common experience between an anthropologist and their subjects, and then to expand from this common ground into the larger area of difference. Once a single connection has been established, it becomes easier to integrate into the community, and more likely that accurate and complete information is being shared with the anthropologist. Before participant observation can begin, an anthropologist must choose both a location and a focus of study. This focus may change once the anthropologist is actively observing the chosen group of people, but having an idea of what one wants to study before beginning fieldwork allows an anthropologist to spend time researching background information on their topic. It can also be helpful to know what previous research has been conducted in one's chosen location or on similar topics, and if the participant observation takes place in a location where the spoken language is not one the anthropologist is familiar with, they will usually also learn that language. This allows the anthropologist to become better established in the community. The lack of need for a translator makes communication more direct, and allows the anthropologist to give a richer, more contextualized representation of what they witness. In addition, participant observation often requires permits from governments and research institutions in the area of study, and always needs some form of funding. The majority of participant observation is based on conversation. This can take the form of casual, friendly dialogue, or can also be a series of more structured interviews. A combination of the two is often used, sometimes along with photography, mapping, artifact collection, and various other methods. In some cases, ethnographers also turn to structured observation, in which an anthropologist's observations are directed by a specific set of questions they are trying to answer. 
In the case of structured observation, an observer might be required to record the order of a series of events, or describe a certain part of the surrounding environment. While the anthropologist still makes an effort to become integrated into the group they are studying, and still participates in the events as they observe, structured observation is more directed and specific than participant observation in general. This helps to standardize the method of study when ethnographic data is being compared across several groups or is needed to fulfill a specific purpose, such as research for a governmental policy decision. One common criticism of participant observation is its lack of objectivity. Because each anthropologist has their own background and set of experiences, each individual is likely to interpret the same culture in a different way. Who the ethnographer is has a lot to do with what they will eventually write about a culture, because each researcher is influenced by their own perspective. This is considered a problem especially when anthropologists write in the ethnographic present, a present tense which makes a culture seem stuck in time, and ignores the fact that it may have interacted with other cultures or gradually evolved since the anthropologist made observations. To avoid this, past ethnographers have advocated for strict training, or for anthropologists working in teams. However, these approaches have not generally been successful, and modern ethnographers often choose to include their personal experiences and possible biases in their writing instead. Participant observation has also raised ethical questions, since an anthropologist is in control of what they report about a culture. In terms of representation, an anthropologist has greater power than their subjects of study, and this has drawn criticism of participant observation in general. Additionally, anthropologists have struggled with the effect their presence has on a culture. Simply by being present, a researcher causes changes in a culture, and anthropologists continue to question whether or not it is appropriate to influence the cultures they study, or possible to avoid having influence. Ethnography In the 20th century, most cultural and social anthropologists turned to the crafting of ethnographies. An ethnography is a piece of writing about a people, at a particular place and time. Typically, the anthropologist lives among people in another society for a period of time, simultaneously participating in and observing the social and cultural life of the group. Numerous other ethnographic techniques have resulted in ethnographic writing or details being preserved, as cultural anthropologists also curate materials, spend long hours in libraries, churches and schools poring over records, investigate graveyards, and decipher ancient scripts. A typical ethnography will also include information about physical geography, climate and habitat. It is meant to be a holistic piece of writing about the people in question, and today often includes the longest possible timeline of past events that the ethnographer can obtain through primary and secondary research. Bronisław Malinowski developed the ethnographic method, and Franz Boas taught it in the United States. Boas' students such as Alfred L. Kroeber, Ruth Benedict and Margaret Mead drew on his conception of culture and cultural relativism to develop cultural anthropology in the United States. Simultaneously, Malinowski and A.R. 
Radcliffe-Brown's students were developing social anthropology in the United Kingdom. Whereas cultural anthropology focused on symbols and values, social anthropology focused on social groups and institutions. Today socio-cultural anthropologists attend to all these elements. In the early 20th century, socio-cultural anthropology developed in different forms in Europe and in the United States. European "social anthropologists" focused on observed social behaviors and on "social structure", that is, on relationships among social roles (for example, husband and wife, or parent and child) and social institutions (for example, religion, economy, and politics). American "cultural anthropologists" focused on the ways people expressed their view of themselves and their world, especially in symbolic forms, such as art and myths. These two approaches frequently converged and generally complemented one another. For example, kinship and leadership function both as symbolic systems and as social institutions. Today almost all socio-cultural anthropologists refer to the work of both sets of predecessors, and have an equal interest in what people do and in what people say. Cross-cultural comparison One means by which anthropologists combat ethnocentrism is to engage in the process of cross-cultural comparison. It is important to test so-called "human universals" against the ethnographic record. Monogamy, for example, is frequently touted as a universal human trait, yet comparative study shows that it is not. The Human Relations Area Files, Inc. (HRAF) is a research agency based at Yale University. Since 1949, its mission has been to encourage and facilitate worldwide comparative studies of human culture, society, and behavior in the past and present. The name came from the Institute of Human Relations, an interdisciplinary program/building at Yale at the time. The Institute of Human Relations had sponsored HRAF's precursor, the Cross-Cultural Survey (see George Peter Murdock), as part of an effort to develop an integrated science of human behavior and culture. The two eHRAF databases on the Web are expanded and updated annually. eHRAF World Cultures includes materials on cultures, past and present, and covers nearly 400 cultures. The second database, eHRAF Archaeology, covers major archaeological traditions and many more sub-traditions and sites around the world. Comparison across cultures includes the industrialized (or de-industrialized) West, whereas the cultures in the more traditional standard cross-cultural sample are small-scale societies. Multi-sited ethnography Ethnography dominates socio-cultural anthropology. Nevertheless, many contemporary socio-cultural anthropologists have rejected earlier models of ethnography as treating local cultures as bounded and isolated. These anthropologists continue to concern themselves with the distinct ways people in different locales experience and understand their lives, but they often argue that one cannot understand these particular ways of life solely from a local perspective; they instead combine a focus on the local with an effort to grasp larger political, economic, and cultural frameworks that impact local lived realities. Notable proponents of this approach include Arjun Appadurai, James Clifford, George Marcus, Sidney Mintz, Michael Taussig, Eric Wolf and Ronald Daus.
A growing trend in anthropological research and analysis is the use of multi-sited ethnography, discussed in George Marcus' article, "Ethnography In/Of the World System: the Emergence of Multi-Sited Ethnography". Looking at culture as embedded in macro-constructions of a global social order, multi-sited ethnography uses traditional methodology in various locations both spatially and temporally. Through this methodology, greater insight can be gained when examining the impact of world-systems on local and global communities. Also emerging in multi-sited ethnography are greater interdisciplinary approaches to fieldwork, bringing in methods from cultural studies, media studies, science and technology studies, and others. In multi-sited ethnography, research tracks a subject across spatial and temporal boundaries. For example, a multi-sited ethnography may follow a "thing", such as a particular commodity, as it is transported through the networks of global capitalism. Multi-sited ethnography may also follow ethnic groups in diaspora, stories or rumours that appear in multiple locations and in multiple time periods, metaphors that appear in multiple ethnographic locations, or the biographies of individual people or groups as they move through space and time. It may also follow conflicts that transcend boundaries. An example of multi-sited ethnography is Nancy Scheper-Hughes' work on the international black market for the trade of human organs. In this research, she follows organs as they are transferred through various legal and illegal networks of capitalism, as well as the rumours and urban legends that circulate in impoverished communities about child kidnapping and organ theft. Sociocultural anthropologists have increasingly turned their investigative eye on to "Western" culture. For example, Philippe Bourgois won the Margaret Mead Award in 1997 for In Search of Respect, a study of the entrepreneurs in a Harlem crack-den. Also growing more popular are ethnographies of professional communities, such as laboratory researchers, Wall Street investors, law firms, or information technology (IT) computer employees. Topics Kinship and family Kinship refers to the anthropological study of the ways in which humans form and maintain relationships with one another and how those relationships operate within and define social organization. Research in kinship studies often crosses over into different anthropological subfields including medical, feminist, and public anthropology. This is likely due to its fundamental concepts, as articulated by linguistic anthropologist Patrick McConvell: Throughout history, kinship studies have primarily focused on the topics of marriage, descent, and procreation. Anthropologists have written extensively on the variations within marriage across cultures and its legitimacy as a human institution. There are stark differences between communities in terms of marital practice and value, leaving much room for anthropological fieldwork. For instance, the Nuer of Sudan and the Brahmans of Nepal practice polygyny, where one man has several marriages to two or more women. The Nyar of India and Nyimba of Tibet and Nepal practice polyandry, where one woman is often married to two or more men. The marital practice found in most cultures, however, is monogamy, where one woman is married to one man. Anthropologists also study different marital taboos across cultures, most commonly the incest taboo of marriage within sibling and parent-child relationships. 
It has been found that all cultures have an incest taboo to some degree, but the taboo shifts between cultures when the marriage extends beyond the nuclear family unit. There are similar foundational differences where the act of procreation is concerned. Although anthropologists have found that biology is acknowledged in every cultural relationship to procreation, there are differences in the ways in which cultures assess the constructs of parenthood. For example, in the Nuyoo municipality of Oaxaca, Mexico, it is believed that a child can have partible maternity and partible paternity. In this view, a child would have multiple biological mothers if it is born of one woman and then breastfed by another. A child would have multiple biological fathers if the mother had sex with multiple men, following the commonplace belief in Nuyoo culture that pregnancy must be preceded by sex with multiple men in order to have the necessary accumulation of semen. Late twentieth-century shifts in interest In the twenty-first century, Western ideas of kinship have evolved beyond the traditional assumptions of the nuclear family, raising anthropological questions of consanguinity, lineage, and normative marital expectation. The shift can be traced back to the 1960s, with the reassessment of kinship's basic principles offered by Edmund Leach, Rodney Needham, David Schneider, and others. Instead of relying on narrow ideas of Western normalcy, kinship studies increasingly catered to "more ethnographic voices, human agency, intersecting power structures, and historical context". The study of kinship evolved to accommodate the fact that it cannot be separated from its institutional roots and must pay respect to the society in which it lives, including that society's contradictions, hierarchies, and individual experiences of those within it. This shift was furthered by the emergence of second-wave feminism in the early 1970s, which introduced ideas of marital oppression, sexual autonomy, and domestic subordination. Other themes that emerged during this time included the frequent comparisons between Eastern and Western kinship systems and the increasing amount of attention paid to anthropologists' own societies, a swift turn from the focus that had traditionally been paid to largely "foreign", non-Western communities. Kinship studies began to gain mainstream recognition in the late 1990s with the surging popularity of feminist anthropology, particularly with its work related to biological anthropology and the intersectional critique of gender relations. At this time, there was the arrival of "Third World feminism", a movement that argued kinship studies could not examine the gender relations of developing countries in isolation, and must pay respect to racial and economic nuance as well. This critique became relevant, for instance, in the anthropological study of Jamaica: race and class were seen as the primary obstacles to Jamaican liberation from economic imperialism, and gender as an identity was largely ignored. Third World feminism aimed to combat this in the early twenty-first century by promoting these categories as coexisting factors. In Jamaica, marriage as an institution is often replaced by a series of partners, as poor women cannot rely on regular financial contributions in a climate of economic instability. In addition, there is a common practice of Jamaican women artificially lightening their skin tones in order to secure economic survival.
According to Third World feminism, these anthropological findings cannot treat gender, racial, or class differences as separate entities, and must instead acknowledge that they interact to produce unique individual experiences. Rise of reproductive anthropology Kinship studies have also experienced a rise of interest in reproductive anthropology with the advancement of assisted reproductive technologies (ARTs), including in vitro fertilization (IVF). These advancements have led to new dimensions of anthropological research, as they challenge the Western standard of biogenetically based kinship, relatedness, and parenthood. According to anthropologists Marcia C. Inhorn and Daphna Birenbaum-Carmeli, "ARTs have pluralized notions of relatedness and led to a more dynamic notion of "kinning", namely, kinship as a process, as something under construction, rather than a natural given". With this technology, questions of kinship have emerged over the difference between biological and genetic relatedness, as gestational surrogates can provide a biological environment for the embryo while the genetic ties remain with a third party. If genetic, surrogate, and adoptive maternities are involved, anthropologists have acknowledged that there can be the possibility for three "biological" mothers to a single child. With ARTs, there are also anthropological questions concerning the intersections between wealth and fertility: ARTs are generally only available to those in the highest income bracket, meaning the infertile poor are inherently devalued in the system. There have also been issues of reproductive tourism and bodily commodification, as individuals seek economic security through hormonal stimulation and egg harvesting, which are potentially harmful procedures. With IVF, specifically, there have been many questions of embryonic value and the status of life, particularly as it relates to the manufacturing of stem cells, testing, and research. Current issues in kinship studies, such as adoption, have revealed and challenged the Western cultural disposition towards the genetic, "blood" tie. Western biases against single-parent homes have also been explored through similar anthropological research, uncovering that a household with a single parent experiences "greater levels of scrutiny and [is] routinely seen as the 'other' of the nuclear, patriarchal family". The power dynamics in reproduction, when explored through a comparative analysis of "conventional" and "unconventional" families, have been used to dissect the Western assumptions of child bearing and child rearing in contemporary kinship studies. Critiques of kinship studies Kinship, as an anthropological field of inquiry, has been heavily criticized across the discipline. One critique is that, at its inception, the framework of kinship studies was far too structured and formulaic, relying on dense language and stringent rules. Another critique, explored at length by American anthropologist David Schneider, argues that kinship has been limited by its inherent Western ethnocentrism. Schneider proposes that kinship is not a field that can be applied cross-culturally, as the theory itself relies on European assumptions of normalcy. He states in the widely circulated 1984 book A Critique of the Study of Kinship that "[K]inship has been defined by European social scientists, and European social scientists use their own folk culture as the source of many, if not all of their ways of formulating and understanding the world about them".
However, this critique has been challenged by the argument that it is linguistics, not cultural divergence, that has allowed for a European bias, and that the bias can be lifted by centering the methodology on fundamental human concepts. Polish anthropologist Anna Wierzbicka argues that "mother" and "father" are examples of such fundamental human concepts, and can only be Westernized when conflated with English concepts such as "parent" and "sibling". A more recent critique of kinship studies is its solipsistic focus on privileged, Western human relations and its promotion of normative ideals of human exceptionalism. In Critical Kinship Studies, social psychologists Elizabeth Peel and Damien Riggs argue for a move beyond this human-centered framework, opting instead to explore kinship through a "posthumanist" vantage point where anthropologists focus on the intersecting relationships of human animals, non-human animals, technologies and practices. Institutional anthropology The role of anthropology in institutions has expanded significantly since the end of the 20th century. Much of this development can be attributed to the rise in anthropologists working outside of academia and the increasing importance of globalization in both institutions and the field of anthropology. Anthropologists can be employed by institutions such as for-profit business, nonprofit organizations, and governments. For instance, cultural anthropologists are commonly employed by the United States federal government. The two types of institutions defined in the field of anthropology are total institutions and social institutions. Total institutions are places that comprehensively coordinate the actions of people within them, and examples of total institutions include prisons, convents, and hospitals. Social institutions, on the other hand, are constructs that regulate individuals' day-to-day lives, such as kinship, religion, and economics. Anthropology of institutions may analyze labor unions, businesses ranging from small enterprises to corporations, government, medical organizations, education, prisons, and financial institutions. Nongovernmental organizations have garnered particular interest in the field of institutional anthropology because they are capable of fulfilling roles previously ignored by governments, or previously realized by families or local groups, in an attempt to mitigate social problems. The types and methods of scholarship performed in the anthropology of institutions can take a number of forms. Institutional anthropologists may study the relationship between organizations or between an organization and other parts of society. Institutional anthropology may also focus on the inner workings of an institution, such as the relationships, hierarchies and cultures formed, and the ways that these elements are transmitted and maintained, transformed, or abandoned over time. Additionally, some anthropology of institutions examines the specific design of institutions and their corresponding strength. More specifically, anthropologists may analyze specific events within an institution, perform semiotic investigations, or analyze the mechanisms by which knowledge and culture are organized and dispersed. In all manifestations of institutional anthropology, participant observation is critical to understanding the intricacies of the way an institution works and the consequences of actions taken by individuals within it. 
Simultaneously, anthropology of institutions extends beyond examination of the commonplace involvement of individuals in institutions to discover how and why the organizational principles evolved in the manner that they did. Common considerations taken by anthropologists in studying institutions include the physical location at which a researcher places themselves, as important interactions often take place in private, and the fact that the members of an institution are often being examined in their workplace and may not have much idle time to discuss the details of their everyday endeavors. The ability of individuals to present the workings of an institution in a particular light or frame must additionally be taken into account when using interviews and document analysis to understand an institution, as the involvement of an anthropologist may be met with distrust when information being released to the public is not directly controlled by the institution and could potentially be damaging. See also References External links Official website of Human Relations Area Files (HRAF) based at Yale University A Basic Guide to Cross-Cultural Research from HRAF
5390
https://en.wikipedia.org/wiki/Conversion%20of%20units
Conversion of units
Conversion of units is the conversion between different units of measurement for the same quantity, typically through multiplicative conversion factors which change the measured quantity value without changing its effects. Unit conversion is often easier within the metric system or the SI than in others, due to the regular base-10 structure of all units and the prefixes that increase or decrease by 3 powers of 10 at a time. Overview The process of conversion depends on the specific situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as: The precision and accuracy of measurement and the associated uncertainty of measurement. The statistical confidence interval or tolerance interval of the initial measurement. The number of significant figures of the measurement. The intended use of the measurement including the engineering tolerances. Historical definitions of the units and their derivatives used in old measurements; e.g., international foot vs. US survey foot. Some conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the first measurement. This is sometimes called soft conversion. It does not involve changing the physical configuration of the item being measured. By contrast, a hard conversion or an adaptive conversion may not be exactly equivalent. It changes the measurement to convenient and workable numbers and units in the new system. It sometimes involves a slightly different configuration, or size substitution, of the item. Nominal values are sometimes allowed and used. Factor-label method The factor-label method, also known as the unit-factor method or the unity bracket method, is a widely used technique for unit conversions using the rules of algebra. The factor-label method is the sequential application of conversion factors expressed as fractions and arranged so that any dimensional unit appearing in both the numerator and denominator of any of the fractions can be cancelled out until only the desired set of dimensional units is obtained. For example, 10 miles per hour can be converted to metres per second by using a sequence of conversion factors as shown below: 10 mi/h × (1609.344 m / 1 mi) × (1 h / 3600 s) = 4.4704 m/s. Each conversion factor is chosen based on the relationship between one of the original units and one of the desired units (or some intermediary unit), before being re-arranged to create a factor that cancels out the original unit. For example, as "mile" is the numerator in the original fraction and 1 mile = 1609.344 m, "mile" will need to be the denominator in the conversion factor. Dividing both sides of the equation 1 mile = 1609.344 m by 1 mile yields (1609.344 m)/(1 mile) = 1, which when simplified results in the dimensionless 1. Because of the identity property of multiplication, multiplying any quantity (physical or not) by the dimensionless 1 does not change that quantity. Once this and the conversion factor for seconds per hour have been multiplied by the original fraction to cancel out the units mile and hour, 10 miles per hour converts to 4.4704 metres per second.
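The chain of unity factors in this example can also be checked numerically. The following Python sketch is not part of the original article; it simply applies the two conversion factors in sequence, assuming only the exact definitions 1 mile = 1609.344 m and 1 hour = 3600 s.

```python
# Factor-label sketch (illustrative): convert 10 miles per hour to
# metres per second by multiplying by factors that each equal the
# dimensionless 1, so only the units change, not the quantity.

METRES_PER_MILE = 1609.344  # exact definition: 1 mile = 1609.344 m
SECONDS_PER_HOUR = 3600     # exact definition: 1 hour = 3600 s

def miles_per_hour_to_metres_per_second(speed_mph: float) -> float:
    # (1609.344 m / 1 mile) cancels "mile"; (1 h / 3600 s) cancels "hour"
    return speed_mph * METRES_PER_MILE / SECONDS_PER_HOUR

print(round(miles_per_hour_to_metres_per_second(10), 6))  # 4.4704, as stated above
```

Each multiplication mirrors one of the unity fractions in the factor-label chain, which is why the result agrees with the 4.4704 m/s quoted in the text.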
As a more complex example, the concentration of nitrogen oxides (NOx) in the flue gas from an industrial furnace can be converted to a mass flow rate expressed in grams per hour (g/h) of NOx by using the following information as shown below: NOx concentration = 10 parts per million by volume = 10 ppmv = 10 volumes/106 volumes NOx molar mass = 46 kg/kmol = 46 g/mol Flow rate of flue gas = 20 cubic metres per minute = 20 m3/min The flue gas exits the furnace at 0 °C temperature and 101.325 kPa absolute pressure. The molar volume of a gas at 0 °C temperature and 101.325 kPa is 22.414 m3/kmol. The conversion can then be written as a chain of factors: NOx mass flow = (10/106) × (20 m3/min) × (60 min/h) × (1 kmol/22.414 m3) × (46 kg/kmol) × (1000 g/kg). After canceling out any dimensional units that appear both in the numerators and denominators of the fractions in the above equation, the NOx concentration of 10 ppmv converts to a mass flow rate of 24.63 grams per hour. Checking equations that involve dimensions The factor-label method can also be used on any mathematical equation to check whether or not the dimensional units on the left hand side of the equation are the same as the dimensional units on the right hand side of the equation. Having the same units on both sides of an equation does not ensure that the equation is correct, but having different units on the two sides (when expressed in terms of base units) of an equation implies that the equation is wrong. For example, check the universal gas law equation, PV = nRT, when: the pressure P is in pascals (Pa) the volume V is in cubic metres (m3) the amount of substance n is in moles (mol) the universal gas constant R is 8.3145 Pa⋅m3/(mol⋅K) the temperature T is in kelvins (K) As can be seen, when the dimensional units appearing in the numerator and denominator of the equation's right hand side are cancelled out, both sides of the equation have the same dimensional units. Dimensional analysis can be used as a tool to construct equations that relate non-associated physico-chemical properties. The equations may reveal hitherto unknown or overlooked properties of matter, in the form of left-over dimensions – dimensional adjusters – that can then be assigned physical significance. It is important to point out that such 'mathematical manipulation' is neither without prior precedent, nor without considerable scientific significance. Indeed, the Planck constant, a fundamental physical constant, was 'discovered' as a purely mathematical abstraction or representation that built on the Rayleigh–Jeans law for preventing the ultraviolet catastrophe. It was assigned, and ascended to, its quantum physical significance either in tandem with or after this mathematical dimensional adjustment – not earlier. Limitations The factor-label method can convert only unit quantities for which the units are in a linear relationship intersecting at 0 (a ratio scale in Stevens's typology). Most units fit this paradigm. An example for which it cannot be used is the conversion between degrees Celsius and kelvins (or degrees Fahrenheit). Between degrees Celsius and kelvins, there is a constant difference rather than a constant ratio, while between degrees Celsius and degrees Fahrenheit there is neither a constant difference nor a constant ratio. There is, however, an affine transform (y = ax + b), rather than a linear transform (y = ax), between them. For example, the freezing point of water is 0 °C and 32 °F, and a 5 °C change is the same as a 9 °F change, as the short sketch below illustrates.
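As a minimal illustration of this limitation (a sketch, not part of the original article), the functions below convert between degrees Celsius and degrees Fahrenheit using the affine relationship just described: a 32 °F offset plus a 9/5 scale factor, which no single multiplicative conversion factor can reproduce. The formulas spelled out in the next paragraph are exactly these transforms.

```python
# Celsius <-> Fahrenheit is an affine transform (scale and offset),
# so it falls outside the pure factor-label (ratio-scale) method.

def celsius_to_fahrenheit(t_celsius: float) -> float:
    # scale by 9/5, then add the 32-degree offset
    return t_celsius * 9 / 5 + 32

def fahrenheit_to_celsius(t_fahrenheit: float) -> float:
    # remove the 32-degree offset first, then rescale by 5/9
    return (t_fahrenheit - 32) * 5 / 9

print(celsius_to_fahrenheit(0))    # 32.0  (freezing point of water)
print(celsius_to_fahrenheit(100))  # 212.0 (boiling point of water)
print(fahrenheit_to_celsius(212))  # 100.0
```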
Thus, to convert from units of Fahrenheit to units of Celsius, one subtracts 32 °F (the offset from the point of reference), divides by 9 °F and multiplies by 5 °C (scales by the ratio of units), and adds 0 °C (the offset from the point of reference). Reversing this yields the formula for obtaining a quantity in units of Celsius from units of Fahrenheit; one could have started with the equivalence between 100 °C and 212 °F, though this would yield the same formula at the end. Hence, to convert the numerical quantity value of a temperature T[F] in degrees Fahrenheit to a numerical quantity value T[C] in degrees Celsius, this formula may be used: T[C] = (T[F] − 32) × 5/9. To convert T[C] in degrees Celsius to T[F] in degrees Fahrenheit, this formula may be used: T[F] = (T[C] × 9/5) + 32. Example Starting with: replace the original unit with its meaning in terms of the desired unit , e.g. if , then: Now and are both numerical values, so just calculate their product. Or, which is just mathematically the same thing, multiply Z by unity, the product is still Z: For example, you have an expression for a physical value Z involving the unit feet per second () and you want it in terms of the unit miles per hour (): Or as an example using the metric system, you have a value of fuel economy in the unit litres per 100 kilometres and you want it in terms of the unit microlitres per metre: Calculation involving non-SI Units In the cases where non-SI units are used, the numerical calculation of a formula can be done by first working out the pre-factor, and then plug in the numerical values of the given/known quantities. For example, in the study of Bose–Einstein condensate, atomic mass is usually given in daltons, instead of kilograms, and chemical potential is often given in the Boltzmann constant times nanokelvin. The condensate's healing length is given by: For a 23Na condensate with chemical potential of (the Boltzmann constant times) 128 nK, the calculation of healing length (in micrometres) can be done in two steps: Calculate the pre-factor Assume that this gives which is our pre-factor. Calculate the numbers Now, make use of the fact that . With , . This method is especially useful for programming and/or making a worksheet, where input quantities are taking multiple different values; For example, with the pre-factor calculated above, it is very easy to see that the healing length of 174Yb with chemical potential 20.3 nK is . Software tools There are many conversion tools. They are found in the function libraries of applications such as spreadsheets databases, in calculators, and in macro packages and plugins for many other applications such as the mathematical, scientific and technical applications. There are many standalone applications that offer the thousands of the various units with conversions. For example, the free software movement offers a command line utility GNU units for Linux and Windows. The Unified Code for Units of Measure is also a popular option. See also Accuracy and precision Conversion of units of temperature Dimensional analysis English units False precision Imperial units International System of Units Mesures usuelles Metric prefix (e.g. "kilo-" prefix) Metric system Natural units Orders of Magnitude Rounding Significant figures United States customary units Unit of length Units of measurement Notes and references Notes External links   NIST Guide to SI Units Many conversion factors listed. 
The Unified Code for Units of Measure Units, Symbols, and Conversions XML Dictionary "Instruction sur les poids et mesures républicaines:déduites de la grandeur de la terre,uniformes pour toute la République,et sur les calculs relatifs à leur division décimale" Unicalc Live web calculator doing units conversion by dimensional analysis Math Skills Review U.S. EPA tutorial A Discussion of Units Short Guide to Unit Conversions Canceling Units Lesson Chapter 11: Behavior of Gases Chemistry: Concepts and Applications, Denton independent school District Air Dispersion Modeling Conversions and Formulas www.gnu.org/software/units free program, very practical Metrication Conversion of units of measurement
5391
https://en.wikipedia.org/wiki/City
City
A city is a human settlement of a notable size. It can be defined as a permanent and densely settled place with administratively defined boundaries whose members work primarily on non-agricultural tasks. Cities generally have extensive systems for housing, transportation, sanitation, utilities, land use, production of goods, and communication. Their density facilitates interaction between people, government organizations, and businesses, sometimes benefiting different parties in the process, such as improving the efficiency of goods and service distribution. Historically, city dwellers have been a small proportion of humanity overall, but following two centuries of unprecedented and rapid urbanization, more than half of the world population now lives in cities, which has had profound consequences for global sustainability. Present-day cities usually form the core of larger metropolitan areas and urban areas—creating numerous commuters traveling toward city centres for employment, entertainment, and education. However, in a world of intensifying globalization, all cities are to varying degrees also connected globally beyond these regions. This increased influence means that cities also have significant influences on global issues, such as sustainable development, climate change, and global health. Because of these major influences on global issues, the international community has prioritized investment in sustainable cities through Sustainable Development Goal 11. Due to the efficiency of transportation and the smaller land consumption, dense cities hold the potential to have a smaller ecological footprint per inhabitant than more sparsely populated areas. Therefore, compact cities are often referred to as a crucial element in fighting climate change. However, this concentration can also have significant negative consequences, such as forming urban heat islands, concentrating pollution, and stressing water supplies and other resources. Other important traits of cities besides population include the capital status and relative continued occupation of the city. For example, country capitals such as Athens,Beijing, Jakarta, Kuala Lumpur, London, Manila, Mexico City, Moscow, Nairobi, New Delhi, Paris, Rome, Seoul, Singapore, Tokyo, and Washington, D.C. reflect the identity and apex of their respective nations. Some historic capitals, such as Kyoto, Yogyakarta, and Xi'an, maintain their reflection of cultural identity even without modern capital status. Religious holy sites offer another example of capital status within a religion; examples include Jerusalem, Mecca, Varanasi, Ayodhya, Haridwar, and Prayagraj. Meaning A city can be distinguished from other human settlements by its relatively great size, but also by its functions and its special symbolic status, which may be conferred by a central authority. The term can also refer either to the physical streets and buildings of the city or to the collection of people who dwell there and can be used in a general sense to mean urban rather than rural territory. National censuses use a variety of definitions – invoking factors such as population, population density, number of dwellings, economic function, and infrastructure – to classify populations as urban. Typical working definitions for small-city populations start at around 100,000 people. Common population definitions for an urban area (city or town) range between 1,500 and 50,000 people, with most U.S. states using a minimum between 1,500 and 5,000 inhabitants. 
Some jurisdictions set no such minima. In the United Kingdom, city status is awarded by the Crown and then remains permanent. (Historically, the qualifying factor was the presence of a cathedral, resulting in some very small cities such as Wells, with a population of 12,000 , and St Davids, with a population of 1,841 .) According to the "functional definition", a city is not distinguished by size alone, but also by the role it plays within a larger political context. Cities serve as administrative, commercial, religious, and cultural hubs for their larger surrounding areas. The presence of a literate elite is often associated with cities because of the cultural diversities present in a city. A typical city has professional administrators, regulations, and some form of taxation (food and other necessities or means to trade for them) to support the government workers. (This arrangement contrasts with the more typically horizontal relationships in a tribe or village accomplishing common goals through informal agreements between neighbors, or the leadership of a chief.) The governments may be based on heredity, religion, military power, work systems such as canal-building, food distribution, land-ownership, agriculture, commerce, manufacturing, finance, or a combination of these. Societies that live in cities are often called civilizations. The degree of urbanization is a modern metric to help define what comprises a city: "a population of at least 50,000 inhabitants in contiguous dense grid cells (>1,500 inhabitants per square kilometer)". This metric was "devised over years by the European Commission, OECD, World Bank and others, and endorsed in March [2021] by the United Nations ... largely for the purpose of international statistical comparison". Etymology The word city and the related civilization come from the Latin root civitas, originally meaning 'citizenship' or 'community member' and eventually coming to correspond with urbs, meaning 'city' in a more physical sense. The Roman civitas was closely linked with the Greek polis—another common root appearing in English words such as metropolis. In toponymic terminology, names of individual cities and towns are called astionyms (from Ancient Greek ἄστυ 'city or town' and ὄνομα 'name'). Geography Urban geography deals both with cities in their larger context and with their internal structure. Cities are estimated to cover about 3% of the land surface of the Earth. Site Town siting has varied through history according to natural, technological, economic, and military contexts. Access to water has long been a major factor in city placement and growth, and despite exceptions enabled by the advent of rail transport in the nineteenth century, through the present most of the world's urban population lives near the coast or on a river. Urban areas as a rule cannot produce their own food and therefore must develop some relationship with a hinterland that sustains them. Only in special cases such as mining towns which play a vital role in long-distance trade, are cities disconnected from the countryside which feeds them. Thus, centrality within a productive region influences siting, as economic forces would, in theory, favor the creation of marketplaces in optimal mutually reachable locations. Center The vast majority of cities have a central area containing buildings with special economic, political, and religious significance. Archaeologists refer to this area by the Greek term temenos or if fortified as a citadel. 
These spaces historically reflect and amplify the city's centrality and importance to its wider sphere of influence. Today cities have a city center or downtown, sometimes coincident with a central business district. Public space Cities typically have public spaces where anyone can go. These include privately owned spaces open to the public as well as forms of public land such as public domain and the commons. Western philosophy since the time of the Greek agora has considered physical public space as the substrate of the symbolic public sphere. Public art adorns (or disfigures) public spaces. Parks and other natural sites within cities provide residents with relief from the hardness and regularity of typical built environments. Urban green spaces are another component of public space that provides the benefit of mitigating the urban heat island effect, especially in cities that are in warmer climates. These spaces prevent carbon imbalances, extreme habitat losses, electricity and water consumption, and human health risks. Internal structure The urban structure generally follows one or more basic patterns: geomorphic, radial, concentric, rectilinear, and curvilinear. The physical environment generally constrains the form in which a city is built. If located on a mountainside, urban structures may rely on terraces and winding roads. It may be adapted to its means of subsistence (e.g. agriculture or fishing). And it may be set up for optimal defense given the surrounding landscape. Beyond these "geomorphic" features, cities can develop internal patterns, due to natural growth or to city planning. In a radial structure, main roads converge on a central point. This form could evolve from successive growth over a long time, with concentric traces of town walls and citadels marking older city boundaries. In more recent history, such forms were supplemented by ring roads moving traffic around the outskirts of a town. Dutch cities such as Amsterdam and Haarlem are structured as a central square surrounded by concentric canals marking every expansion. In cities such as Moscow, this pattern is still clearly visible. A system of rectilinear city streets and land plots, known as the grid plan, has been used for millennia in Asia, Europe, and the Americas. The Indus Valley civilization built Mohenjo-Daro, Harappa, and other cities on a grid pattern, using ancient principles described by Kautilya, and aligned with the compass points. The ancient Greek city of Priene exemplifies a grid plan with specialized districts used across the Hellenistic Mediterranean. Urban areas The urban-type settlement extends far beyond the traditional boundaries of the city proper in a form of development sometimes described critically as urban sprawl. Decentralization and dispersal of city functions (commercial, industrial, residential, cultural, political) has transformed the very meaning of the term and has challenged geographers seeking to classify territories according to an urban-rural binary. Metropolitan areas include suburbs and exurbs organized around the needs of commuters, and sometimes edge cities characterized by a degree of economic and political independence. (In the US these are grouped into metropolitan statistical areas for purposes of demography and marketing.) Some cities are now part of a continuous urban landscape called urban agglomeration, conurbation, or megalopolis (exemplified by the BosWash corridor of the Northeastern United States.) 
History The emergence of cities from proto-urban settlements, such as Çatalhöyük, is a non-linear development that demonstrates the varied experiences of early urbanization. The cities of Jericho, Aleppo, Faiyum, Yerevan, Athens, Matera, Damascus, and Argos are among those laying claim to the longest continual inhabitation. Cities, characterized by population density, symbolic function, and urban planning, have existed for thousands of years. In the conventional view, civilization and the city both followed the development of agriculture, which enabled the production of surplus food and thus a social division of labor (with concomitant social stratification) and trade. Early cities often featured granaries, sometimes within a temple. A minority viewpoint considers that cities may have arisen without agriculture, due to alternative means of subsistence (fishing), to use as communal seasonal shelters, to their value as bases for defensive and offensive military organization, or to their inherent economic function. Cities played a crucial role in the establishment of political power over an area, and ancient leaders such as Alexander the Great founded and created them with zeal. Ancient times Jericho and Çatalhöyük, dated to the eighth millennium BC, are among the earliest proto-cities known to archaeologists. However, the Mesopotamian city of Uruk from the mid-fourth millennium BC (ancient Iraq) is considered by most archaeologists to be the first true city, innovating many characteristics for cities to follow, with its name attributed to the Uruk period. In the fourth and third millennia BC, complex civilizations flourished in the river valleys of Mesopotamia, India, China, and Egypt. Excavations in these areas have found the ruins of cities geared variously towards trade, politics, or religion. Some had large, dense populations, but others carried out urban activities in the realms of politics or religion without having large associated populations. Among the early Old World cities, Mohenjo-Daro of the Indus Valley civilization in present-day Pakistan, existing from about 2600 BC, was one of the largest, with a population of 50,000 or more and a sophisticated sanitation system. China's planned cities were constructed according to sacred principles to act as celestial microcosms. The Ancient Egyptian cities known physically by archaeologists are not extensive. They include (known by their Arabic names) El Lahun, a workers' town associated with the pyramid of Senusret II, and the religious city Amarna built by Akhenaten and abandoned. These sites appear planned in a highly regimented and stratified fashion, with a minimalistic grid of rooms for the workers and increasingly more elaborate housing available for higher classes. In Mesopotamia, the civilization of Sumer, followed by Assyria and Babylon, gave rise to numerous cities, governed by kings, and fostered multiple languages written in cuneiform. The Phoenician trading empire, flourishing around the turn of the first millennium BC, encompassed numerous cities extending from Tyre, Sidon, and Byblos to Carthage and Cádiz. In the following centuries, independent city-states of Greece, especially Athens, developed the polis, an association of male landowning citizens who collectively constituted the city. The agora, meaning "gathering place" or "assembly", was the center of the athletic, artistic, spiritual, and political life of the polis. Rome was the first city that surpassed one million inhabitants. 
Under the authority of its empire, Rome transformed and founded many cities (), and with them brought its principles of urban architecture, design, and society. In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization, Chavin and Moche cultures, followed by major cities in the Huari, Chimu, and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th and 18th centuries BC. Mesoamerica saw the rise of early urbanism in several cultural regions, beginning with the Olmec and spreading to the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec, Andean civilizations, Mayan, Mississippians, and Pueblo peoples drew on these earlier urban traditions. Many of their ancient cities continue to be inhabited, including major metropolitan cities such as Mexico City, in the same location as Tenochtitlan; while ancient continuously inhabited Pueblos are near modern urban areas in New Mexico, such as Acoma Pueblo near the Albuquerque metropolitan area and Taos Pueblo near Taos; while others like Lima are located nearby ancient Peruvian sites such as Pachacamac. Jenné-Jeno, located in present-day Mali and dating to the third century BC, lacked monumental architecture and a distinctive elite social class—but nevertheless had specialized production and relations with a hinterland. Pre-Arabic trade contacts probably existed between Jenné-Jeno and North Africa. Other early urban centers in sub-Saharan Africa, dated to around 500 AD, include Awdaghust, Kumbi-Saleh the ancient capital of Ghana, and Maranda a center located on a trade route between Egypt and Gao. Middle Ages In the remnants of the Roman Empire, cities of late antiquity gained independence but soon lost population and importance. The locus of power in the West shifted to Constantinople and to the ascendant Islamic civilization with its major cities Baghdad, Cairo, and Córdoba. From the 9th through the end of the 12th century, Constantinople, the capital of the Eastern Roman Empire, was the largest and wealthiest city in Europe, with a population approaching 1 million. The Ottoman Empire gradually gained control over many cities in the Mediterranean area, including Constantinople in 1453. In the Holy Roman Empire, beginning in the 12th century, free imperial cities such as Nuremberg, Strasbourg, Frankfurt, Basel, Zurich, and Nijmegen became a privileged elite among towns having won self-governance from their local lord or having been granted self-governance by the emperor and being placed under his immediate protection. By 1480, these cities, as far as still part of the empire, became part of the Imperial Estates governing the empire with the emperor through the Imperial Diet. By the 13th and 14th centuries, some cities become powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy, medieval communes developed into city-states including the Republic of Venice and the Republic of Genoa. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the Dutch commercial cities of Ghent, Ypres, and Amsterdam. 
Similar phenomena existed elsewhere, as in the case of Sakai, which enjoyed considerable autonomy in late medieval Japan. In the first millennium AD, the Khmer capital of Angkor in Cambodia grew into the most extensive preindustrial settlement in the world by area, covering over 1,000 km2 and possibly supporting up to one million people. Early modern In the West, nation-states became the dominant unit of political organization following the Peace of Westphalia in the seventeenth century. Western Europe's larger capitals (London and Paris) benefited from the growth of commerce following the emergence of an Atlantic trade. However, most towns remained small. During the Spanish colonization of the Americas, the old Roman city concept was extensively used. Cities were founded in the middle of the newly conquered territories and were bound to several laws regarding administration, finances, and urbanism. Industrial age The growth of the modern industry from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. England led the way as London became the capital of a world empire and cities across the country grew in locations strategic for manufacturing. In the United States from 1860 to 1910, the introduction of railroads reduced transportation costs, and large manufacturing centers began to emerge, fueling migration from rural to city areas. Some industrialized cities were confronted with health challenges associated with overcrowding, occupational hazards of industry, contaminated water and air, poor sanitation, and communicable diseases such as typhoid and cholera. Factories and slums emerged as regular features of the urban landscape. Post-industrial age In the second half of the 20th century, deindustrialization (or "economic restructuring") in the West led to poverty, homelessness, and urban decay in formerly prosperous cities. America's "Steel Belt" became a "Rust Belt" and cities such as Detroit, Michigan, and Gary, Indiana began to shrink, contrary to the global trend of massive urban expansion. Such cities have shifted with varying success into the service economy and public-private partnerships, with concomitant gentrification, uneven revitalization efforts, and selective cultural development. Under the Great Leap Forward and subsequent five-year plans continuing today, China has undergone concomitant urbanization and industrialization and become the world's leading manufacturer. Amidst these economic changes, high technology and instantaneous telecommunication enable select cities to become centers of the knowledge economy. A new smart city paradigm, supported by institutions such as the RAND Corporation and IBM, is bringing computerized surveillance, data analysis, and governance to bear on cities and city dwellers. Some companies are building brand-new master-planned cities from scratch on greenfield sites. Urbanization Urbanization is the process of migration from rural to urban areas, driven by various political, economic, and cultural factors. Until the 18th century, an equilibrium existed between the rural agricultural population and towns featuring markets and small-scale manufacturing. With the agricultural and industrial revolutions urban population began its unprecedented growth, both through migration and demographic expansion. 
In England, the proportion of the population living in cities jumped from 17% in 1801 to 72% in 1891. In 1900, 15% of the world's population lived in cities. The cultural appeal of cities also plays a role in attracting residents. Urbanization rapidly spread across Europe and the Americas and since the 1950s has taken hold in Asia and Africa as well. The Population Division of the United Nations Department of Economic and Social Affairs reported in 2014 that for the first time, more than half of the world population lives in cities. Latin America is the most urban continent, with four-fifths of its population living in cities, including one-fifth of the population said to live in shantytowns (favelas, poblaciones callampas, etc.). Batam, Indonesia, Mogadishu, Somalia, Xiamen, China, and Niamey, Niger, are considered among the world's fastest-growing cities, with annual growth rates of 5–8%. In general, the more developed countries of the "Global North" remain more urbanized than the less developed countries of the "Global South"—but the difference continues to shrink because urbanization is happening faster in the latter group. Asia is home to by far the greatest absolute number of city-dwellers: over two billion and counting. The UN predicts an additional 2.5 billion city dwellers (and 300 million fewer country dwellers) worldwide by 2050, with 90% of urban population expansion occurring in Asia and Africa. Megacities, cities with populations in the multi-millions, have proliferated into the dozens, arising especially in Asia, Africa, and Latin America. Economic globalization fuels the growth of these cities, as new torrents of foreign capital arrange for rapid industrialization, as well as the relocation of major businesses from Europe and North America, attracting immigrants from near and far. A deep gulf divides the rich and poor in these cities, which usually contain a super-wealthy elite living in gated communities and large masses of people living in substandard housing with inadequate infrastructure and otherwise poor conditions. Cities around the world have expanded physically as they grow in population, with increases in their surface extent, with the creation of high-rise buildings for residential and commercial use, and with development underground. Urbanization can create rapid demand for water resources management, as formerly good sources of freshwater become overused and polluted, and the volume of sewage begins to exceed manageable levels. Government The local government of cities takes different forms including prominently the municipality (especially in England, in the United States, India, and other British colonies; legally, the municipal corporation; municipio in Spain and Portugal, and, along with municipalidad, in most former parts of the Spanish and Portuguese empires) and the commune (in France and Chile; or comune in Italy). The chief official of the city has the title of mayor. Whatever their true degree of political authority, the mayor typically acts as the figurehead or personification of their city. Legal conflicts and issues arise more frequently in cities than elsewhere due to the bare fact of their greater density. Modern city governments thoroughly regulate everyday life in many dimensions, including public and personal health, transport, burial, resource use and extraction, recreation, and the nature and use of buildings. Technologies, techniques, and laws governing these areas—developed in cities—have become ubiquitous in many areas. 
Municipal officials may be appointed from a higher level of government or elected locally. Municipal services Cities typically provide municipal services such as education, through school systems; policing, through police departments; and firefighting, through fire departments; as well as the city's basic infrastructure. These are provided more or less routinely, in a more or less equal fashion. Responsibility for administration usually falls on the city government, but some services may be operated by a higher level of government, while others may be privately run. Armies may assume responsibility for policing cities in states of domestic turmoil such as America's King assassination riots of 1968. Finance The traditional basis for municipal finance is local property tax levied on real estate within the city. Local government can also collect revenue for services, or by leasing land that it owns. However, financing municipal services, as well as urban renewal and other development projects, is a perennial problem, which cities address through appeals to higher governments, arrangements with the private sector, and techniques such as privatization (selling services into the private sector), corporatization (formation of quasi-private municipally-owned corporations), and financialization (packaging city assets into tradeable financial public contracts and other related rights). This situation has become acute in deindustrialized cities and in cases where businesses and wealthier citizens have moved outside of city limits and therefore beyond the reach of taxation. Cities in search of ready cash increasingly resort to the municipal bond, essentially a loan with interest and a repayment date. City governments have also begun to use tax increment financing, in which a development project is financed by loans based on future tax revenues which it is expected to yield. Under these circumstances, creditors and consequently city governments place a high importance on city credit ratings. Governance Governance includes government but refers to a wider domain of social control functions implemented by many actors including non-governmental organizations. The impact of globalization and the role of multinational corporations in local governments worldwide, has led to a shift in perspective on urban governance, away from the "urban regime theory" in which a coalition of local interests functionally govern, toward a theory of outside economic control, widely associated in academics with the philosophy of neoliberalism. In the neoliberal model of governance, public utilities are privatized, the industry is deregulated, and corporations gain the status of governing actors—as indicated by the power they wield in public-private partnerships and over business improvement districts, and in the expectation of self-regulation through corporate social responsibility. The biggest investors and real estate developers act as the city's de facto urban planners. The related concept of good governance places more emphasis on the state, with the purpose of assessing urban governments for their suitability for development assistance. The concepts of governance and good governance are especially invoked in emergent megacities, where international organizations consider existing governments inadequate for their large populations. 
Urban planning Urban planning, the application of forethought to city design, involves optimizing land use, transportation, utilities, and other basic systems, in order to achieve certain objectives. Urban planners and scholars have proposed overlapping theories as ideals for how plans should be formed. Planning tools, beyond the original design of the city itself, include public capital investment in infrastructure and land-use controls such as zoning. The continuous process of comprehensive planning involves identifying general objectives as well as collecting data to evaluate progress and inform future decisions. Government is legally the final authority on planning but in practice, the process involves both public and private elements. The legal principle of eminent domain is used by the government to divest citizens of their property in cases where its use is required for a project. Planning often involves tradeoffs—decisions in which some stand to gain and some to lose—and thus is closely connected to the prevailing political situation. The history of urban planning dates to some of the earliest known cities, especially in the Indus Valley and Mesoamerican civilizations, which built their cities on grids and apparently zoned different areas for different purposes. The effects of planning, ubiquitous in today's world, can be seen most clearly in the layout of planned communities, fully designed prior to construction, often with consideration for interlocking physical, economic, and cultural systems. Society Social structure Urban society is typically stratified. Spatially, cities are formally or informally segregated along ethnic, economic, and racial lines. People living relatively close together may live, work, and play in separate areas, and associate with different people, forming ethnic or lifestyle enclaves or, in areas of concentrated poverty, ghettoes. While in the US and elsewhere poverty became associated with the inner city, in France it has become associated with the banlieues, areas of urban development that surround the city proper. Meanwhile, across Europe and North America, the racially white majority is empirically the most segregated group. Suburbs in the West, and, increasingly, gated communities and other forms of "privatopia" around the world, allow local elites to self-segregate into secure and exclusive neighborhoods. Landless urban workers, contrasted with peasants and known as the proletariat, form a growing stratum of society in the age of urbanization. In Marxist doctrine, the proletariat will inevitably revolt against the bourgeoisie as their ranks swell with disenfranchised and disaffected people lacking all stake in the status quo. The global urban proletariat of today, however, generally lacks the status of factory workers which in the nineteenth century provided access to the means of production. Economics Historically, cities rely on rural areas for intensive farming to yield surplus crops, in exchange for which they provide money, political administration, manufactured goods, and culture. Urban economics tends to analyze larger agglomerations, stretching beyond city limits, in order to reach a more complete understanding of the local labor market. As hubs of trade, cities have long been home to retail commerce and consumption through the interface of shopping. 
In the 20th century, department stores using new techniques of advertising, public relations, decoration, and design, transformed urban shopping areas into fantasy worlds encouraging self-expression and escape through consumerism. In general, the density of cities expedites commerce and facilitates knowledge spillovers, helping people and firms exchange information and generate new ideas. A thicker labor market allows for better skill matching between firms and individuals. Population density enables also sharing of common infrastructure and production facilities; however, in very dense cities, increased crowding and waiting times may lead to some negative effects. Although manufacturing fueled the growth of cities, many now rely on a tertiary or service economy. The services in question range from tourism, hospitality, entertainment, and housekeeping to grey-collar work in law, financial consulting, and administration. According to a scientific model of cities by Professor Geoffrey West, with the doubling of a city's size, salaries per capita will generally increase by 15%. Culture and communications Cities are typically hubs for education and the arts, supporting universities, museums, temples, and other cultural institutions. They feature impressive displays of architecture ranging from small to enormous and ornate to brutal; skyscrapers, providing thousands of offices or homes within a small footprint, and visible from miles away, have become iconic urban features. Cultural elites tend to live in cities, bound together by shared cultural capital, and themselves play some role in governance. By virtue of their status as centers of culture and literacy, cities can be described as the locus of civilization, human history, and social change. Density makes for effective mass communication and transmission of news, through heralds, printed proclamations, newspapers, and digital media. These communication networks, though still using cities as hubs, penetrate extensively into all populated areas. In the age of rapid communication and transportation, commentators have described urban culture as nearly ubiquitous or as no longer meaningful. Today, a city's promotion of its cultural activities dovetails with place branding and city marketing, public diplomacy techniques used to inform development strategy; attract businesses, investors, residents, and tourists; and to create shared identity and sense of place within the metropolitan area. Physical inscriptions, plaques, and monuments on display physically transmit a historical context for urban places. Some cities, such as Jerusalem, Mecca, and Rome have indelible religious status and for hundreds of years have attracted pilgrims. Patriotic tourists visit Agra to see the Taj Mahal, or New York City to visit the World Trade Center. Elvis lovers visit Memphis to pay their respects at Graceland. Place brands (which include place satisfaction and place loyalty) have great economic value (comparable to the value of commodity brands) because of their influence on the decision-making process of people thinking about doing business in—"purchasing" (the brand of)—a city. Bread and circuses among other forms of cultural appeal, attract and entertain the masses. Sports also play a major role in city branding and local identity formation. Cities go to considerable lengths in competing to host the Olympic Games, which bring global attention and tourism. Paris, a city known for its cultural history, is the site of the next Olympics in the summer of 2024. 
Warfare Cities play a crucial strategic role in warfare due to their economic, demographic, symbolic, and political centrality. For the same reasons, they are targets in asymmetric warfare. Many cities throughout history were founded under military auspices, a great many have incorporated fortifications, and military principles continue to influence urban design. Indeed, war may have served as the social rationale and economic basis for the very earliest cities. Powers engaged in geopolitical conflict have established fortified settlements as part of military strategies, as in the case of garrison towns, America's Strategic Hamlet Program during the Vietnam War, and Israeli settlements in Palestine. While occupying the Philippines, the US Army ordered local people to concentrate in cities and towns, in order to isolate committed insurgents and battle freely against them in the countryside. During World War II, national governments on occasion declared certain cities open, effectively surrendering them to an advancing enemy in order to avoid damage and bloodshed. Urban warfare proved decisive, however, in the Battle of Stalingrad, where Soviet forces repulsed German occupiers, with extreme casualties and destruction. In an era of low-intensity conflict and rapid urbanization, cities have become sites of long-term conflict waged both by foreign occupiers and by local governments against insurgency. Such warfare, known as counterinsurgency, involves techniques of surveillance and psychological warfare as well as close combat, and functionally extends modern urban crime prevention, which already uses concepts such as defensible space. Although capture is the more common objective, warfare has in some cases spelled complete destruction for a city. Mesopotamian tablets and ruins attest to such destruction, as does the Latin motto Carthago delenda est. Since the atomic bombings of Hiroshima and Nagasaki and throughout the Cold War, nuclear strategists continued to contemplate the use of "counter-value" targeting: crippling an enemy by annihilating its valuable cities, rather than aiming primarily at its military forces. Climate change Infrastructure Urban infrastructure involves various physical networks and spaces necessary for transportation, water use, energy, recreation, and public functions. Infrastructure carries a high initial cost in fixed capital but lower marginal costs and thus positive economies of scale. Because of the higher barriers to entry, these networks have been classified as natural monopolies, meaning that economic logic favors control of each network by a single organization, public or private. Infrastructure in general plays a vital role in a city's capacity for economic activity and expansion, underpinning the very survival of the city's inhabitants, as well as technological, commercial, industrial, and social activities. Structurally, many infrastructure systems take the form of networks with redundant links and multiple pathways, so that the system as a whole continue to operate even if parts of it fail. The particulars of a city's infrastructure systems have historical path dependence because new development must build from what exists already. Megaprojects such as the construction of airports, power plants, and railways require large upfront investments and thus tend to require funding from the national government or the private sector. Privatization may also extend to all levels of infrastructure construction and maintenance. 
Urban infrastructure ideally serves all residents equally but in practice may prove uneven—with, in some cities, clear first-class and second-class alternatives. Utilities Public utilities (literally, useful things with general availability) include basic and essential infrastructure networks, chiefly concerned with the supply of water, electricity, and telecommunications capability to the populace. Sanitation, necessary for good health in crowded conditions, requires water supply and waste management as well as individual hygiene. Urban water systems include principally a water supply network and a network (sewerage system) for sewage and stormwater. Historically, either local governments or private companies have administered urban water supply, with a tendency toward government water supply in the 20th century and a tendency toward private operation at the turn of the twenty-first. The market for private water services is dominated by two French companies, Veolia Water (formerly Vivendi) and Engie (formerly Suez), said to hold 70% of all water contracts worldwide. Modern urban life relies heavily on the energy transmitted through electricity for the operation of electric machines (from household appliances to industrial machines to now-ubiquitous electronic systems used in communications, business, and government) and for traffic lights, street lights, and indoor lighting. Cities rely to a lesser extent on hydrocarbon fuels such as gasoline and natural gas for transportation, heating, and cooking. Telecommunications infrastructure such as telephone lines and coaxial cables also traverse cities, forming dense networks for mass and point-to-point communications. Transportation Because cities rely on specialization and an economic system based on wage labor, their inhabitants must have the ability to regularly travel between home, work, commerce, and entertainment. City dwellers travel by foot or by wheel on roads and walkways, or use special rapid transit systems based on underground, overground, and elevated rail. Cities also rely on long-distance transportation (truck, rail, and airplane) for economic connections with other cities and rural areas. City streets historically were the domain of horses and their riders and pedestrians, who only sometimes had sidewalks and special walking areas reserved for them. In the West, bicycles or (velocipedes), efficient human-powered machines for short- and medium-distance travel, enjoyed a period of popularity at the beginning of the twentieth century before the rise of automobiles. Soon after, they gained a more lasting foothold in Asian and African cities under European influence. In Western cities, industrializing, expanding, and electrifying public transit systems, and especially streetcars enabled urban expansion as new residential neighborhoods sprung up along transit lines and workers rode to and from work downtown. Since the mid-20th century, cities have relied heavily on motor vehicle transportation, with major implications for their layout, environment, and aesthetics. (This transformation occurred most dramatically in the US—where corporate and governmental policies favored automobile transport systems—and to a lesser extent in Europe.) The rise of personal cars accompanied the expansion of urban economic areas into much larger metropolises, subsequently creating ubiquitous traffic issues with the accompanying construction of new highways, wider streets, and alternative walkways for pedestrians. 
However, severe traffic jams still occur regularly in cities around the world, as private car ownership and urbanization continue to increase, overwhelming existing urban street networks. The urban bus system, the world's most common form of public transport, uses a network of scheduled routes to move people through the city, alongside cars, on the roads. The economic function itself also became more decentralized as concentration became impractical and employers relocated to more car-friendly locations (including edge cities). Some cities have introduced bus rapid transit systems which include exclusive bus lanes and other methods for prioritizing bus traffic over private cars. Many big American cities still operate conventional public transit by rail, as exemplified by the ever-popular New York City Subway system. Rapid transit is widely used in Europe and has increased in Latin America and Asia. Walking and cycling ("non-motorized transport") enjoy increasing favor (more pedestrian zones and bike lanes) in American and Asian urban transportation planning, under the influence of such trends as the Healthy Cities movement, the drive for sustainable development, and the idea of a carfree city. Techniques such as road space rationing and road use charges have been introduced to limit urban car traffic. Housing The housing of residents presents one of the major challenges every city must face. Adequate housing entails not only physical shelters but also the physical systems necessary to sustain life and economic activity. Homeownership represents status and a modicum of economic security, compared to renting which may consume much of the income of low-wage urban workers. Homelessness, or lack of housing, is a challenge currently faced by millions of people in countries rich and poor. Because cities generally have higher population densities than rural areas, city dwellers are more likely to reside in apartments and less likely to live in a single-family home. Ecology Urban ecosystems, influenced as they are by the density of human buildings and activities, differ considerably from those of their rural surroundings. Anthropogenic buildings and waste, as well as cultivation in gardens, create physical and chemical environments which have no equivalents in the wilderness, in some cases enabling exceptional biodiversity. They provide homes not only for immigrant humans but also for immigrant plants, bringing about interactions between species that never previously encountered each other. They introduce frequent disturbances (construction, walking) to plant and animal habitats, creating opportunities for recolonization and thus favoring young ecosystems with r-selected species dominant. On the whole, urban ecosystems are less complex and productive than others, due to the diminished absolute amount of biological interactions. Typical urban fauna includes insects (especially ants), rodents (mice, rats), and birds, as well as cats and dogs (domesticated and feral). Large predators are scarce. However, in North America, larger animals such as coyotes and white-tailed deer do roam urban areas. Cities generate considerable ecological footprints, locally and at longer distances, due to concentrated populations and technological activities. From one perspective, cities are not ecologically sustainable due to their resource needs. From another, proper management may be able to ameliorate a city's ill effects. 
Air pollution arises from various forms of combustion, including fireplaces, wood or coal-burning stoves, other heating systems, and internal combustion engines. Industrialized cities, and today third-world megacities, are notorious for veils of smog (industrial haze) that envelop them, posing a chronic threat to the health of their millions of inhabitants. Urban soil contains higher concentrations of heavy metals (especially lead, copper, and nickel) and has lower pH than soil in the comparable wilderness. Modern cities are known for creating their own microclimates, due to concrete, asphalt, and other artificial surfaces, which heat up in sunlight and channel rainwater into underground ducts. The temperature in New York City exceeds nearby rural temperatures by an average of 2–3 °C and at times 5–10 °C differences have been recorded. This effect varies nonlinearly with population changes (independently of the city's physical size). Aerial particulates increase rainfall by 5–10%. Thus, urban areas experience unique climates, with earlier flowering and later leaf dropping than in the nearby countryside. Poor and working-class people face disproportionate exposure to environmental risks (known as environmental racism when intersecting also with racial segregation). For example, within the urban microclimate, less-vegetated poor neighborhoods bear more of the heat (but have fewer means of coping with it). One of the main methods of improving urban ecology is including more urban green spaces in cities: parks, gardens, lawns, and trees. These areas improve the health and well-being of the human, animal, and plant populations of the cities. Well-maintained urban trees can provide many social, ecological, and physical benefits to the residents of the city. A study published in Nature's Scientific Reports journal in 2019 found that people who spent at least two hours per week in nature were 23 percent more likely to be satisfied with their life and were 59 percent more likely to be in good health than those who had zero exposure. The study used data from almost 20,000 people in the UK. Benefits increased for up to 300 minutes of exposure. The benefits apply to men and women of all ages, as well as across different ethnicities, socioeconomic statuses, and even those with long-term illnesses and disabilities. People who did not get at least two hours – even if they surpassed an hour per week – did not get the benefits. The study is the latest addition to a compelling body of evidence for the health benefits of nature. Many doctors already give nature prescriptions to their patients. The study didn't count time spent in a person's own yard or garden as time in nature, but the majority of nature visits in the study took place within two miles of home. "Even visiting local urban green spaces seems to be a good thing," Dr. White said in a press release. "Two hours a week is hopefully a realistic target for many people, especially given that it can be spread over an entire week to get the benefit." World city system As the world becomes more closely linked through economics, politics, technology, and culture (a process called globalization), cities have come to play a leading role in transnational affairs, exceeding the limitations of international relations conducted by national governments. This phenomenon, resurgent today, can be traced back to the Silk Road, Phoenicia, and the Greek city-states, through the Hanseatic League and other alliances of cities. 
Today the information economy based on high-speed internet infrastructure enables instantaneous telecommunication around the world, effectively eliminating the distance between cities for the purposes of the international markets and other high-level elements of the world economy, as well as personal communications and mass media. Global city A global city, also known as a world city, is a prominent centre of trade, banking, finance, innovation, and markets. Saskia Sassen used the term "global city" in her 1991 work, The Global City: New York, London, Tokyo to refer to a city's power, status, and cosmopolitanism, rather than to its size. Following this view of cities, it is possible to rank the world's cities hierarchically. Global cities form the capstone of the global hierarchy, exerting command and control through their economic and political influence. Global cities may have reached their status due to early transition to post-industrialism or through inertia which has enabled them to maintain their dominance from the industrial era. This type of ranking exemplifies an emerging discourse in which cities, considered variations on the same ideal type, must compete with each other globally to achieve prosperity. Critics of the notion point to the different realms of power and interchange. The term "global city" is heavily influenced by economic factors and, thus, may not account for places that are otherwise significant. Paul James, for example argues that the term is "reductive and skewed" in its focus on financial systems. Multinational corporations and banks make their headquarters in global cities and conduct much of their business within this context. American firms dominate the international markets for law and engineering and maintain branches in the biggest foreign global cities. Large cities have a great divide between populations of both ends of the financial spectrum. Regulations on immigration promote the exploitation of low- and high-skilled immigrant workers from poor areas. During employment, migrant workers may be subject to unfair working conditions, including working overtime, low wages, and lack of safety in workplaces. Transnational activity Cities increasingly participate in world political activities independently of their enclosing nation-states. Early examples of this phenomenon are the sister city relationship and the promotion of multi-level governance within the European Union as a technique for European integration. Cities including Hamburg, Prague, Amsterdam, The Hague, and City of London maintain their own embassies to the European Union at Brussels. New urban dwellers are increasingly transmigrants, keeping one foot each (through telecommunications if not travel) in their old and their new homes. Global governance Cities participate in global governance by various means including membership in global networks which transmit norms and regulations. At the general, global level, United Cities and Local Governments (UCLG) is a significant umbrella organization for cities; regionally and nationally, Eurocities, Asian Network of Major Cities 21, the Federation of Canadian Municipalities the National League of Cities, and the United States Conference of Mayors play similar roles. UCLG took responsibility for creating Agenda 21 for culture, a program for cultural policies promoting sustainable development, and has organized various conferences and reports for its furtherance. 
Networks have become especially prevalent in the arena of environmentalism and specifically climate change following the adoption of Agenda 21. Environmental city networks include the C40 Cities Climate Leadership Group, the United Nations Global Compact Cities Programme, the Carbon Neutral Cities Alliance (CNCA), the Covenant of Mayors and the Compact of Mayors, ICLEI – Local Governments for Sustainability, and the Transition Towns network. Cities with world political status serve as meeting places for advocacy groups, non-governmental organizations, lobbyists, educational institutions, intelligence agencies, military contractors, information technology firms, and other groups with a stake in world policymaking. They are consequently also sites for symbolic protest. South Africa has one of the highest rates of protests in the world. Pretoria, a city in South Africa, had a rally in which 5,000 people took part in order to advocate for increasing wages to afford living costs. United Nations System The United Nations System has been involved in a series of events and declarations dealing with the development of cities during this period of rapid urbanization. The Habitat I conference in 1976 adopted the "Vancouver Declaration on Human Settlements" which identifies urban management as a fundamental aspect of development and establishes various principles for maintaining urban habitats. Citing the Vancouver Declaration, the UN General Assembly in December 1977 authorized the United Nations Commission on Human Settlements and the HABITAT Centre for Human Settlements, intended to coordinate UN activities related to housing and settlements. The 1992 Earth Summit in Rio de Janeiro resulted in a set of international agreements including Agenda 21 which establishes principles and plans for sustainable development. The Habitat II conference in 1996 called for cities to play a leading role in this program, which subsequently advanced the Millennium Development Goals and Sustainable Development Goals. In January 2002 the UN Commission on Human Settlements became an umbrella agency called the United Nations Human Settlements Programme or UN-Habitat, a member of the United Nations Development Group. The Habitat III conference of 2016 focused on implementing these goals under the banner of a "New Urban Agenda". The four mechanisms envisioned for effecting the New Urban Agenda are (1) national policies promoting integrated sustainable development, (2) stronger urban governance, (3) long-term integrated urban and territorial planning, and (4) effective financing frameworks. Just before this conference, the European Union concurrently approved an "Urban Agenda for the European Union" known as the Pact of Amsterdam. UN-Habitat coordinates the U.N. urban agenda, working with the UN Environment Programme, the UN Development Programme, the Office of the High Commissioner for Human Rights, the World Health Organization, and the World Bank. The World Bank, a U.N. specialized agency, has been a primary force in promoting the Habitat conferences, and since the first Habitat conference has used their declarations as a framework for issuing loans for urban infrastructure. The bank's structural adjustment programs contributed to urbanization in the Third World by creating incentives to move to cities. The World Bank and UN-Habitat in 1999 jointly established the Cities Alliance (based at the World Bank headquarters in Washington, D.C.) 
to guide policymaking, knowledge sharing, and grant distribution around the issue of urban poverty. (UN-Habitat plays an advisory role in evaluating the quality of a locality's governance.) The Bank's policies have tended to focus on bolstering real estate markets through credit and technical assistance. The United Nations Educational, Scientific and Cultural Organization, UNESCO has increasingly focused on cities as key sites for influencing cultural governance. It has developed various city networks including the International Coalition of Cities against Racism and the Creative Cities Network. UNESCO's capacity to select World Heritage Sites gives the organization significant influence over cultural capital, tourism, and historic preservation funding. Representation in culture Cities figure prominently in traditional Western culture, appearing in the Bible in both evil and holy forms, symbolized by Babylon and Jerusalem. Cain and Nimrod are the first city builders in the Book of Genesis. In Sumerian mythology Gilgamesh built the walls of Uruk. Cities can be perceived in terms of extremes or opposites: at once liberating and oppressive, wealthy and poor, organized and chaotic. The name anti-urbanism refers to various types of ideological opposition to cities, whether because of their culture or their political relationship with the country. Such opposition may result from identification of cities with oppression and the ruling elite. This and other political ideologies strongly influence narratives and themes in discourse about cities. In turn, cities symbolize their home societies. Writers, painters, and filmmakers have produced innumerable works of art concerning the urban experience. Classical and medieval literature includes a genre of descriptiones which treat of city features and history. Modern authors such as Charles Dickens and James Joyce are famous for evocative descriptions of their home cities. Fritz Lang conceived the idea for his influential 1927 film Metropolis while visiting Times Square and marveling at the nighttime neon lighting. Other early cinematic representations of cities in the twentieth century generally depicted them as technologically efficient spaces with smoothly functioning systems of automobile transport. By the 1960s, however, traffic congestion began to appear in such films as The Fast Lady (1962) and Playtime (1967). Literature, film, and other forms of popular culture have supplied visions of future cities both utopian and dystopian. The prospect of expanding, communicating, and increasingly interdependent world cities has given rise to images such as Nylonkong (New York, London, Hong Kong) and visions of a single world-encompassing ecumenopolis. See also Lists of cities List of adjectivals and demonyms for cities Lost city Metropolis Compact city Megacity Settlement hierarchy Urbanization Notes References Bibliography Abrahamson, Mark (2004). Global Cities. Oxford University Press. Ashworth, G.J. War and the City. London & New York: Routledge, 1991. . Bridge, Gary, and Sophie Watson, eds. (2000). A Companion to the City. Malden, MA: Blackwell, 2000/2003. Brighenti, Andrea Mubi, ed. (2013). Urban Interstices: The Aesthetics and the Politics of the In-between. Farnham: Ashgate Publishing. . Carter, Harold (1995). The Study of Urban Geography. 4th ed. London: Arnold. Clark, Peter (ed.) (2013). The Oxford Handbook of Cities in World History. Oxford University Press. Curtis, Simon (2016). Global Cities and Global Order. Oxford University Press. 
Ellul, Jacques (1970). The Meaning of the City. Translated by Dennis Pardee. Grand Rapids, Michigan: Eerdmans, 1970. ; French original (written earlier, published later as): Sans feu ni lieu : Signification biblique de la Grande Ville; Paris: Gallimard, 1975. Republished 2003 with Gupta, Joyetta, Karin Pfeffer, Hebe Verrest, & Mirjam Ros-Tonen, eds. (2015). Geographies of Urban Governance: Advanced Theories, Methods and Practices. Springer, 2015. . Hahn, Harlan, & Charles Levine (1980). Urban Politics: Past, Present, & Future. New York & London: Longman. Hanson, Royce (ed.). Perspectives on Urban Infrastructure. Committee on National Urban Policy, Commission on Behavioral and Social Sciences and Education, National Research Council. Washington: National Academy Press, 1984. Herrschel, Tassilo & Peter Newman (2017). Cities as International Actors: Urban and Regional Governance Beyond the Nation State. Palgrave Macmillan (Springer Nature). Grava, Sigurd (2003). Urban Transportation Systems: Choices for Communities. McGraw Hill, e-book. Kaplan, David H.; James O. Wheeler; Steven R. Holloway; & Thomas W. Hodler, cartographer (2004). Urban Geography. John Wiley & Sons, Inc. Kavaratzis, Mihalis, Gary Warnaby, & Gregory J. Ashworth, eds. (2015). Rethinking Place Branding: Comprehensive Brand Development for Cities and Regions. Springer. . Kraas, Frauke, Surinder Aggarwal, Martin Coy, & Günter Mertins, eds. (2014). Megacities: Our Global Urban Future. United Nations "International Year of Planet Earth" book series. Springer. . Latham, Alan, Derek McCormack, Kim McNamara, & Donald McNeil (2009). Key Concepts in Urban Geography. London: SAGE. . Leach, William (1993). Land of Desire: Merchants, Power, and the Rise of a New American Culture. New York: Vintage Books (Random House), 1994. . Levy, John M. (2017). Contemporary Urban Planning. 11th ed. New York: Routledge (Taylor & Francis). Magnusson, Warren. Politics of Urbanism: Seeing like a city. London & New York: Routledge, 2011. . Marshall, John U. (1989). The Structure of Urban Systems. University of Toronto Press. . Marzluff, John M., Eric Schulenberger, Wilfried Endlicher, Marina Alberti, Gordon Bradley, Clre Ryan, Craig ZumBrunne, & Ute Simon (2008). Urban Ecology: An International Perspective on the Interaction Between Humans and Nature. New York: Springer Science+Business Media. . McQuillan, Eugene. The Law of Municipal Corporations, 3rd ed. 1987 revised volume by Charles R.P. Keating, Esq. Wilmette, Illinois: Callaghan & Company. Moholy-Nagy, Sibyl (1968). Matrix of Man: An Illustrated History of Urban Environment. New York: Frederick A Praeger. Mumford, Lewis (1961). The City in History: Its Origins, Its Transformations, and Its Prospects. New York: Harcourt, Brace & World. Paddison, Ronan, ed. (2001). Handbook of Urban Studies. London; Thousand Oaks, California; and New Delhi: Sage Publications. . Rybczynski, W., City Life: Urban Expectations in a New World, (1995) Smith, Michael E. (2002) The Earliest Cities. In Urban Life: Readings in Urban Anthropology, edited by George Gmelch and Walter Zenner, pp. 3–19. 4th ed. Waveland Press, Prospect Heights, IL. Southall, Aidan (1998). The City in Time and Space. Cambridge University Press. Wellman, Kath & Marcus Spiller, eds. (2012). Urban Infrastructure: Finance and Management. Chichester, UK: Wiley-Blackwell. . Further reading Berger, Alan S., The City: Urban Communities and Their Problems, Dubuque, Iowa : William C. Brown, 1978. Chandler, T. 
Four Thousand Years of Urban Growth: An Historical Census. Lewiston, NY: Edwin Mellen Press, 1987. Geddes, Patrick, City Development (1904) Kemp, Roger L. Managing America's Cities: A Handbook for Local Government Productivity, McFarland and Company, Inc., Publisher, Jefferson, North Carolina and London, 2007. (). Kemp, Roger L. How American Governments Work: A Handbook of City, County, Regional, State, and Federal Operations, McFarland and Company, Inc., Publisher, Jefferson, North Carolina and London. (). Kemp, Roger L. "City and Gown Relations: A Handbook of Best Practices", McFarland and Company, Inc., Publisher, Jefferson, North Carolina, US, and London, (2013). (). Monti, Daniel J. Jr., The American City: A Social and Cultural History. Oxford, England and Malden, Massachusetts: Blackwell Publishers, 1999. 391 pp. . Reader, John (2005) Cities. Vintage, New York. Robson, W.A., and Regan, D.E., ed., Great Cities of the World, (3d ed., 2 vol., 1972) Smethurst, Paul (2015). The Bicycle – Towards a Global History. Palgrave Macmillan. . Smith, L. Monica (2020) Cities: The First 6,000 Years. Penguin Books. Thernstrom, S., and Sennett, R., ed., Nineteenth-Century Cities (1969) Toynbee, Arnold J. (ed), Cities of Destiny, New York: McGraw-Hill, 1967. Pan historical/geographical essays, many images. Starts with "Athens", ends with "The Coming World City-Ecumenopolis". Weber, Max, The City, 1921. (tr. 1958) External links World Urbanization Prospects, Website of the United Nations Population Division (archived 10 July 2017) Urban population (% of total) – World Bank website based on UN data. Degree of urbanization (percentage of urban population in total population) by continent in 2016 – Statista, based on Population Reference Bureau data. Cities Populated places by type Types of populated places Urban geography
5394
https://en.wikipedia.org/wiki/Chervil
Chervil
Chervil (Anthriscus cerefolium), sometimes called French parsley or garden chervil (to distinguish it from similar plants also called chervil), is a delicate annual herb related to parsley. It was formerly called myrhis due to its volatile oil with an aroma similar to the resinous substance myrrh. It is commonly used to season mild-flavoured dishes and is a constituent of the French herb mixture fines herbes. Name The name chervil is from Anglo-Norman, from a Latin name meaning "leaves of joy", itself formed from an Ancient Greek word. Biology A member of the Apiaceae, chervil is native to the Caucasus but was spread by the Romans through most of Europe, where it is now naturalised. It is also grown frequently in the United States, where it sometimes escapes cultivation. Such escape can be recognized, however, as garden chervil is distinguished from all other Anthriscus species growing in North America (i.e., A. caucalis and A. sylvestris) by its having lanceolate-linear bracteoles and a fruit with a relatively long beak. The plants grow to , with tripinnate leaves that may be curly. The small white flowers form small umbels, across. The fruit is about 1 cm long, oblong-ovoid with a slender, ridged beak. Uses and impact Culinary arts Chervil is used, particularly in France, to season poultry, seafood, young spring vegetables (such as carrots), soups, and sauces. More delicate than parsley, it has a faint taste of liquorice or aniseed. Chervil is one of the four traditional French fines herbes, along with tarragon, chives, and parsley, which are essential to French cooking. Unlike the more pungent, robust herbs such as thyme and rosemary, which can take prolonged cooking, the fines herbes are added at the last minute, to salads, omelettes, and soups. Chemistry Essential oil obtained via water distillation of wild Turkish Anthriscus cerefolium was analyzed by gas chromatography–mass spectrometry, identifying four compounds: methyl chavicol (83.10%), 1-allyl-2,4-dimethoxybenzene (15.15%), undecane (1.75%) and β-pinene (<0.01%). Horticulture According to some, slugs are attracted to chervil and the plant is sometimes used to bait them. Health Chervil has had various uses in folk medicine. It was claimed to be useful as a digestive aid, for lowering high blood pressure, and, infused with vinegar, for curing hiccups. Besides its digestive properties, it is used as a mild stimulant. Chervil has also been implicated in "strimmer dermatitis", another name for phytophotodermatitis, due to spray from weed trimmers and similar forms of contact. Other plants in the family Apiaceae can have similar effects. Cultivation Transplanting chervil can be difficult, due to the long taproot. It prefers a cool and moist location; otherwise, it rapidly goes to seed (also known as bolting). It is usually grown as a cool-season crop, like lettuce, and should be planted in early spring and late fall or in a winter greenhouse. Regular harvesting of leaves also helps to prevent bolting. If plants bolt despite precautions, the plant can be periodically re-sown throughout the growing season, thus producing fresh plants as older plants bolt and go out of production. Chervil grows to a height of , and a width of . References Further reading Apioideae Edible Apiaceae Herbs Medicinal plants of Asia Medicinal plants of Europe Root vegetables
5395
https://en.wikipedia.org/wiki/Chives
Chives
Chives, scientific name Allium schoenoprasum, is a species of flowering plant in the family Amaryllidaceae that produces edible leaves and flowers. Their close relatives include the common onions, garlic, shallot, leek, scallion, and Chinese onion. A perennial plant, it is widespread in nature across much of Europe, Asia, and North America. A. schoenoprasum is the only species of Allium native to both the New and the Old Worlds. Chives are a commonly used herb and can be found in grocery stores or grown in home gardens. In culinary use, the green stalks (scapes) and the unopened, immature flower buds are diced and used as an ingredient for omelettes, fish, potatoes, soups, and many other dishes. The edible flowers can be used in salads. Chives have insect-repelling properties that can be used in gardens to control pests. The plant provides a great deal of nectar for pollinators. It was rated in the top 10 for most nectar production (nectar per unit cover per year) in a UK plants survey conducted by the AgriLand project which is supported by the UK Insect Pollinators Initiative. Description Chives are a bulb-forming herbaceous perennial plant, growing to tall. The bulbs are slender, conical, long and broad, and grow in dense clusters from the roots. The scapes (or stems) are hollow and tubular, up to long and across, with a soft texture, although, prior to the emergence of a flower, they may appear stiffer than usual. The grass-like leaves, which are shorter than the scapes, are also hollow and tubular, or terete, (round in cross-section) which distinguishes it at a glance from garlic chives (Allium tuberosum). The flowers are pale purple, and star-shaped with six petals, wide, and produced in a dense inflorescence of 10-30 together; before opening, the inflorescence is surrounded by a papery bract. The seeds are produced in a small, three-valved capsule, maturing in summer. The herb flowers from April to May in the southern parts of its habitat zones and in June in the northern parts. Chives are the only species of Allium native to both the New and the Old Worlds. Sometimes, the plants found in North America are classified as A. schoenoprasum var. sibiricum, although this is disputed. Differences between specimens are significant. One example was found in northern Maine growing solitary, instead of in clumps, also exhibiting dingy grey flowers. Although chives are repulsive to insects in general, due to their sulfur compounds, their flowers attract bees, and they are at times kept to increase desired insect life. Taxonomy It was formally described by the Swedish botanist Carl Linnaeus in his seminal publication Species Plantarum in 1753. The name of the species derives from the Greek σχοίνος, skhoínos (sedge or rush) and πράσον, práson (leek). Its English name, chives, derives from the French word cive, from cepa, the Latin word for onion. In the Middle Ages, it was known as 'rush leek'. Some subspecies have been proposed, but are not accepted by Plants of the World Online, , which sinks them into the species: Allium schoenoprasum subsp. gredense (Rivas Goday) Rivas Mart., Fern.Gonz. & Sánchez Mata Allium schoenoprasum subsp. latiorifolium (Pau) Rivas Mart., Fern.Gonz. & Sánchez Mata Varieties have also been proposed, including A. schoenoprasum var. sibiricum. The Flora of North America notes that the species is very variable, and considers recognition of varieties as "unsound". Distribution and habitat Chives are native to temperate areas of Europe, Asia and North America. 
Range It is found in Asia within the Caucasus (in Armenia, Azerbaijan and Georgia), also in China, Iran, Iraq, Japan (within the islands of Hokkaido and Honshu), Kazakhstan, Kyrgyzstan, Mongolia, Pakistan, Russian Federation (within the krais of Kamchatka, Khabarovsk, and Primorye) Siberia and Turkey. In middle Europe, it is found within Austria, the Czech Republic, Germany, the Netherlands, Poland and Switzerland. In northern Europe, in Denmark, Finland, Norway, Sweden and the United Kingdom. In southeastern Europe, within Bulgaria, Greece, Italy and Romania. It is also found in southwestern Europe, in France, Portugal and Spain. In North America, it is found in Canada (within the provinces and territories of Alberta, British Columbia, Manitoba, Northwest Territories, Nova Scotia, New Brunswick, Newfoundland, Nunavut, Ontario, Prince Edward Island, Quebec, Saskatchewan and Yukon), and the United States (within the states of Alaska, Colorado, Connecticut, Idaho, Maine, Maryland, Massachusetts, Michigan, Minnesota, Montana, New Hampshire, New Jersey, New York, Ohio, Oregon, Pennsylvania, Rhode Island, Vermont, Washington, West Virginia, Wisconsin and Wyoming). Uses Culinary arts Chives are grown for their scapes and leaves, which are used for culinary purposes as a flavoring herb, and provide a somewhat milder onion-like flavor than those of other Allium species. Chives have a wide variety of culinary uses, such as in traditional dishes in France, Sweden, and elsewhere. In his 1806 book Attempt at a Flora (Försök til en flora), Anders Jahan Retzius describes how chives are used with pancakes, soups, fish, and sandwiches. They are also an ingredient of the gräddfil sauce with the traditional herring dish served at Swedish midsummer celebrations. The flowers may also be used to garnish dishes. In Poland and Germany, chives are served with quark. Chives are one of the fines herbes of French cuisine, the others being tarragon, chervil and parsley. Chives can be found fresh at most markets year-round, making them readily available; they can also be dry-frozen without much impairment to the taste, giving home growers the opportunity to store large quantities harvested from their own gardens. Uses in plant cultivation Retzius also describes how farmers would plant chives between the rocks making up the borders of their flowerbeds, to keep the plants free from pests (such as Japanese beetles). The growing plant repels unwanted insect life, and the juice of the leaves can be used for the same purpose, as well as fighting fungal infections, mildew, and scab. Cultivation Chives are cultivated both for their culinary uses and for their ornamental value; the violet flowers are often used in ornamental dry bouquets. The flowers are also edible and are used in salads, or used to make blossom vinegars. Chives thrive in well-drained soil, rich in organic matter, with a pH of 6-7 and full sun. They can be grown from seed and mature in summer, or early the following spring. Typically, chives need to be germinated at a temperature of 15 to 20 °C (60-70 °F) and kept moist. They can also be planted under a cloche or germinated indoors in cooler climates, then planted out later. After at least four weeks, the young shoots should be ready to be planted out. They are also easily propagated by division. In cold regions, chives die back to the underground bulbs in winter, with the new leaves appearing in early spring. Chives starting to look old can be cut back to about 2–5 cm. 
When harvesting, the needed number of stalks should be cut to the base. During the growing season, the plant continually regrows leaves, allowing for a continuous harvest. Chives are susceptible to damage by leek moth larvae, which bore into the leaves or bulbs of the plant. History and cultural importance Chives have been cultivated in Europe since the Middle Ages (from the fifth until the 15th centuries), although their usage dates back 5,000 years. They were sometimes referred to as "rush leeks". It was mentioned in 80 A.D. by Marcus Valerius Martialis in his "Epigrams". The Romans believed chives could relieve the pain from sunburn or a sore throat. They believed eating chives could increase blood pressure and act as a diuretic. Romani have used chives in fortune telling. Bunches of dried chives hung around a house were believed to ward off disease and evil. In the 19th century, Dutch farmers fed cattle on the herb to give a different taste to their milk. References External links Nutritional Information Mrs. Grieve's "A Modern Herbal" @ Botanical.com Allium Flora of Asia Flora of Europe Flora of Northern America Garden plants Herbs Medicinal plants of Asia Medicinal plants of Europe Medicinal plants of North America Plants described in 1753
5397
https://en.wikipedia.org/wiki/Chris%20Morris%20%28satirist%29
Chris Morris (satirist)
Christopher J. Morris (born 15 June 1962) is an English comedian, radio presenter, actor, and filmmaker. Known for his deadpan, dark humour, surrealism, and controversial subject matter, he has been praised by the British Film Institute for his "uncompromising, moralistic drive". In the early 1990s, Morris teamed up with his radio producer Armando Iannucci to create On the Hour, a satire of news programmes. This was expanded into a television spin off, The Day Today, which launched the career of comedian Steve Coogan and has since been hailed as one of the most important satirical shows of the 1990s. Morris further developed the satirical news format with Brass Eye, which lampooned celebrities whilst focusing on themes such as crime and drugs. For many, the apotheosis of Morris' career was a Brass Eye special, which dealt with the moral panic surrounding paedophilia. It quickly became one of the most complained-about programmes in British television history, leading the Daily Mail to describe him as "the most loathed man on TV". Meanwhile, Morris' postmodern sketch comedy and ambient music radio show Blue Jam, which had seen controversy similar to Brass Eye, helped him to gain a cult following. Blue Jam was adapted into the TV series Jam, which some hailed as "the most radical and original television programme broadcast in years", and he went on to win the BAFTA Award for Best Short Film after expanding a Blue Jam sketch into My Wrongs 8245–8249 & 117, which starred Paddy Considine. This was followed by Nathan Barley, a sitcom written in collaboration with a then little-known Charlie Brooker that satirised hipsters, which had low ratings but found success upon its DVD release. Morris followed this by joining the cast of the sitcom The IT Crowd, his first project in which he did not have writing or producing input. In 2010, Morris directed his first feature-length film, Four Lions, which satirised Islamic terrorism through a group of inept British Muslims. Reception of the film was largely positive, earning Morris his second BAFTA Film Award, this time for Outstanding Debut. Since 2012, he has directed four episodes of Iannucci's political comedy Veep and appeared onscreen in The Double and Stewart Lee's Comedy Vehicle. His second feature-length film, The Day Shall Come, was released in 2019. Early life Christopher J Morris was born on 15 June 1962 in Colchester, Essex, the son of Rosemary Parrington and Paul Michael Morris. His father was a GP. Morris has a large red birthmark almost completely covering the left side of his face and neck, which he disguises with makeup when acting. He grew up in a Victorian farmhouse in the village of Buckden, Cambridgeshire, which he described as "very dull". He has two younger brothers, including theatre director Tom Morris. From an early age, he was a prankster and had a passion for radio. From the age of 10, he was educated at the independent Jesuit boarding school Stonyhurst College in Stonyhurst, Lancashire. He went to study zoology at the University of Bristol, where he gained a 2:1. Career Radio On graduating, Morris pursued a career as a musician in various bands, for which he played the bass guitar. He then went to work for Radio West, a local radio station in Bristol. He then took up a news traineeship with BBC Radio Cambridgeshire, where he took advantage of access to editing and recording equipment to create elaborate spoofs and parodies. 
He also spent time in early 1987 hosting a 2–4pm afternoon show and finally ended up presenting the Saturday morning show I.T. In July 1987, he moved on to BBC Radio Bristol to present his own show No Known Cure, broadcast on Saturday and Sunday mornings. The show was surreal and satirical, with odd interviews conducted with unsuspecting members of the public. He was fired from Bristol in 1990 after "talking over the news bulletins and making silly noises". In 1988 he also joined, from its launch, Greater London Radio (GLR). He presented The Chris Morris Show on GLR until 1993, when one show was suspended after a sketch was broadcast involving a child "outing" celebrities. In 1991, Morris joined Armando Iannucci's spoof news project On the Hour. Broadcast on BBC Radio 4, it saw him work alongside Iannucci, Steve Coogan, Stewart Lee, Richard Herring and Rebecca Front. In 1992, Morris hosted Danny Baker's Radio 5 Morning Edition show for a week whilst Baker was on holiday. In 1994, Morris began a weekly evening show, the Chris Morris Music Show, on BBC Radio 1 alongside Peter Baynham and 'man with a mobile phone' Paul Garner. In the shows, Morris perfected the spoof interview style that would become a central component of his Brass Eye programme. In the same year, Morris teamed up with Peter Cook (as Sir Arthur Streeb-Greebling), in a series of improvised conversations for BBC Radio 3 entitled Why Bother?. Move into television and film In 1994, a BBC Two television series based on On the Hour was broadcast under the name The Day Today. The Day Today made a star of Morris, and marked the television debut of Steve Coogan's Alan Partridge character. The programme ended on a high after just one series, with Morris winning the 1994 British Comedy Award for Best Newcomer for his lead role as the Paxmanesque news anchor. In 1996, Morris appeared on the daytime programme The Time, The Place, posing as an academic, Thurston Lowe, in a discussion entitled "Are British Men Lousy Lovers?", but was found out when a producer alerted the show's host, John Stapleton. In 1997, the black humour which had featured in On the Hour and The Day Today became more prominent in Brass Eye, another spoof of current affairs television documentary, shown on Channel 4. All three series satirised and exaggerated issues expected of news shows. The second episode of Brass Eye, for example, satirised drugs and the political rhetoric surrounding them. To help convey the satire, Morris invented a fictional drug by the name of "cake". In the episode, British celebrities and politicians describe the supposed symptoms in detail; David Amess mentioned the fictional drug in Parliament. In 2001, Morris satirised the moral panic regarding paedophilia in the most controversial episode of Brass Eye, "Paedogeddon". Channel 4 apologised for the episode after receiving criticism from tabloids and around 3,000 complaints from viewers, which, at the time, was the most for an episode of British television. From 1997 to 1999, Morris created Blue Jam for BBC Radio 1, a surreal taboo-breaking radio show set to an ambient soundtrack. In 2000, this was followed by Jam, a television reworking. Morris released a 'remix' version of this, entitled Jaaaaam. In 2002, Morris ventured into film, directing the short My Wrongs #8245–8249 & 117, adapted from a Blue Jam monologue about a man led astray by a sinister talking dog. It was the first film project of Warp Films, a branch of Warp Records. In 2002 it won the BAFTA for best short film.
In 2005 Morris worked on a sitcom entitled Nathan Barley, based on the character created by Charlie Brooker for his website TVGoHome (Morris had contributed to TVGoHome on occasion, under the pseudonym 'Sid Peach'). Co-written by Brooker and Morris, the series was broadcast on Channel 4 in early 2005. The IT Crowd and Comedy Vehicle Morris appeared in The IT Crowd, a Channel 4 sitcom which focuses on the information technology department of the fictional company Reynholm Industries. The series was written and directed by Graham Linehan (with whom Morris collaborated on The Day Today, Brass Eye and Jam) and produced by Ash Atalla. Morris played Denholm Reynholm, the eccentric managing director of the company. This marked the first time Morris had acted in a substantial role in a project which he has not developed himself. Morris' character appeared to leave the series during episode two of the second series. His character made a brief return in the first episode of the third series. In November 2007, Morris wrote an article for The Observer in response to Ronan Bennett's article published six days earlier in The Guardian. Bennett's article, "Shame on us", accused the novelist Martin Amis of racism. Morris' response, "The absurd world of Martin Amis", was also highly critical of Amis; although he did not accede to Bennett's accusation of racism, Morris likened Amis to the Muslim cleric Abu Hamza (who was jailed for inciting racial hatred in 2006), suggesting that both men employ "mock erudition, vitriol and decontextualised quotes from the Qu'ran" to incite hatred. Morris served as script editor for the 2009 series Stewart Lee's Comedy Vehicle, working with former colleagues Stewart Lee, Kevin Eldon and Armando Iannucci. He maintained this role for the second (2011) and third series (2014), also appearing as a mock interviewer dubbed the "hostile interrogator" in the third and fourth series. Four Lions, Veep, and other appearances Morris completed his debut feature film Four Lions in late 2009, a satire based on a group of Islamist terrorists in Sheffield. It premiered at the Sundance Film Festival in January 2010 and was short-listed for the festival's World Cinema Narrative prize. The film (working title Boilerhouse) was picked up by Film Four. Morris told The Sunday Times that the film sought to do for Islamic terrorism what Dad's Army, the classic BBC comedy, did for the Nazis by showing them as "scary but also ridiculous". In 2012, Morris directed the seventh and penultimate episode of the first season of Veep, an Armando Iannucci-devised American version of The Thick of It. In 2013, he returned to direct two episodes for the second season of Veep, and a further episode for season three in 2014. In 2013, Morris appeared briefly in Richard Ayoade's The Double, a black comedy film based on the Fyodor Dostoyevsky novella of the same name. Morris had previously worked with Ayoade on Nathan Barley and The IT Crowd. In February 2014, Morris made a surprise appearance at the beginning of a Stewart Lee live show, introducing the comedian with fictional anecdotes about their work together. The following month, Morris appeared in the third series of Stewart Lee's Comedy Vehicle as a "hostile interrogator", a role previously occupied by Armando Iannucci. In December 2014, it was announced that a short radio collaboration with Noel Fielding and Richard Ayoade would be broadcast on BBC Radio 6. According to Fielding, the work had been in progress since around 2006. 
However, in January 2015 it was decided, 'in consultation with [Morris]', that the project was not yet complete, and so the intended broadcast did not go ahead. The Day Shall Come A statement released by Film4 in February 2016 made reference to funding what would be Morris' second feature film. In November 2017 it was reported that Morris had shot the movie, starring Anna Kendrick, in the Dominican Republic but the title was not made public. It was later reported in January 2018 that Jim Gaffigan and Rupert Friend had joined the cast of the still-untitled film, and that the plot would revolve around an FBI hostage situation gone wrong. The completed film, titled The Day Shall Come, had its world premiere at South by Southwest on 11 March 2019. Music Morris often co-writes and performs incidental music for his television shows, notably with Jam and the 'extended remix' version, Jaaaaam. In the early 1990s Morris contributed a Pixies parody track entitled "Motherbanger" to a flexi-disc given away with an edition of Select music magazine. Morris supplied sketches for British band Saint Etienne's 1993 single "You're in a Bad Way" (the sketch 'Spongbake' appears at the end of the 4th track on the CD single). In 2000, he collaborated by mail with Amon Tobin to create the track "Bad Sex", which was released as a B-side on the Tobin single "Slowly". British band Stereolab's song "Nothing to Do with Me" from their 2001 album Sound-Dust featured various lines from Chris Morris sketches as lyrics. Style Ramsey Ess of Vulture described Morris' comedy style as "crass" and "shocking", but noted an "underlying morality" and integrity, as well as the humor being Morris' priority. Recognition In 2003, Morris was listed in The Observer as one of the 50 funniest acts in British comedy. In 2005, Channel 4 aired a show called The Comedian's Comedian in which foremost writers and performers of comedy ranked their 50 favourite acts. Morris was at number eleven. Morris won the BAFTA for outstanding debut with his film Four Lions. Adeel Akhtar and Nigel Lindsay collected the award in his absence. Lindsay stated that Morris had sent him a text message before they collected the award reading, 'Doused in petrol, Zippo at the ready'. In June 2012 Morris was placed at number 16 in the Top 100 People in UK Comedy. In 2010, a biography, Disgusting Bliss: The Brass Eye of Chris Morris, was published. Written by Lucian Randall, the book depicted Morris as "brilliant but uncompromising", and a "frantic-minded perfectionist". In November 2014, a three-hour retrospective of Morris' radio career was broadcast on BBC Radio 4 Extra under the title 'Raw Meat Radio', presented by Mary Anne Hobbs and featuring interviews with Armando Iannucci, Peter Baynham, Paul Garner, and others. Awards Morris won the Best TV Comedy Newcomer award from the British Comedy Awards in 1994 for his performance in The Day Today. He has won two BAFTA awards: the BAFTA Award for Best Short Film in 2002 for My Wrongs #8245–8249 & 117, and the BAFTA Award for Outstanding Debut by a British director, writer or producer in 2011 for Four Lions. Personal life Morris and his wife, actress-turned-literary agent Jo Unwin, live in the Brixton district of London. The pair met in 1984 at the Edinburgh Festival, when he was playing bass guitar for the Cambridge Footlights Revue and she was in a comedy troupe called the Millies. They have two sons, Charles and Frederick, both of whom were born in Lambeth in south London. 
Giving very few interviews and avoiding all social media, Morris has been described as a recluse. Works Film Television Other Various works at BBC Radio Cambridgeshire (1986–1987) (presenter) No Known Cure (July 1987 – March 1990, BBC Radio Bristol) (presenter) Chris Morris (1988–1993, BBC GLR) (presenter) Morning Edition (July 1990, BBC Radio 5) (guest presenter) The Chris Morris Christmas Show (25 December 1990, BBC Radio 1) On the Hour (1991–1992, BBC Radio 4) (co-writer, performer) It's Only TV (September 1992, LWT) (unbroadcast pilot) Why Bother? (1994, BBC Radio 3) (performer, editor) The Chris Morris Music Show (1994, BBC Radio 1) (presenter) Blue Jam (1997–1999, BBC Radio 1) (writer, director, performer, editor) Second Class Male/Time To Go (1999, newspaper column for The Observer) The Smokehammer (2002, website) Absolute Atrocity Special (2002, newspaper pullout for The Observer) References External links 1962 births Living people Alumni of the University of Bristol English male comedians English radio DJs English radio writers English satirists English screenwriters English male screenwriters People educated at Stonyhurst College People from Colchester People from Buckden, Cambridgeshire English film directors Outstanding Debut by a British Writer, Director or Producer BAFTA Award winners Comedians from Essex
5399
https://en.wikipedia.org/wiki/Colorado
Colorado
Colorado (, other variants) is a state in the Mountain West sub-region of the Western United States. It encompasses most of the Southern Rocky Mountains, as well as the northeastern portion of the Colorado Plateau and the western edge of the Great Plains. Colorado is the eighth most extensive and 21st most populous U.S. state. The United States Census Bureau estimated the population of Colorado at 5,839,926 as of July 1, 2022, a 1.15% increase since the 2020 United States census. The region has been inhabited by Native Americans and their ancestors for at least 13,500 years and possibly much longer. The eastern edge of the Rocky Mountains was a major migration route for early peoples who spread throughout the Americas. In 1848, much of the region was annexed to the United States with the Treaty of Guadalupe Hidalgo. The Pike's Peak Gold Rush of 1858–1862 created an influx of settlers. On February 28, 1861, U.S. President James Buchanan signed an act creating the Territory of Colorado, and on August 1, 1876, President Ulysses S. Grant signed Proclamation 230 admitting Colorado to the Union as the 38th state. The Spanish adjective "colorado" means "colored red" or "ruddy". Colorado is nicknamed the "Centennial State" because it became a state one century (and four weeks) after the signing of the United States Declaration of Independence. Colorado is bordered by Wyoming to the north, Nebraska to the northeast, Kansas to the east, Oklahoma to the southeast, New Mexico to the south, Utah to the west, and touches Arizona to the southwest at the Four Corners. Colorado is noted for its landscape of mountains, forests, high plains, mesas, canyons, plateaus, rivers, and desert lands. Colorado is one of the Mountain States and is often considered to be part of the southwestern United States. The high plains of Colorado may be considered a part of the midwestern United States. Denver is the capital, the most populous city, and the center of the Front Range Urban Corridor. Colorado Springs is the second most populous city. Residents of the state are known as Coloradans, although the antiquated "Coloradoan" is occasionally used. Major parts of the economy include government and defense, mining, agriculture, tourism, and increasingly other kinds of manufacturing. With increasing temperatures and decreasing water availability, Colorado's agriculture, forestry, and tourism economies are expected to be heavily affected by climate change. History The region that is today the State of Colorado has been inhabited by Native Americans and their Paleoamerican ancestors for at least 13,500 years and possibly more than 37,000 years. The eastern edge of the Rocky Mountains was a major migration route that was important to the spread of early peoples throughout the Americas. The Lindenmeier site in Larimer County contains artifacts dating from approximately 8720 BCE. The Ancient Pueblo peoples lived in the valleys and mesas of the Colorado Plateau. The Ute Nation inhabited the mountain valleys of the Southern Rocky Mountains and the Western Rocky Mountains, even as far east as the Front Range of the present day. The Apache and the Comanche also inhabited the Eastern and Southeastern parts of the state. In the 17th century, the Arapaho and Cheyenne moved west from the Great Lakes region to hunt across the High Plains of Colorado and Wyoming. The Spanish Empire claimed Colorado as part of its New Mexico province before U.S. involvement in the region. The U.S. 
acquired a territorial claim to the eastern Rocky Mountains with the Louisiana Purchase from France in 1803. This U.S. claim conflicted with the claim by Spain to the upper Arkansas River Basin as the exclusive trading zone of its colony of Santa Fe de Nuevo México. In 1806, Zebulon Pike led a U.S. Army reconnaissance expedition into the disputed region. Colonel Pike and his troops were arrested by Spanish cavalrymen in the San Luis Valley the following February, taken to Chihuahua, and expelled from Mexico the following July. The U.S. relinquished its claim to all land south and west of the Arkansas River and south of 42nd parallel north and west of the 100th meridian west as part of its purchase of Florida from Spain with the Adams-Onís Treaty of 1819. The treaty took effect on February 22, 1821. Having settled its border with Spain, the U.S. admitted the southeastern portion of the Territory of Missouri to the Union as the state of Missouri on August 10, 1821. The remainder of Missouri Territory, including what would become northeastern Colorado, became an unorganized territory and remained so for 33 years over the question of slavery. After 11 years of war, Spain finally recognized the independence of Mexico with the Treaty of Córdoba signed on August 24, 1821. Mexico eventually ratified the Adams–Onís Treaty in 1831. The Texian Revolt of 1835–36 fomented a dispute between the U.S. and Mexico which eventually erupted into the Mexican–American War in 1846. Mexico surrendered its northern territory to the U.S. with the Treaty of Guadalupe Hidalgo after the war in 1848; this included much of the western and southern areas of the current state of Colorado. Most American settlers traveling overland west to the Oregon Country, the new goldfields of California, or the new Mormon settlements of the State of Deseret in the Salt Lake Valley, avoided the rugged Southern Rocky Mountains, and instead followed the North Platte River and Sweetwater River to South Pass (Wyoming), the lowest crossing of the Continental Divide between the Southern Rocky Mountains and the Central Rocky Mountains. In 1849, the Mormons of the Salt Lake Valley organized the extralegal State of Deseret, claiming the entire Great Basin and all lands drained by the rivers Green, Grand, and Colorado. The federal government of the U.S. flatly refused to recognize the new Mormon government, because it was theocratic and sanctioned plural marriage. Instead, the Compromise of 1850 divided the Mexican Cession and the northwestern claims of Texas into a new state and two new territories, the state of California, the Territory of New Mexico, and the Territory of Utah. On April 9, 1851, Mexican American settlers from the area of Taos settled the village of San Luis, then in the New Mexico Territory, later to become Colorado's first permanent Euro-American settlement. In 1854, Senator Stephen A. Douglas persuaded the U.S. Congress to divide the unorganized territory east of the Continental Divide into two new organized territories, the Territory of Kansas and the Territory of Nebraska, and an unorganized southern region known as the Indian territory. Each new territory was to decide the fate of slavery within its boundaries, but this compromise merely served to fuel animosity between free soil and pro-slavery factions. 
The gold seekers organized the Provisional Government of the Territory of Jefferson on August 24, 1859, but this new territory failed to secure approval from the Congress of the United States embroiled in the debate over slavery. The election of Abraham Lincoln for the President of the United States on November 6, 1860, led to the secession of nine southern slave states and the threat of civil war among the states. Seeking to augment the political power of the Union states, the Republican Party-dominated Congress quickly admitted the eastern portion of the Territory of Kansas into the Union as the free State of Kansas on January 29, 1861, leaving the western portion of the Kansas Territory, and its gold-mining areas, as unorganized territory. Territory act Thirty days later on February 28, 1861, outgoing U.S. President James Buchanan signed an Act of Congress organizing the free Territory of Colorado. The original boundaries of Colorado remain unchanged except for government survey amendments. In 1776, Spanish priest Silvestre Vélez de Escalante recorded that Native Americans in the area knew the river as el Rio Colorado for the red-brown silt that the river carried from the mountains. In 1859, a U.S. Army topographic expedition led by Captain John Macomb located the confluence of the Green River with the Grand River in what is now Canyonlands National Park in Utah. The Macomb party designated the confluence as the source of the Colorado River. On April 12, 1861, South Carolina artillery opened fire on Fort Sumter to start the American Civil War. While many gold seekers held sympathies for the Confederacy, the vast majority remained fiercely loyal to the Union cause. In 1862, a force of Texas cavalry invaded the Territory of New Mexico and captured Santa Fe on March 10. The object of this Western Campaign was to seize or disrupt the gold fields of Colorado and California and to seize ports on the Pacific Ocean for the Confederacy. A hastily organized force of Colorado volunteers force-marched from Denver City, Colorado Territory, to Glorieta Pass, New Mexico Territory, in an attempt to block the Texans. On March 28, the Coloradans and local New Mexico volunteers stopped the Texans at the Battle of Glorieta Pass, destroyed their cannon and supply wagons, and dispersed 500 of their horses and mules. The Texans were forced to retreat to Santa Fe. Having lost the supplies for their campaign and finding little support in New Mexico, the Texans abandoned Santa Fe and returned to San Antonio in defeat. The Confederacy made no further attempts to seize the Southwestern United States. In 1864, Territorial Governor John Evans appointed the Reverend John Chivington as Colonel of the Colorado Volunteers with orders to protect white settlers from Cheyenne and Arapaho warriors who were accused of stealing cattle. Colonel Chivington ordered his troops to attack a band of Cheyenne and Arapaho encamped along Sand Creek. Chivington reported that his troops killed more than 500 warriors. The militia returned to Denver City in triumph, but several officers reported that the so-called battle was a blatant massacre of Indians at peace, that most of the dead were women and children, and that the bodies of the dead had been hideously mutilated and desecrated. Three U.S. Army inquiries condemned the action, and incoming President Andrew Johnson asked Governor Evans for his resignation, but none of the perpetrators was ever punished. This event is now known as the Sand Creek massacre. 
In the midst and aftermath of the Civil War, many discouraged prospectors returned to their homes, but a few stayed and developed mines, mills, farms, ranches, roads, and towns in Colorado Territory. On September 14, 1864, James Huff discovered silver near Argentine Pass, the first of many silver strikes. In 1867, the Union Pacific Railroad laid its tracks west to Weir, now Julesburg, in the northeast corner of the Territory. The Union Pacific linked up with the Central Pacific Railroad at Promontory Summit, Utah, on May 10, 1869, to form the First transcontinental railroad. The Denver Pacific Railway reached Denver in June of the following year, and the Kansas Pacific arrived two months later to forge the second line across the continent. In 1872, rich veins of silver were discovered in the San Juan Mountains on the Ute Indian reservation in southwestern Colorado. The Ute people were removed from the San Juans the following year. Statehood The United States Congress passed an enabling act on March 3, 1875, specifying the requirements for the Territory of Colorado to become a state. On August 1, 1876 (four weeks after the Centennial of the United States), U.S. President Ulysses S. Grant signed a proclamation admitting Colorado to the Union as the 38th state and earning it the moniker "Centennial State". The discovery of a major silver lode near Leadville in 1878 triggered the Colorado Silver Boom. The Sherman Silver Purchase Act of 1890 invigorated silver mining, and Colorado's last, but greatest, gold strike at Cripple Creek a few months later lured a new generation of gold seekers. Colorado women were granted the right to vote on November 7, 1893, making Colorado the second state to grant universal suffrage and the first one by a popular vote (of Colorado men). The repeal of the Sherman Silver Purchase Act in 1893 led to a staggering collapse of the mining and agricultural economy of Colorado, but the state slowly and steadily recovered. Between the 1880s and 1930s, Denver's floriculture industry developed into a major industry in Colorado. This period became known locally as the Carnation Gold Rush. Twentieth and twenty-first centuries Poor labor conditions and discontent among miners resulted in several major clashes between strikers and the Colorado National Guard, including the 1903–1904 Western Federation of Miners Strike and Colorado Coalfield War, the latter of which included the Ludlow massacre that killed a dozen women and children. Both the 1913–1914 Coalfield War and the Denver streetcar strike of 1920 resulted in federal troops intervening to end the violence. In 1927, the 1927-28 Colorado coal strike occurred and was ultimately successful in winning a dollar a day increase in wages. During it however the Columbine Mine massacre resulted in six dead strikers following a confrontation with Colorado Rangers. In a separate incident in Trinidad the mayor was accused of deputizing members of the KKK against the striking workers. More than 5,000 Colorado miners—many immigrants—are estimated to have died in accidents since records were first formally collected following an 1884 accident in Crested Butte that killed 59. In 1924, the Ku Klux Klan Colorado Realm achieved dominance in Colorado politics. With peak membership levels, the Second Klan levied significant control over both the local and state Democrat and Republican parties, particularly in the governor's office and city governments of Denver, Cañon City, and Durango. 
A particularly strong element of the Klan controlled the Denver Police. Cross burnings became semi-regular occurrences in cities such as Florence and Pueblo. The Klan targeted African-Americans, Catholics, Eastern European immigrants, and other groups that were not white Protestants. Efforts by non-Klan lawmen and lawyers, including Philip Van Cise, led to a rapid decline in the organization's power, with membership waning significantly by the end of the 1920s. Colorado became the first western state to host a major political convention when the Democratic Party met in Denver in 1908. By the U.S. census in 1930, the population of Colorado first exceeded one million residents. Colorado suffered greatly through the Great Depression and the Dust Bowl of the 1930s, but a major wave of immigration following World War II boosted Colorado's fortunes. Tourism became a mainstay of the state economy, and high technology became an important economic engine. The United States Census Bureau estimated that the population of Colorado exceeded five million in 2009. On September 11, 1957, a plutonium fire occurred at the Rocky Flats Plant, which resulted in significant plutonium contamination of surrounding populated areas. From the 1940s to the 1970s, many protest movements gained momentum in Colorado, predominantly in Denver. These included the Chicano Movement, a civil rights and social movement of Mexican Americans emphasizing a Chicano identity, which is widely considered to have begun in Denver. The National Chicano Liberation Youth Conference was held in Colorado in March 1969. In 1967, Colorado was the first state to loosen restrictions on abortion when Governor John Love signed a law allowing abortions in cases of rape, incest, or threats to the woman's mental or physical health. Many states followed Colorado's lead in loosening abortion laws in the 1960s and 1970s. Since the late 1990s, Colorado has been the site of multiple major mass shootings, including the infamous Columbine High School massacre in 1999, which made international news; two gunmen killed 12 students and one teacher before committing suicide, and the incident has since spawned many copycat attacks. On July 20, 2012, a gunman killed 12 people in a movie theater in Aurora. The state responded with tighter restrictions on firearms, including introducing a limit on magazine capacity. On March 22, 2021, a gunman killed 10 people, including a police officer, in a King Soopers supermarket in Boulder. In an instance of anti-LGBT violence, a gunman killed five people at a nightclub in Colorado Springs during the night of November 19–20, 2022. Four warships of the U.S. Navy have been named the USS Colorado. The first USS Colorado was named for the Colorado River and served in the Civil War and later the Asiatic Squadron, where it was attacked during the 1871 Korean Expedition. The later three ships were named in honor of the state, including an armored cruiser and the battleship USS Colorado, the latter of which was the lead ship of her class and served in World War II in the Pacific beginning in 1941. At the time of the attack on Pearl Harbor, the battleship USS Colorado was located at the naval base in San Diego, California, and thus went unscathed. The most recent vessel to bear the name USS Colorado is the Virginia-class submarine USS Colorado (SSN-788), which was commissioned in 2018. Geography Colorado is notable for its diverse geography, which includes alpine mountains, high plains, deserts with huge sand dunes, and deep canyons.
In 1861, the United States Congress defined the boundaries of the new Territory of Colorado exclusively by lines of latitude and longitude, stretching from 37°N to 41°N latitude, and from 102°02′48″W to 109°02′48″W longitude (25°W to 32°W from the Washington Meridian). After years of government surveys, the borders of Colorado were officially defined by 697 boundary markers and 697 straight boundary lines. Colorado, Wyoming, and Utah are the only states that have their borders defined solely by straight boundary lines with no natural features. The southwest corner of Colorado is the Four Corners Monument at 36°59′56″N, 109°2′43″W. The Four Corners Monument, located at the place where Colorado, New Mexico, Arizona, and Utah meet, is the only place in the United States where four states meet. Plains Approximately half of Colorado is flat and rolling land. East of the Rocky Mountains are the Colorado Eastern Plains of the High Plains, the section of the Great Plains within Colorado at elevations ranging from roughly . The Colorado plains are mostly prairies but also include deciduous forests, buttes, and canyons. Precipitation averages annually. Eastern Colorado is presently mainly farmland and rangeland, along with small farming villages and towns. Corn, wheat, hay, soybeans, and oats are all typical crops. Most villages and towns in this region boast both a water tower and a grain elevator. Irrigation water is available from both surface and subterranean sources. Surface water sources include the South Platte, the Arkansas River, and a few other streams. Subterranean water is generally accessed through artesian wells. Heavy usage of these wells for irrigation purposes caused underground water reserves to decline in the region. Eastern Colorado also hosts a considerable amount and range of livestock, such as cattle ranches and hog farms. Front Range Roughly 70% of Colorado's population resides along the eastern edge of the Rocky Mountains in the Front Range Urban Corridor between Cheyenne, Wyoming, and Pueblo, Colorado. This region is partially protected from prevailing storms that blow in from the Pacific Ocean region by the high Rockies in the middle of Colorado. The "Front Range" includes Denver, Boulder, Fort Collins, Loveland, Castle Rock, Colorado Springs, Pueblo, Greeley, and other townships and municipalities in between. On the other side of the Rockies, the significant population centers in western Colorado (which is known as "The Western Slope") are the cities of Grand Junction, Durango, and Montrose. Mountains To the west of the Great Plains of Colorado rises the eastern slope of the Rocky Mountains. Notable peaks of the Rocky Mountains include Longs Peak, Mount Blue Sky, Pikes Peak, and the Spanish Peaks near Walsenburg, in southern Colorado. This area drains to the east and the southeast, ultimately either via the Mississippi River or the Rio Grande into the Gulf of Mexico. The Rocky Mountains within Colorado contain 53 true peaks with a total of 58 that are or higher in elevation above sea level, known as fourteeners. These mountains are largely covered with trees such as conifers and aspens up to the tree line, at an elevation of about in southern Colorado to about in northern Colorado. Above this tree line, only alpine vegetation grows. Only small parts of the Colorado Rockies are snow-covered year-round. Much of the alpine snow melts by mid-August except for a few snow-capped peaks and a few small glaciers. 
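The dual longitude notation quoted above for the 1861 territorial boundary (102°02′48″W to 109°02′48″W from Greenwich, or 25°W to 32°W from the Washington Meridian) is related by simple offset arithmetic. The sketch below is an explanatory aid rather than part of the original article; the helper names are hypothetical, and the Washington Meridian offset it prints is derived only from the figures quoted above, not from an independent source.

```python
# Illustrative sketch: relate the Greenwich and Washington Meridian longitudes
# quoted for Colorado's 1861 territorial boundary. Helper names are hypothetical.

def dms_to_deg(degrees, minutes=0, seconds=0):
    """Convert degrees/minutes/seconds to decimal degrees."""
    return degrees + minutes / 60 + seconds / 3600

# Boundary longitudes west of Greenwich, as quoted in the text.
east_edge_greenwich = dms_to_deg(102, 2, 48)   # 102 deg 02' 48" W
west_edge_greenwich = dms_to_deg(109, 2, 48)   # 109 deg 02' 48" W

# The same edges expressed west of the Washington Meridian, as quoted.
east_edge_washington = 25.0
west_edge_washington = 32.0

# Offset of the Washington Meridian implied by the quoted figures: 77 deg 02' 48" W.
offset = east_edge_greenwich - east_edge_washington
assert abs(offset - dms_to_deg(77, 2, 48)) < 1e-9
assert abs((west_edge_greenwich - west_edge_washington) - offset) < 1e-9

# Either way, the territory spans exactly 7 degrees of longitude (32 - 25).
assert abs((west_edge_greenwich - east_edge_greenwich) - 7.0) < 1e-9
print(f"Implied Washington Meridian offset: {offset:.6f} degrees west of Greenwich")
```

Either reference frame describes the same survey lines; the Greenwich figures are simply the Washington-relative figures shifted by that constant offset.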
The Colorado Mineral Belt, stretching from the San Juan Mountains in the southwest to Boulder and Central City on the Front Range, contains most of the historic gold- and silver-mining districts of Colorado. Mount Elbert is the highest summit of the Rocky Mountains. The 30 highest major summits of the Rocky Mountains of North America are all within the state. The summit of Mount Elbert at elevation in Lake County is the highest point in Colorado and the Rocky Mountains of North America. Colorado is the only U.S. state that lies entirely above 1,000 meters elevation. The point where the Arikaree River flows out of Yuma County, Colorado, and into Cheyenne County, Kansas, is the lowest in Colorado at elevation. This point, which is the highest low elevation point of any state, is higher than the high elevation points of 18 states and the District of Columbia. Continental Divide The Continental Divide of the Americas extends along the crest of the Rocky Mountains. The area of Colorado to the west of the Continental Divide is called the Western Slope of Colorado. West of the Continental Divide, water flows to the southwest via the Colorado River and the Green River into the Gulf of California. Within the interior of the Rocky Mountains are several large parks, which are high, broad basins. In the north, on the east side of the Continental Divide, is the North Park of Colorado. The North Park is drained by the North Platte River, which flows north into Wyoming and Nebraska. Just to the south of North Park, but on the western side of the Continental Divide, is the Middle Park of Colorado, which is drained by the Colorado River. The South Park of Colorado is the region of the headwaters of the South Platte River. South Central region In south-central Colorado is the large San Luis Valley, where the headwaters of the Rio Grande are located. The northern part of the valley is the San Luis Closed Basin, an endorheic basin that helped create the Great Sand Dunes. The valley sits between the Sangre de Cristo Mountains and the San Juan Mountains. The Rio Grande drains due south into New Mexico, Texas, and Mexico. Across the Sangre de Cristo Range to the east of the San Luis Valley lies the Wet Mountain Valley. These basins, particularly the San Luis Valley, lie along the Rio Grande Rift, a major geological formation of the Rocky Mountains, and its branches. Western Slope The Western Slope of Colorado includes the western face of the Rocky Mountains and all of the area to the western border. This area includes several terrains and climates, from alpine mountains to arid deserts. The Western Slope includes many ski resort towns in the Rocky Mountains and towns west to the Utah border. It is less populous than the Front Range but includes a large number of national parks and monuments. The northwestern corner of Colorado is a sparsely populated region, and it contains part of the noted Dinosaur National Monument, which is not only a paleontological area but also a scenic area of rocky hills, canyons, arid desert, and streambeds. Here, the Green River briefly crosses over into Colorado. The Western Slope of Colorado is drained by the Colorado River and its tributaries (primarily the Gunnison River, the Green River, and the San Juan River). The Colorado River flows through Glenwood Canyon, and then through an arid valley made up of desert from Rifle to Parachute, through the desert canyon of De Beque Canyon, and into the arid desert of Grand Valley, where the city of Grand Junction is located.
Also prominent is the Grand Mesa, which lies to the southeast of Grand Junction; the high San Juan Mountains, a rugged mountain range; and to the north and west of the San Juan Mountains, the Colorado Plateau. Grand Junction, Colorado, at the confluence of the Colorado and Gunnison Rivers, is the largest city on the Western Slope. Grand Junction and Durango are the only major centers of television broadcasting west of the Continental Divide in Colorado, though most mountain resort communities publish daily newspapers. Grand Junction is located at the junction of Interstate 70 and US 50, the only major highways in western Colorado. Grand Junction is also along the major railroad of the Western Slope, the Union Pacific. This railroad also provides the tracks for Amtrak's California Zephyr passenger train, which crosses the Rocky Mountains between Denver and Grand Junction. The Western Slope includes multiple notable destinations in the Colorado Rocky Mountains, including Glenwood Springs, with its resort hot springs, and the ski resorts of Aspen, Breckenridge, Vail, Crested Butte, Steamboat Springs, and Telluride. Higher education in and near the Western Slope can be found at Colorado Mesa University in Grand Junction, Western Colorado University in Gunnison, Fort Lewis College in Durango, and Colorado Mountain College in Glenwood Springs and Steamboat Springs. The Four Corners Monument in the southwest corner of Colorado marks the common boundary of Colorado, New Mexico, Arizona, and Utah, the only such place in the United States. Climate The climate of Colorado is more complex than that of states outside the Mountain States region. Unlike most other states, southern Colorado is not always warmer than northern Colorado. Most of Colorado is made up of mountains, foothills, high plains, and desert lands. Mountains and surrounding valleys greatly affect the local climate. Northeast, east, and southeast Colorado are mostly the high plains, while northern Colorado is a mix of high plains, foothills, and mountains. Northwest and west Colorado are predominantly mountainous, with some desert lands mixed in. Southwest and southern Colorado are a complex mixture of desert and mountain areas. Eastern Plains The climate of the Eastern Plains is semi-arid (Köppen climate classification: BSk) with low humidity and moderate precipitation, usually from annually, although many areas near the rivers have a semi-humid climate. The area is known for its abundant sunshine and cool, clear nights, which give this area a large average diurnal temperature range. The difference between daytime highs and nighttime lows can be considerable, as warmth dissipates to space during clear nights when the heat radiation is not trapped by clouds. The Front Range urban corridor, where most of the population of Colorado resides, lies in a pronounced precipitation shadow as a result of being on the lee side of the Rocky Mountains. In summer, this area can have many days above 95 °F (35 °C) and often 100 °F (38 °C). On the plains, the winter lows usually range from 25 to −10 °F (−4 to −23 °C). About 75% of the precipitation falls within the growing season, from April to September, but this area is very prone to droughts. Most of the precipitation comes from thunderstorms, which can be severe, and from major snowstorms that occur in the winter and early spring. Otherwise, winters tend to be mostly dry and cold. In much of the region, March is the snowiest month.
April and May are normally the rainiest months, while April is the wettest month overall. The Front Range cities closer to the mountains tend to be warmer in the winter due to Chinook winds, which warm the area, sometimes bringing temperatures of 70 °F (21 °C) or higher in winter. The average July temperature is 55 °F (13 °C) in the morning and 90 °F (32 °C) in the afternoon. The average January temperature is 18 °F (−8 °C) in the morning and 48 °F (9 °C) in the afternoon, although the variation between consecutive days can be as much as 40 °F (22 °C). Front Range foothills Just west of the plains and into the foothills, there is a wide variety of climate types. Locations merely a few miles apart can experience entirely different weather depending on the topography. Most valleys have a semi-arid climate, not unlike the eastern plains, which transitions to an alpine climate at the highest elevations. Microclimates also exist in local areas that run nearly the entire spectrum of climates, including subtropical highland (Cfb/Cwb), humid subtropical (Cfa), humid continental (Dfa/Dfb), Mediterranean (Csa/Csb) and subarctic (Dfc). Extreme weather Extreme weather changes are common in Colorado, although a significant portion of the extreme weather occurs in the least populated areas of the state. Thunderstorms are common east of the Continental Divide in the spring and summer, yet are usually brief. Hail is a common sight in the mountains east of the Divide and across the eastern Plains, especially the northeast part of the state. Hail is the most commonly reported warm-season severe weather hazard, and occasionally causes human injuries, as well as significant property damage. The eastern Plains are subject to some of the biggest hail storms in North America. Notable examples are the severe hailstorms that hit Denver on July 11, 1990, and May 8, 2017, the latter being the costliest ever in the state. The Eastern Plains are part of the extreme western portion of Tornado Alley; some damaging tornadoes in the Eastern Plains include the 1990 Limon F3 tornado and the 2008 Windsor EF3 tornado, which devastated a small town. Portions of the eastern Plains see especially frequent tornadoes, both those spawned from mesocyclones in supercell thunderstorms and from less intense landspouts, such as within the Denver convergence vorticity zone (DCVZ). The Plains are also susceptible to occasional floods and particularly severe flash floods, which are caused both by thunderstorms and by the rapid melting of snow in the mountains during warm weather. Notable examples include the 1965 Denver Flood, the Big Thompson River flooding of 1976, and the 2013 Colorado floods. Hot weather is common during summers in Denver. The city's record in 1901 for the number of consecutive days above 90 °F (32 °C) was broken during the summer of 2008. The new record of 24 consecutive days surpassed the previous record by almost a week. Much of Colorado is very dry, with the state averaging only of precipitation per year statewide. The state rarely experiences a time when some portion is not in some degree of drought. The lack of precipitation contributes to the severity of wildfires in the state, such as the Hayman Fire of 2002. Other notable fires include the Fourmile Canyon Fire of 2010, the Waldo Canyon Fire and High Park Fire of June 2012, and the Black Forest Fire of June 2013.
Even these fires were exceeded in severity by the Pine Gulch Fire, Cameron Peak Fire, and East Troublesome Fire in 2020, which are the three largest fires in Colorado history (see 2020 Colorado wildfires). The Marshall Fire, which started on December 30, 2021, while not the largest in state history, was the most destructive ever in terms of property loss (see Marshall Fire). However, some of the mountainous regions of Colorado receive a huge amount of moisture from winter snowfalls. The spring melts of these snows often cause great waterflows in the Yampa River, the Colorado River, the Rio Grande, the Arkansas River, the North Platte River, and the South Platte River. Water flowing out of the Colorado Rocky Mountains is a very significant source of water for the farms, towns, and cities of the southwest states of New Mexico, Arizona, Utah, and Nevada, as well as Midwestern states such as Nebraska and Kansas, and the southern states of Oklahoma and Texas. A significant amount of water is also diverted for use in California; occasionally (formerly naturally and consistently), the flow of water reaches northern Mexico. Climate change Records The highest official ambient air temperature ever recorded in Colorado was on July 20, 2019, at John Martin Dam. The lowest official air temperature was on February 1, 1985, at Maybell. Extreme temperatures Earthquakes Despite its mountainous terrain, Colorado is relatively quiet seismically. The U.S. National Earthquake Information Center is located in Golden. On August 22, 2011, a magnitude 5.3 earthquake occurred west-southwest of the city of Trinidad. There were no casualties and only a small amount of damage was reported. It was the second-largest earthquake in Colorado's history. A magnitude 5.7 earthquake was recorded in 1973. In the early morning hours of August 24, 2018, four minor earthquakes rattled Colorado, ranging from magnitude 2.9 to 4.3. Colorado has recorded 525 earthquakes since 1973, a majority of which range from 2 to 3.5 on the Richter scale. Fauna The gray wolf (Canis lupus) was extirpated from Colorado by trapping and poisoning in the 1930s, and the last wild wolf in the state was shot in 1945. A wolf pack recolonized Moffat County in northwestern Colorado in 2019. Cattle farmers have expressed concern that a returning wolf population potentially threatens their herds. Coloradans voted to reintroduce gray wolves in 2020, with the state committing to a plan to have a population reestablished by 2022 and permitting non-lethal methods of driving off wolves attacking livestock and pets. While there is fossil evidence of Harrington's mountain goat in Colorado between at least 800,000 years ago and its extinction with other megafauna roughly 11,000 years ago, the mountain goat is not native to Colorado but was instead introduced to the state between 1947 and 1972. Despite being an artificially introduced species, the state declared mountain goats a native species in 1993. In 2013, 2014, and 2019, an unknown illness killed nearly all mountain goat kids, leading to a Colorado Parks and Wildlife investigation. The native population of pronghorn in Colorado has varied wildly over the last century, reaching a low of only 15,000 individuals during the 1960s. However, conservation efforts succeeded in bringing the population back up to a stable level of roughly 66,000 by 2013.
The population was estimated to have reached 85,000 by 2019 and had increasingly more run-ins with the increased suburban housing along the eastern Front Range. State wildlife officials suggested that landowners would need to modify fencing to allow the greater number of pronghorns to move unabated through the newly developed land. Pronghorns are most readily found in the northern and eastern portions of the state, with some populations also in the western San Juan Mountains. Common wildlife found in the mountains of Colorado include mule deer, southwestern red squirrel, golden-mantled ground squirrel, yellow-bellied marmot, moose, American pika, and red fox, all at exceptionally high numbers, though moose are not native to the state. The foothills include deer, fox squirrel, desert cottontail, mountain cottontail, and coyote. The prairies are home to black-tailed prairie dog, the endangered swift fox, American badger, and white-tailed jackrabbit. Counties The State of Colorado is divided into 64 counties. Two of these counties, the City and County of Broomfield and the City and County of Denver, have consolidated city and county governments. Counties are important units of government in Colorado since there are no civil townships or other minor civil divisions. The most populous county in Colorado is El Paso County, the home of the City of Colorado Springs. The second most populous county is the City and County of Denver, the state capital. Five of the 64 counties now have more than 500,000 residents, while 12 have fewer than 5,000 residents. The ten most populous Colorado counties are all located in the Front Range Urban Corridor. Mesa County is the most populous county on the Colorado Western Slope. Municipalities Colorado has 272 active incorporated municipalities, comprising 197 towns, 73 cities, and two consolidated city and county governments. At the 2020 United States census, 4,299,942 of the 5,773,714 Colorado residents (74.47%) lived in one of these 272 municipalities. Another 714,417 residents (12.37%) lived in one of the 210 census-designated places, while the remaining 759,355 residents (13.15%) lived in the many rural and mountainous areas of the state. Colorado municipalities operate under one of five types of municipal governing authority. Colorado currently has two consolidated city and county governments, 61 home rule cities, 12 statutory cities, 35 home rule towns, 161 statutory towns, and one territorial charter municipality. The most populous municipality is the City and County of Denver. Colorado has 12 municipalities with more than 100,000 residents, and 17 with fewer than 100 residents. The 16 most populous Colorado municipalities are all located in the Front Range Urban Corridor. The City of Grand Junction is the most populous municipality on the Colorado Western Slope. The Town of Carbonate has had no year-round population since the 1890 census due to its severe winter weather and difficult access. Unincorporated communities In addition to its 272 municipalities, Colorado has 210 unincorporated census-designated places (CDPs) and many other small communities. The most populous unincorporated community in Colorado is Highlands Ranch south of Denver. The seven most populous CDPs are located in the Front Range Urban Corridor. The Clifton CDP is the most populous CDP on the Colorado Western Slope. Special districts Colorado has more than 4,000 special districts, most with property tax authority. 
These districts may provide schools, law enforcement, fire protection, water, sewage, drainage, irrigation, transportation, recreation, infrastructure, cultural facilities, business support, redevelopment, or other services. Some of these districts have the authority to levy sales tax as well as property tax and use fees. This has led to a hodgepodge of sales tax and property tax rates in Colorado. There are some street intersections in Colorado with a different sales tax rate on each corner, sometimes substantially different. Some of the more notable Colorado districts are: The Regional Transportation District (RTD), which affects the counties of Denver, Boulder, Jefferson, and portions of Adams, Arapahoe, Broomfield, and Douglas Counties The Scientific and Cultural Facilities District (SCFD), a special regional tax district with physical boundaries contiguous with county boundaries of Adams, Arapahoe, Boulder, Broomfield, Denver, Douglas, and Jefferson Counties It is a 0.1% retail sales and uses tax (one penny on every $10). According to the Colorado statute, the SCFD distributes the money to local organizations on an annual basis. These organizations must provide for the enlightenment and entertainment of the public through the production, presentation, exhibition, advancement, or preservation of art, music, theater, dance, zoology, botany, natural history, or cultural history. As directed by statute, SCFD recipient organizations are currently divided into three "tiers" among which receipts are allocated by percentage. Tier I includes regional organizations: the Denver Art Museum, the Denver Botanic Gardens, the Denver Museum of Nature and Science, the Denver Zoo, and the Denver Center for the Performing Arts. It receives 65.5%. Tier II currently includes 26 regional organizations. Tier II receives 21%. Tier III has more than 280 local organizations such as small theaters, orchestras, art centers, natural history, cultural history, and community groups. Tier III organizations apply for funding from the county cultural councils via a grant process. This tier receives 13.5%. An 11-member board of directors oversees the distributions by the Colorado Revised Statutes. Seven board members are appointed by county commissioners (in Denver, the Denver City Council) and four members are appointed by the Governor of Colorado. The Football Stadium District (FD or FTBL), approved by the voters to pay for and help build the Denver Broncos' stadium Empower Field at Mile High. Local Improvement Districts (LID) within designated areas of Jefferson and Broomfield counties. The Metropolitan Major League Baseball Stadium District, approved by voters to pay for and help build the Colorado Rockies' stadium Coors Field. Regional Transportation Authority (RTA) taxes at varying rates in Basalt, Carbondale, Glenwood Springs, and Gunnison County. Statistical areas Most recently on March 6, 2020, the Office of Management and Budget defined 21 statistical areas for Colorado comprising four combined statistical areas, seven metropolitan statistical areas, and ten micropolitan statistical areas. The most populous of the seven metropolitan statistical areas in Colorado is the 10-county Denver-Aurora-Lakewood, CO Metropolitan Statistical Area with a population of 2,963,821 at the 2020 United States census, an increase of +15.29% since the 2010 census. The more extensive 12-county Denver-Aurora, CO Combined Statistical Area had a population of 3,623,560 at the 2020 census, an increase of +17.23% since the 2010 census. 
The most populous extended metropolitan region in the Rocky Mountain Region is the 18-county Front Range Urban Corridor along the northeast face of the Southern Rocky Mountains. This region, with Denver at its center, had a population of 5,055,344 at the 2020 census, an increase of +16.65% since the 2010 census. Demographics The United States Census Bureau estimated the population of Colorado on July 1, 2022, at 5,839,926, a 1.15% increase since the 2020 United States census. People of Hispanic and Latino American (of any race) heritage made up 20.7% of the population. According to the 2000 census, the largest ancestry groups in Colorado are German (22%), including those of Swiss and Austrian descent, Mexican (18%), Irish (12%), and English (12%). Persons reporting German ancestry are especially numerous in the Front Range, the Rockies (west-central counties), and the eastern parts/High Plains. Colorado has a high proportion of Hispanic, mostly Mexican-American, citizens in Metropolitan Denver and Colorado Springs, as well as the smaller cities of Greeley and Pueblo, and elsewhere. Southern, Southwestern, and Southeastern Colorado have a large number of Hispanos, the descendants of the early settlers of colonial Spanish origin. In 1940, the U.S. Census Bureau reported Colorado's population as 8.2% Hispanic and 90.3% non-Hispanic white. The Hispanic population of Colorado has continued to grow quickly over the past decades. By 2019, Hispanics made up 22% of Colorado's population, and non-Hispanic whites made up 70%. Spoken English in Colorado has many Spanish idioms. Colorado also has some large African-American communities located in Denver, in the neighborhoods of Montbello, Five Points, Whittier, and many other East Denver areas. The state has sizable numbers of Asian-Americans of Mongolian, Chinese, Filipino, Korean, Southeast Asian, and Japanese descent. The highest population of Asian Americans can be found on the south and southeast side of Denver, as well as some on Denver's southwest side. The Denver metropolitan area is considered more liberal and diverse than much of the state when it comes to political issues and environmental concerns. The majority of Colorado's immigrants are from Mexico, India, China, Vietnam, Korea, Germany, and Canada. There were a total of 70,331 births in Colorado in 2006 (a birth rate of 14.6 per thousand). In 2007, non-Hispanic whites were involved in 59.1% of all births. Some 14.06% of those births involved a non-Hispanic white person and someone of a different race, most often a couple including one Hispanic person. A birth where at least one Hispanic person was involved accounted for 43% of the births in Colorado. As of the 2010 census, Colorado has the seventh-highest percentage of Hispanics (20.7%) in the U.S., behind New Mexico (46.3%), California (37.6%), Texas (37.6%), Arizona (29.6%), Nevada (26.5%), and Florida (22.5%). Per the 2000 census, the Hispanic population is estimated to be 918,899, or approximately 20% of the state's total population. Colorado has the 5th-largest population of Mexican-Americans, behind California, Texas, Arizona, and Illinois. Colorado has the 6th-highest percentage of Mexican-Americans, behind New Mexico, California, Texas, Arizona, and Nevada. Birth data In 2011, 46% of Colorado's population younger than the age of one were minorities, meaning that they had at least one parent who was not non-Hispanic white.
Note: Births in the table do not add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number. Since 2016, data for births of White Hispanic origin are not collected separately but are included in one Hispanic group; persons of Hispanic origin may be of any race. In 2017, Colorado recorded the second-lowest fertility rate in the United States outside of New England, after Oregon, at 1.63 children per woman. Significant contributing factors to the decline in pregnancies were the Title X Family Planning Program and an intrauterine device grant from Warren Buffett's family. Language English, the official language of the state, is the language most commonly spoken in Colorado. One Native American language still spoken in Colorado is the Colorado River Numic language, also known as the Ute dialect. Religion Major religious affiliations of the people of Colorado as of 2014 were 64% Christian, including 44% Protestant, 16% Roman Catholic, 3% Mormon, and 1% Eastern Orthodox. Other religious breakdowns according to the Pew Research Center were 1% Jewish, 1% Muslim, 1% Buddhist, and 4% other. The religiously unaffiliated made up 29% of the population. In 2020, according to the Public Religion Research Institute, Christians made up 66% of the population. Judaism was also reported to have increased in this separate study, forming 2% of the religious landscape, while the religiously unaffiliated were reported to form 28% of the population. In 2022, the same organization reported that 61% were Christian (39% Protestant, 19% Catholic, 2% Mormon, 1% Eastern Orthodox), 2% New Age, 1% Jewish, 1% Hindu, and 34% religiously unaffiliated. According to the Association of Religion Data Archives, the largest Christian denominations by the number of adherents in 2010 were the Catholic Church with 811,630; multi-denominational Evangelical Protestants with 229,981; and the Church of Jesus Christ of Latter-day Saints with 151,433. In 2020, the Association of Religion Data Archives determined the largest Christian denominations were Catholics (873,236), non/multi/inter-denominational Protestants (406,798), and Mormons (150,509). Among its non-Christian population, the 2020 study counted 12,500 Hindus, 7,101 Hindu Yogis, and 17,369 Buddhists. Our Lady of Guadalupe Catholic Church was the first permanent Catholic parish in modern-day Colorado and was constructed by Spanish colonists from New Mexico in modern-day Conejos. Latin Church Catholics are served by three dioceses: the Archdiocese of Denver and the Dioceses of Colorado Springs and Pueblo. The first permanent settlement by members of the Church of Jesus Christ of Latter-day Saints in Colorado arrived from Mississippi and initially camped along the Arkansas River just east of the present-day site of Pueblo. Health Colorado is generally considered among the healthiest states by behavioral and healthcare researchers. Among the positive contributing factors are the state's well-known outdoor recreation opportunities and initiatives. However, there is a stratification of health metrics, with wealthier counties such as Douglas and Pitkin performing significantly better relative to southern, less wealthy counties such as Huerfano and Las Animas. Obesity According to several studies, Coloradans have the lowest rates of obesity of any state in the US. Some 24% of the population was considered medically obese, and while the lowest in the nation, the percentage had increased from 17% in 2004.
Life expectancy According to a report in the Journal of the American Medical Association, residents of Colorado had a 2014 life expectancy of 80.21 years, the longest of any U.S. state. Homelessness According to HUD's 2022 Annual Homeless Assessment Report, there were an estimated 10,397 homeless people in Colorado. Economy Total employment (2019): 2,473,192 Number of employer establishments: 174,258 The total state product in 2015 was $318.6 billion. Median Annual Household Income in 2016 was $70,666, 8th in the nation. Per capita personal income in 2010 was $51,940, ranking Colorado 11th in the nation. The state's economy broadened from its mid-19th-century roots in mining when irrigated agriculture developed, and by the late 19th century, raising livestock had become important. Early industry was based on the extraction and processing of minerals and agricultural products. Current agricultural products are cattle, wheat, dairy products, corn, and hay. The federal government operates several federal facilities in the state, including NORAD (North American Aerospace Defense Command), United States Air Force Academy, Schriever Air Force Base located approximately 10 miles (16 kilometers) east of Peterson Air Force Base, and Fort Carson, both located in Colorado Springs within El Paso County; NOAA, the National Renewable Energy Laboratory (NREL) in Golden, and the National Institute of Standards and Technology in Boulder; U.S. Geological Survey and other government agencies at the Denver Federal Center near Lakewood; the Denver Mint, Buckley Space Force Base, the Tenth Circuit Court of Appeals, and the Byron G. Rogers Federal Building and United States Courthouse in Denver; and a federal Supermax Prison and other federal prisons near Cañon City. In addition to these and other federal agencies, Colorado has abundant National Forest land and four National Parks that contribute to federal ownership of of land in Colorado, or 37% of the total area of the state. In the second half of the 20th century, the industrial and service sectors expanded greatly. The state's economy is diversified and is notable for its concentration on scientific research and high-technology industries. Other industries include food processing, transportation equipment, machinery, chemical products, the extraction of metals such as gold (see Gold mining in Colorado), silver, and molybdenum. Colorado now also has the largest annual production of beer in any state. Denver is an important financial center. The state's diverse geography and majestic mountains attract millions of tourists every year, including 85.2 million in 2018. Tourism contributes greatly to Colorado's economy, with tourists generating $22.3 billion in 2018. Several nationally known brand names have originated in Colorado factories and laboratories. From Denver came the forerunner of telecommunications giant Qwest in 1879, Samsonite luggage in 1910, Gates belts and hoses in 1911, and Russell Stover Candies in 1923. Kuner canned vegetables began in Brighton in 1864. From Golden came Coors beer in 1873, CoorsTek industrial ceramics in 1920, and Jolly Rancher candy in 1949. CF&I railroad rails, wire, nails, and pipe debuted in Pueblo in 1892. Holly Sugar was first milled from beets in Holly in 1905, and later moved its headquarters to Colorado Springs. The present-day Swift packed meat of Greeley evolved from Monfort of Colorado, Inc., established in 1930. Estes model rockets were launched in Penrose in 1958. 
Fort Collins has been the home of Woodward Governor Company's motor controllers (governors) since 1870, and Waterpik dental water jets and showerheads since 1962. Celestial Seasonings herbal teas have been made in Boulder since 1969. Rocky Mountain Chocolate Factory made its first candy in Durango in 1981. Colorado has a flat 4.63% income tax, regardless of income level. On November 3, 2020, voters authorized an initiative to lower that income tax rate to 4.55 percent. Unlike most states, which calculate taxes based on federal adjusted gross income, Colorado taxes are based on taxable income—income after federal exemptions and federal itemized (or standard) deductions. Colorado's state sales tax is 2.9% on retail sales. When state revenues exceed state constitutional limits, according to Colorado's Taxpayer Bill of Rights legislation, full-year Colorado residents can claim a sales tax refund on their individual state income tax return. Many counties and cities charge their own rates, in addition to the base state rate. There are also certain county and special district taxes that may apply. Real estate and personal business property are taxable in Colorado. The state's senior property tax exemption was temporarily suspended by the Colorado Legislature in 2003. The tax break was scheduled to return for the assessment year 2006, payable in 2007. , the state's unemployment rate was 4.2%. The West Virginia teachers' strike in 2018 inspired teachers in other states, including Colorado, to take similar action. Agriculture Corn is grown in the Eastern Plains of Colorado. Arid conditions and drought negatively impacted yields in 2020 and 2022. Natural resources Colorado has significant hydrocarbon resources. According to the Energy Information Administration, Colorado hosts seven of the largest natural gas fields in the United States, and two of the largest oil fields. Conventional and unconventional natural gas output from several Colorado basins typically accounts for more than five percent of annual U.S. natural gas production. Colorado's oil shale deposits hold an estimated of oil—nearly as much oil as the entire world's proven oil reserves. Substantial deposits of bituminous, subbituminous, and lignite coal are found in the state. Uranium mining in Colorado goes back to 1872, when pitchblende ore was taken from gold mines near Central City, Colorado. Not counting byproduct uranium from phosphate, Colorado is considered to have the third-largest uranium reserves of any U.S. state, behind Wyoming and New Mexico. When Colorado and Utah dominated radium mining from 1910 to 1922, uranium and vanadium were the byproducts (giving towns like present-day Superfund site Uravan their names). Uranium price increases from 2001 to 2007 prompted several companies to revive uranium mining in Colorado. During the 1940s, certain communities–including Naturita and Paradox–earned the moniker of "yellowcake towns" from their relationship with uranium mining. Price drops and financing problems in late 2008 forced these companies to cancel or scale back the uranium-mining project. As of 2016, there were no major uranium mining operations in the state, though plans existed to restart production. Electricity generation Colorado's high Rocky Mountain ridges and eastern plains offer wind power potential, and geologic activity in the mountain areas provides the potential for geothermal power development. Much of the state is sunny and could produce solar power. 
Major rivers flowing from the Rocky Mountains offer hydroelectric power resources. Culture Arts and film List of museums in Colorado List of theaters in Colorado Music of Colorado Several film productions have been shot on location in Colorado, especially prominent Westerns like True Grit, The Searchers, and Butch Cassidy and the Sundance Kid. Several historic military forts, railways with trains still operating, and mining ghost towns have been used and transformed for historical accuracy in well-known films. There are also several scenic highways and mountain passes that helped to feature the open road in films such as Vanishing Point, Bingo and Starman. Some Colorado landmarks have been featured in films, such as The Stanley Hotel in Dumb and Dumber and The Shining and the Sculptured House in Sleeper. In 2015, Furious 7 was to film driving sequences on Pikes Peak Highway in Colorado. The TV adult-animated series South Park takes place in central Colorado in the titular town. Additionally, The TV series Good Luck Charlie was set, but not filmed, in Denver, Colorado. The Colorado Office of Film and Television has noted that more than 400 films have been shot in Colorado. There are also several established film festivals in Colorado, including Aspen Shortsfest, Boulder International Film Festival, Castle Rock Film Festival, Denver Film Festival, Festivus Film Festival, Mile High Horror Film Festival, Moondance International Film Festival, Mountainfilm in Telluride, Rocky Mountain Women's Film Festival, and Telluride Film Festival. Many notable writers have lived or spent extended periods in Colorado. Beat Generation writers Jack Kerouac and Neal Cassady lived in and around Denver for several years each. Irish playwright Oscar Wilde visited Colorado on his tour of the United States in 1882, writing in his 1906 Impressions of America that Leadville was "the richest city in the world. It has also got the reputation of being the roughest, and every man carries a revolver." Cuisine Colorado is known for its Southwest and Rocky Mountain cuisine, with Mexican restaurants found throughout the state. Boulder was named America's Foodiest Town 2010 by Bon Appétit. Boulder, and Colorado in general, is home to several national food and beverage companies, top-tier restaurants and farmers' markets. Boulder also has more Master Sommeliers per capita than any other city, including San Francisco and New York. Denver is known for steak, but now has a diverse culinary scene with many restaurants. Polidori Sausage is a brand of pork products available in supermarkets, which originated in Colorado, in the early 20th century. The Food & Wine Classic is held annually each June in Aspen. Aspen also has a reputation as the culinary capital of the Rocky Mountain region. Wine and beer Colorado wines include award-winning varietals that have attracted favorable notice from outside the state. With wines made from traditional Vitis vinifera grapes along with wines made from cherries, peaches, plums, and honey, Colorado wines have won top national and international awards for their quality. Colorado's grape growing regions contain the highest elevation vineyards in the United States, with most viticulture in the state practiced between above sea level. The mountain climate ensures warm summer days and cool nights. Colorado is home to two designated American Viticultural Areas of the Grand Valley AVA and the West Elks AVA, where most of the vineyards in the state are located. 
However, an increasing number of wineries are located along the Front Range. In 2018, Wine Enthusiast Magazine named Colorado's Grand Valley AVA in Mesa County, Colorado, as one of the Top Ten wine travel destinations in the world. Colorado is home to many nationally praised microbreweries, including New Belgium Brewing Company, Odell Brewing Company, Great Divide Brewing Company, and Bristol Brewing Company. The area of northern Colorado near and between the cities of Denver, Boulder, and Fort Collins is known as the "Napa Valley of Beer" due to its high density of craft breweries. Marijuana and hemp Colorado is open to cannabis (marijuana) tourism. With the adoption of the 64th state amendment in 2012, Colorado became the first state in the union to legalize marijuana for medicinal (2000), industrial (referring to hemp, 2012), and recreational (2012) use. Colorado's marijuana industry sold $1.31 billion worth of marijuana in 2016 and $1.26 billion in the first three-quarters of 2017. The state generated tax, fee, and license revenue of $194 million in 2016 on legal marijuana sales. Colorado regulates hemp as any part of the plant with less than 0.3% THC. On April 4, 2014, Senate Bill 14–184 addressing oversight of Colorado's industrial hemp program was first introduced, ultimately being signed into law by Governor John Hickenlooper on May 31, 2014. Medicinal use On November 7, 2000, 54% of Colorado voters passed Amendment 20, which amends the Colorado State constitution to allow the medical use of marijuana. A patient's medical use of marijuana, within the following limits, is lawful: (I) No more than of a usable form of marijuana; and (II) No more than twelve marijuana plants, with six or fewer being mature, flowering plants that are producing a usable form of marijuana. Currently, Colorado has listed "eight medical conditions for which patients can use marijuana—cancer, glaucoma, HIV/AIDS, muscle spasms, seizures, severe pain, severe nausea and cachexia, or dramatic weight loss and muscle atrophy". While governor, John Hickenlooper allocated about half of the state's $13 million "Medical Marijuana Program Cash Fund" to medical research in the 2014 budget. By 2018, the Medical Marijuana Program Cash Fund was the "largest pool of pot money in the state" and was used to fund programs including research into pediatric applications for controlling autism symptoms. Recreational use On November 6, 2012, voters amended the state constitution to protect "personal use" of marijuana for adults, establishing a framework to regulate marijuana in a manner similar to alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014. Sports Colorado has five major professional sports leagues, all based in the Denver metropolitan area. Colorado is the least populous state with a franchise in each of the major professional sports leagues. The Colorado Springs Snow Sox professional baseball team is based in Colorado Springs. The team is a member of the Pecos League, an independent baseball league which is not affiliated with Major or Minor League Baseball. The Pikes Peak International Hill Climb is a major hill climbing motor race held on the Pikes Peak Highway. The Cherry Hills Country Club has hosted several professional golf tournaments, including the U.S. Open, U.S. Senior Open, U.S. Women's Open, PGA Championship and BMW Championship. 
Professional sports teams College athletics The following universities and colleges participate in the National Collegiate Athletic Association Division I. The most popular college sports program is the University of Colorado Buffaloes, who used to play in the Big 12 but now play in the Pac-12. They have won the 1957 and 1991 Orange Bowls, the 1995 Fiesta Bowl, and the 1996 Cotton Bowl Classic. Transportation Colorado's primary mode of transportation (in terms of passengers) is its highway system. Interstate 25 (I-25) is the primary north–south highway in the state, connecting Pueblo, Colorado Springs, Denver, and Fort Collins, and extending north to Wyoming and south to New Mexico. I-70 is the primary east–west corridor. It connects Grand Junction and the mountain communities with Denver and enters Utah and Kansas. The state is home to a network of US and Colorado highways that provide access to all principal areas of the state. Many smaller communities are connected to this network only via county roads. Denver International Airport (DIA) is the third-busiest domestic U.S. and international airport in the world by passenger traffic. DIA handles by far the largest volume of commercial air traffic in Colorado and is the busiest U.S. hub airport between Chicago and the Pacific coast, making Denver the most important airport for connecting passenger traffic in the western United States. Public transportation bus services are offered both intra-city and inter-city, including the Denver metro area's RTD services. The Regional Transportation District (RTD) operates the popular RTD Bus & Rail transit system in the Denver Metropolitan Area. The RTD rail system had 170 light-rail vehicles, serving of track. In addition to local public transit, intercity bus service is provided by Burlington Trailways, Bustang, Express Arrow, and Greyhound Lines. Amtrak operates two passenger rail lines in Colorado, the California Zephyr and the Southwest Chief. Colorado's contribution to world railroad history was forged principally by the Denver and Rio Grande Western Railroad, which began in 1870 and wrote the book on mountain railroading. In 1988, the "Rio Grande", under its owner Philip Anschutz, acquired the Southern Pacific Railroad, and the combined company took the better-known Southern Pacific name. On September 11, 1996, Anschutz sold the combined company to the Union Pacific Railroad, creating the largest railroad network in the United States. The Anschutz sale was partly in response to the earlier merger of Burlington Northern and Santa Fe, which formed the large Burlington Northern and Santa Fe Railway (BNSF), Union Pacific's principal competitor in western U.S. railroading. Both Union Pacific and BNSF have extensive freight operations in Colorado. Colorado's freight railroad network consists of 2,688 miles of Class I trackage. It is integral to the U.S. economy, being a critical artery for the movement of energy, agriculture, mining, and industrial commodities as well as general freight and manufactured products between the East and Midwest and the Pacific coast states. In August 2014, Colorado began to issue driver licenses to aliens not lawfully present in the United States who lived in Colorado. In September 2014, KCNC reported that 524 non-citizens were issued Colorado driver licenses that are normally issued to U.S. citizens living in Colorado. Education The first institution of higher education in the Colorado Territory was the Colorado Seminary, opened on November 16, 1864, by the Methodist Episcopal Church.
The seminary closed in 1867 but reopened in 1880 as the University of Denver. In 1870, Bishop George Maxwell Randall of the Episcopal Church's Missionary District of Colorado and Parts Adjacent opened the first of what would become the Colorado University Schools, which would include the Territorial School of Mines, opened in 1873 and sold to the Colorado Territory in 1874. These schools were initially run by the Episcopal Church. An 1861 territorial act called for the creation of a public university in Boulder, though it would not be until 1876 that the University of Colorado was founded. The 1876 act also renamed the Territorial School of Mines as the Colorado School of Mines. An 1870 territorial act created the Agricultural College of Colorado, which opened in 1879. The college was renamed the Colorado State College of Agriculture and Mechanic Arts in 1935, and became Colorado State University in 1957. The first Catholic college in Colorado was the Jesuit Sacred Heart College, which was founded in New Mexico in 1877, moved to Morrison in 1884, and to Denver in 1887. The college was renamed Regis College in 1921 and Regis University in 1991. On April 1, 1924, armed students patrolled the campus after a burning cross was found, the climax of tensions between Regis College and the locally powerful Ku Klux Klan. Following a 1950 assessment by the Service Academy Board, it was determined that there was a need to supplement the U.S. Military and Naval Academies with a third school that would provide commissioned officers for the newly independent Air Force. On April 1, 1954, President Dwight Eisenhower signed a law that provided for the creation of a U.S. Air Force Academy. Later that year, Colorado Springs was selected to host the new institution. From its establishment in 1955 until appropriate facilities in Colorado Springs were completed and opened in 1958, the Air Force Academy operated out of Lowry Air Force Base in Denver. With the opening of the Colorado Springs facility, the cadets moved to the new campus, though not in the full-kit march that some urban and campus legends suggest. The first class of Space Force officers from the Air Force Academy was commissioned on April 18, 2020. Military installations The major military installations in Colorado include: Buckley Space Force Base (1938–) Air Reserve Personnel Center (1953–) Fort Carson (U.S. Army 1942–) Piñon Canyon Maneuver Site (1983–) Peterson Space Force Base (1942–) Cheyenne Mountain Space Force Station (1961–) Pueblo Chemical Depot (U.S. Army 1942–) Schriever Space Force Base (1983–) United States Air Force Academy (1954–) Former military posts in Colorado include: Spanish Fort (Spanish Army 1819–1821) Fort Massachusetts (U.S. Army 1852–1858) Fort Garland (U.S. Army 1858–1883) Camp Collins (U.S. Army 1862–1870) Fort Logan (U.S. Army 1887–1946) Colorado National Guard Armory (1913–1933) Fitzsimons Army Hospital (U.S. Army 1918–1999) Denver Medical Depot (U.S. Army 1925–1949) Lowry Air Force Base (1938–1994) Pueblo Army Air Base (1941–1948) Rocky Mountain Arsenal (U.S. Army 1942–1992) Camp Hale (U.S. Army 1942–1945) La Junta Army Air Field (1942–1946) Leadville Army Air Field (1943–1944) Government State government As in the federal government and all other U.S. states, Colorado's state constitution provides for three branches of government: the legislative, the executive, and the judicial branches. The Governor of Colorado heads the state's executive branch. The current governor is Jared Polis, a Democrat.
Colorado's other statewide elected executive officers are the Lieutenant Governor of Colorado (elected on a ticket with the Governor), Secretary of State of Colorado, Colorado State Treasurer, and Attorney General of Colorado, all of whom serve four-year terms. The seven-member Colorado Supreme Court is the state's highest court. The Colorado Court of Appeals, with 22 judges, sits in divisions of three judges each. Colorado is divided into 22 judicial districts, each of which has a district court and a county court with limited jurisdiction. The state also has specialized water courts, which sit in seven distinct divisions around the state and which decide matters relating to water rights and the use and administration of water. The state legislative body is the Colorado General Assembly, which is made up of two houses – the House of Representatives and the Senate. The House has 65 members and the Senate has 35. The Democratic Party holds a 23 to 12 majority in the Senate and a 46 to 19 majority in the House. Most Coloradans are native to other states (nearly 60% according to the 2000 census), and this is illustrated by the fact that the state did not have a native-born governor from 1975 (when John David Vanderhoof left office) until 2007, when Bill Ritter took office; his election the previous year marked the first electoral victory for a native-born Coloradan in a gubernatorial race since 1958 (Vanderhoof had ascended from the Lieutenant Governorship when John Arthur Love was given a position in Richard Nixon's administration in 1973). Taxes are collected by the Colorado Department of Revenue. Politics Colorado was once considered a swing state, but has become a relatively safe blue state in both state and federal elections. In presidential elections, 2020 was the first time since 1984 that the state was won by double digits, and it has backed the winning candidate in 9 of the last 11 elections. Coloradans have elected 17 Democrats and 12 Republicans to the governorship in the last 100 years. In presidential politics, Colorado was considered a reliably Republican state during the post-World War II era, voting for the Democratic candidate only in 1948, 1964, and 1992. However, it became a competitive swing state in the 1990s. Since the mid-2000s, it has swung heavily to the Democrats, voting for Barack Obama in 2008 and 2012, Hillary Clinton in 2016, and Joe Biden in 2020. Colorado politics exhibits a contrast between conservative cities such as Colorado Springs and Grand Junction, and liberal cities such as Boulder and Denver. Democrats are strongest in metropolitan Denver, the college towns of Fort Collins and Boulder, southern Colorado (including Pueblo), and several western ski resort counties. The Republicans are strongest in the Eastern Plains, Colorado Springs, Greeley, and far western Colorado near Grand Junction.
Colorado is represented by two members of the United States Senate: Class 2, John Hickenlooper (Democratic), since 2021 Class 3, Michael Bennet (Democratic), since 2009 Colorado is represented by eight members of the United States House of Representatives: 1st district: Diana DeGette (Democratic), since 1997 2nd district: Joe Neguse (Democratic), since 2019 3rd district: Lauren Boebert (Republican), since 2021 4th district: Ken Buck (Republican), since 2015 5th district: Doug Lamborn (Republican), since 2007 6th district: Jason Crow (Democratic), since 2019 7th district: Brittany Pettersen (Democratic), since 2023 8th district: Yadira Caraveo (Democratic), since 2023 In a 2020 study, Colorado was ranked as the seventh easiest state for citizens to vote in. Significant initiatives and legislation enacted in Colorado In 1881 Colorado voters approved a referendum that selected Denver as the state capital. Colorado was the first state in the union to enact, by voter referendum, a law extending suffrage to women. That initiative was approved by the state's voters on November 7, 1893. On the November 8, 1932, ballot, Colorado approved the repeal of alcohol prohibition more than a year before the Twenty-first Amendment to the United States Constitution was ratified. Colorado has banned, via C.R.S. section 12-6-302, the sale of motor vehicles on Sunday since at least 1953. In 1972 Colorado voters rejected a referendum proposal to fund the 1976 Winter Olympics, which had been scheduled to be held in the state. Denver had been chosen by the International Olympic Committee as the host city on May 12, 1970. In 1992, by a margin of 53 to 47 percent, Colorado voters approved an amendment to the state constitution (Amendment 2) that would have prevented any city, town, or county in the state from taking any legislative, executive or judicial action to recognize homosexuals or bisexuals as a protected class. In 1996, in a 6–3 ruling in Romer v. Evans, the U.S. Supreme Court found that preventing protected status based upon homosexuality or bisexuality did not satisfy the Equal Protection Clause. In 2006, voters passed Amendment 43, which banned gay marriage in Colorado. That initiative was nullified by the U.S. Supreme Court's 2015 decision in Obergefell v. Hodges. In 2012, voters amended the state constitution protecting the "personal use" of marijuana for adults, establishing a framework to regulate cannabis like alcohol. The first recreational marijuana shops in Colorado, and by extension the United States, opened their doors on January 1, 2014. On May 29, 2019, Governor Jared Polis signed House Bill 1124 immediately prohibiting law enforcement officials in Colorado from holding undocumented immigrants solely based on a request from U.S. Immigration and Customs Enforcement. Native American reservations The two Native American reservations remaining in Colorado are the Southern Ute Indian Reservation (1873; Ute dialect: Kapuuta-wa Moghwachi Núuchi-u) and Ute Mountain Ute Indian Reservation (1940; Ute dialect: Wʉgama Núuchi). The two abolished Indian reservations in Colorado were the Cheyenne and Arapaho Indian Reservation (1851–1870) and Ute Indian Reservation (1855–1873). 
Protected areas Colorado is home to 4 national parks, 9 national monuments, 3 national historic sites, 2 national recreation areas, 4 national historic trails, 1 national scenic trail, 11 national forests, 2 national grasslands, 44 national wildernesses, 3 national conservation areas, 8 national wildlife refuges, 3 national heritage areas, 26 national historic landmarks, 16 national natural landmarks, more than 1,500 National Register of Historic Places, 1 wild and scenic river, 42 state parks, 307 state wildlife areas, 93 state natural areas, 28 national recreation trails, 6 regional trails, and numerous other scenic, historic, and recreational areas. See also Bibliography of Colorado Geography of Colorado History of Colorado Index of Colorado-related articles List of Colorado-related lists List of ships named the USS Colorado Outline of Colorado Further reading Explore Colorado, A Naturalist's Handbook, The Denver Museum of Natural History and Westcliff Publishers, 1995, for an excellent guide to the ecological regions of Colorado. The Archeology of Colorado, Revised Edition, E. Steve Cassells, Johnson Books, Boulder, Colorado, 1997, trade paperback, . Chokecherry Places, Essays from the High Plains, Merrill Gilfillan, Johnson Press, Boulder, Colorado, trade paperback, . The Tie That Binds, Kent Haruf, 1984, hardcover, , a fictional account of farming in Colorado. Railroads of Colorado: Your Guide to Colorado's Historic Trains and Railway Sites, Claude Wiatrowski, Voyageur Press, 2002, hardcover, 160 pages, External links State government State of Colorado Colorado Tourism Office History Colorado Federal government Energy & Environmental Data for Colorado USGS Colorado state facts, real-time, geographic, and other scientific resources of Colorado United States Census Bureau Colorado QuickFacts 2000 Census of Population and Housing for Colorado USDA ERS Colorado state facts Colorado State Guide, from the Library of Congress Other List of searchable databases produced by Colorado state agencies hosted by the American Library Association Government Documents Roundtable Colorado County Evolution Ask Colorado Colorado Historic Newspapers Collection (CHNC) Mountain and Desert Plants of Colorado and the Southwest, Climate of Colorado Holocene Volcano in Colorado (Smithsonian Institution Global Volcanism Program)
5401
https://en.wikipedia.org/wiki/Carboniferous
Carboniferous
The Carboniferous is a geologic period and system of the Paleozoic that spans 60 million years from the end of the Devonian Period million years ago (mya), to the beginning of the Permian Period, mya. The name Carboniferous means "coal-bearing", from the Latin ("coal") and ("bear, carry"), and refers to the many coal beds formed globally during that time. The first of the modern 'system' names, it was coined by geologists William Conybeare and William Phillips in 1822, based on a study of the British rock succession. The Carboniferous is often treated in North America as two geological periods, the earlier Mississippian and the later Pennsylvanian. Terrestrial animal life was well established by the Carboniferous Period. Tetrapods (four-limbed vertebrates), which had originated from lobe-finned fish during the preceding Devonian, became pentadactylous in, and diversified during, the Carboniferous, including early amphibian lineages such as temnospondyls, with the first appearance of amniotes, including synapsids (the group to which modern mammals belong) and reptiles, during the late Carboniferous. The period is sometimes called the Age of Amphibians, during which amphibians became dominant land vertebrates and diversified into many forms, including lizard-like, snake-like, and crocodile-like animals. Insects underwent a major radiation during the late Carboniferous. Vast swaths of forest covered the land; these forests eventually fell and became the coal beds characteristic of the Carboniferous stratigraphy evident today. The latter half of the period experienced glaciations, low sea level, and mountain building as the continents collided to form Pangaea. A minor marine and terrestrial extinction event, the Carboniferous rainforest collapse, occurred at the end of the period, caused by climate change. Etymology and history The term "Carboniferous" was first used as an adjective by Irish geologist Richard Kirwan in 1799, and was later used in a heading entitled "Coal-measures or Carboniferous Strata" by John Farey Sr. in 1811, becoming an informal term referring to coal-bearing sequences in Britain and elsewhere in Western Europe. Four units were originally ascribed to the Carboniferous, in ascending order: the Old Red Sandstone, Carboniferous Limestone, Millstone Grit, and the Coal Measures. These four units were placed into a formalised Carboniferous unit by William Conybeare and William Phillips in 1822, and later into the Carboniferous System by Phillips in 1835. The Old Red Sandstone was later considered Devonian in age. Subsequently, separate stratigraphic schemes were developed in Western Europe, North America, and Russia. The first attempt to build an international timescale for the Carboniferous was during the Eighth International Congress on Carboniferous Stratigraphy and Geology in Moscow in 1975, when all of the modern ICS stages were proposed. Stratigraphy The Carboniferous is divided into two subsystems, the lower Mississippian and upper Pennsylvanian, which are sometimes treated as separate geological periods in North American stratigraphy. Stages can be defined globally or regionally. For global stratigraphic correlation, the International Commission on Stratigraphy (ICS) ratifies global stages based on a Global Boundary Stratotype Section and Point (GSSP) from a single formation (a stratotype) identifying the lower boundary of the stage.
The ICS subdivisions from youngest to oldest are as follows: ICS units The Mississippian was first proposed by Alexander Winchell, the Pennsylvanian was proposed by J. J. Stevenson in 1888, and both were proposed as distinct and independent systems by H. S. Williams in 1881. The Tournaisian was named after the Belgian city of Tournai. It was introduced in the scientific literature by Belgian geologist André Hubert Dumont in 1832. The GSSP for the base of the Tournaisian is located at the La Serre section in Montagne Noire, southern France. It is defined by the first appearance datum of the conodont Siphonodella sulcata, and was ratified in 1990. However, the GSSP was later shown to be problematic, with Siphonodella sulcata found to occur 0.45 m below the proposed boundary. The Viséan Stage was introduced by André Dumont in 1832. Dumont named this stage after the city of Visé in Belgium's Liège Province. The GSSP for the Viséan is located in Bed 83 at the Pengchong section, Guangxi, southern China, and was ratified in 2012. The base of the Viséan is defined by the first appearance datum of the fusulinid (an extinct group of forams) Eoparastaffella simplex. The Serpukhovian Stage was proposed in 1890 by Russian stratigrapher Sergei Nikitin. It is named after the city of Serpukhov, near Moscow. The Serpukhovian Stage currently lacks a defined GSSP. The proposed definition for the base of the Serpukhovian is the first appearance of the conodont Lochriea ziegleri. The Bashkirian was named after Bashkiria, the then Russian name of the republic of Bashkortostan in the southern Ural Mountains of Russia. The stage was introduced by Russian stratigrapher Sofia Semikhatova in 1934. The GSSP for the base of the Bashkirian is located at Arrow Canyon in Nevada, US, was ratified in 1996, and is defined by the first appearance of the conodont Declinognathodus noduliferus. The Moscovian is named after Moscow, Russia, and was first introduced by Sergei Nikitin in 1890. The Moscovian currently lacks a defined GSSP. The Kasimovian is named after the Russian city of Kasimov, and was originally included in Nikitin's 1890 definition of the Moscovian. It was first recognised as a distinct unit by A. P. Ivanov in 1926, who named it the "Tiguliferina" Horizon after a kind of brachiopod. The Kasimovian currently lacks a defined GSSP. The Gzhelian is named after the Russian village of Gzhel, near Ramenskoye, not far from Moscow. The name and type locality were defined by Sergei Nikitin in 1890. The base of the Gzhelian currently lacks a defined GSSP. The GSSP for the base of the Permian is located in the Aidaralash River valley near Aqtöbe, Kazakhstan, and was ratified in 1996. The beginning of the stage is defined by the first appearance of the conodont Streptognathodus postfusus. Regional stratigraphy North America In North American stratigraphy, the Mississippian is divided, in ascending order, into the Kinderhookian, Osagean, Meramecian and Chesterian series, while the Pennsylvanian is divided into the Morrowan, Atokan, Desmoinesian, Missourian and Virgilian series. The Kinderhookian is named after the village of Kinderhook, Pike County, Illinois. It corresponds to the lower part of the Tournaisian. The Osagean is named after the Osage River in St. Clair County, Missouri. It corresponds to the upper part of the Tournaisian and the lower part of the Viséan.
The Meramecian is named after the Meramec Highlands Quarry, located near the Meramec River, southwest of St. Louis, Missouri. It corresponds to the mid-Viséan. The Chesterian is named after the Chester Group, a sequence of rocks named after the town of Chester, Illinois. It corresponds to the upper Viséan and all of the Serpukhovian. The Morrowan is named after the Morrow Formation of northwestern Arkansas; it corresponds to the lower Bashkirian. The Atokan was originally a formation named after the town of Atoka in southwestern Oklahoma. It corresponds to the upper Bashkirian and lower Moscovian. The Desmoinesian is named after the Des Moines Formation found near the Des Moines River in central Iowa. It corresponds to the middle and upper Moscovian and lower Kasimovian. The Missourian was named at the same time as the Desmoinesian. It corresponds to the middle and upper Kasimovian. The Virgilian is named after the town of Virgil, Kansas; it corresponds to the Gzhelian. Europe The European Carboniferous is divided into the lower Dinantian and upper Silesian, the former being named for the Belgian city of Dinant, and the latter for the Silesia region of Central Europe. The boundary between the two subdivisions is older than the Mississippian-Pennsylvanian boundary, lying within the lower Serpukhovian. The boundary has traditionally been marked by the first appearance of the ammonoid Cravenoceras leion. In Europe, the Dinantian is primarily marine, the so-called "Carboniferous Limestone", while the Silesian is known primarily for its coal measures. The Dinantian is divided into two stages, the Tournaisian and Viséan. The Tournaisian is the same length as the ICS stage, but the Viséan is longer, extending into the lower Serpukhovian. The Silesian is divided into three stages, in ascending order: the Namurian, Westphalian and Stephanian. The Autunian, which corresponds to the middle and upper Gzhelian, is considered a part of the overlying Rotliegend. The Namurian is named after the city of Namur in Belgium. It corresponds to the middle and upper Serpukhovian and the lower Bashkirian. The Westphalian is named after the region of Westphalia in Germany; it corresponds to the upper Bashkirian and all but the uppermost Moscovian. The Stephanian is named after the city of Saint-Étienne in eastern France. It corresponds to the uppermost Moscovian, the Kasimovian, and the lower Gzhelian. Palaeogeography A global drop in sea level at the end of the Devonian reversed early in the Carboniferous; this created the widespread inland seas and the carbonate deposition of the Mississippian. There was also a drop in south polar temperatures; southern Gondwanaland was glaciated for much of the period, though it is uncertain whether the ice sheets were a holdover from the Devonian or not. These conditions apparently had little effect in the deep tropics, where lush swamps, later to become coal, flourished to within 30 degrees of the northernmost glaciers. In the mid-Carboniferous, a drop in sea level precipitated a major marine extinction, one that hit crinoids and ammonoids especially hard. This sea level drop and the associated unconformity in North America separate the Mississippian Subperiod from the Pennsylvanian Subperiod. This happened about 323 million years ago, at the onset of the Permo-Carboniferous Glaciation. The Carboniferous was a time of active mountain-building as the supercontinent Pangaea came together.
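Because the regional series do not line up one-to-one with the global stages, it can help to see the correspondences from the two passages above collected in one place. The Python sketch below simply transcribes the prose; the dictionary names are my own, and the qualifiers in parentheses are approximate, since regional boundaries do not coincide exactly with ICS stage boundaries.

```python
# Simplified correlation of regional Carboniferous units with ICS global stages,
# transcribed from the text above; boundaries are approximate, not exact matches.
NORTH_AMERICAN_SERIES = {
    "Kinderhookian": ["Tournaisian (lower)"],
    "Osagean":       ["Tournaisian (upper)", "Viséan (lower)"],
    "Meramecian":    ["Viséan (middle)"],
    "Chesterian":    ["Viséan (upper)", "Serpukhovian"],
    "Morrowan":      ["Bashkirian (lower)"],
    "Atokan":        ["Bashkirian (upper)", "Moscovian (lower)"],
    "Desmoinesian":  ["Moscovian (middle and upper)", "Kasimovian (lower)"],
    "Missourian":    ["Kasimovian (middle and upper)"],
    "Virgilian":     ["Gzhelian"],
}

# The middle and upper Gzhelian fall in the Autunian, treated as part of the
# overlying Rotliegend rather than the Silesian.
WESTERN_EUROPEAN_UNITS = {
    "Tournaisian (regional)": ["Tournaisian"],
    "Viséan (regional)":      ["Viséan", "Serpukhovian (lower)"],
    "Namurian":               ["Serpukhovian (middle and upper)", "Bashkirian (lower)"],
    "Westphalian":            ["Bashkirian (upper)", "Moscovian (all but uppermost)"],
    "Stephanian":             ["Moscovian (uppermost)", "Kasimovian", "Gzhelian (lower)"],
}

# Example lookup: which global stages overlap the Chesterian?
print(NORTH_AMERICAN_SERIES["Chesterian"])
```

A lookup such as the one in the final line returns the global stages overlapped by a regional unit, which is all the prose correlation amounts to.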
The southern continents remained tied together in the supercontinent Gondwana, which collided with North America–Europe (Laurussia) along the present line of eastern North America. This continental collision resulted in the Hercynian orogeny in Europe and the Alleghenian orogeny in North America; it also extended the newly uplifted Appalachians southwestward as the Ouachita Mountains. In the same time frame, much of the present eastern Eurasian plate welded itself to Europe along the line of the Ural Mountains. Most of the Mesozoic supercontinent of Pangaea was now assembled, although the North China continent (which collided in the latest Carboniferous) and the South China continent were still separated from Laurasia. The Late Carboniferous Pangaea was shaped like an "O". There were two major oceans in the Carboniferous: Panthalassa and Paleo-Tethys, which lay inside the "O" of the Carboniferous Pangaea. Other minor oceans were shrinking and eventually closed: the Rheic Ocean (closed by the assembly of South and North America), the small, shallow Ural Ocean (closed by the collision of the Baltica and Siberia continents, creating the Ural Mountains), and the Proto-Tethys Ocean (closed by the collision of North China with Siberia/Kazakhstania). In the Late Carboniferous, a shallow epicontinental sea covered a significant part of what is today northwestern Europe. Climate Average global temperatures in the Early Carboniferous Period were high: approximately 20 °C (68 °F). However, cooling during the Middle Carboniferous reduced average global temperatures to about 12 °C (54 °F). Atmospheric carbon dioxide levels fell during the Carboniferous Period from roughly 8 times the current level at the beginning to a level similar to today's at the end. The Carboniferous is considered part of the Late Palaeozoic Ice Age, which began in the latest Devonian with the formation of small glaciers in Gondwana. During the Tournaisian the climate warmed before cooling again; there was another warm interval during the Viséan, but cooling resumed during the early Serpukhovian. At the beginning of the Pennsylvanian, around 323 million years ago, glaciers began to form around the South Pole, which grew to cover a vast area of Gondwana. This area extended from the southern reaches of the Amazon basin and covered large areas of southern Africa, as well as most of Australia and Antarctica. Cyclothems, which began around 313 million years ago and continued into the following Permian, indicate that the size of the glaciers was controlled by Milankovitch cycles, akin to recent ice ages, with alternating glacial and interglacial periods. Deep ocean temperatures during this time were cold due to the influx of cold bottom waters generated by seasonal melting of the ice cap. The cooling and drying of the climate led to the Carboniferous Rainforest Collapse (CRC) during the late Carboniferous. Tropical rainforests fragmented and then were eventually devastated by climate change. Rocks and coal Carboniferous rocks in Europe and eastern North America largely consist of a repeated sequence of limestone, sandstone, shale and coal beds. In North America, the early Carboniferous is largely marine limestone, which accounts for the division of the Carboniferous into two periods in North American schemes. The Carboniferous coal beds provided much of the fuel for power generation during the Industrial Revolution and are still of great economic importance. The large coal deposits of the Carboniferous may owe their existence primarily to two factors.
The first of these is the appearance of wood tissue and bark-bearing trees. The evolution of the wood fiber lignin and the bark-sealing, waxy substance suberin variously opposed decay organisms so effectively that dead materials accumulated long enough to fossilise on a large scale. The second factor was the lower sea levels that occurred during the Carboniferous as compared to the preceding Devonian Period. This fostered the development of extensive lowland swamps and forests in North America and Europe. Based on a genetic analysis of basidiomycete fungi, it has been proposed that large quantities of wood were buried during this period because animals and decomposing bacteria and fungi had not yet evolved enzymes that could effectively digest the resistant phenolic lignin polymers and waxy suberin polymers. Under this hypothesis, fungi that could break those substances down effectively became dominant only towards the end of the period, making subsequent coal formation much rarer. The delayed fungal evolution hypothesis has been challenged by other researchers, who conclude that the high rate of coal formation was instead driven by tectonic and climatic conditions during the formation of Pangaea: water-filled basins alongside developing mountain ranges created widespread humid, tropical conditions and allowed the burial of massive quantities of organic matter. They note that large amounts of coal also formed during the Mesozoic and Cenozoic, well after lignin-digesting fungi had become well established, and that fungal degradation of lignin had likely already evolved by the end of the Devonian, even if the specific enzymes used by basidiomycetes had not. Although it is often asserted that Carboniferous atmospheric oxygen concentrations were significantly higher than today, at around 30% of the total atmosphere, estimates of prehistoric atmospheric oxygen are highly uncertain, and other estimates suggest that oxygen levels were actually lower than in today's atmosphere. In eastern North America, marine beds are more common in the older part of the period than the later part and are almost entirely absent by the late Carboniferous. More diverse geology existed elsewhere. Marine life was especially rich in crinoids and other echinoderms. Brachiopods were abundant. Trilobites became quite uncommon. On land, large and diverse plant populations existed. Land vertebrates included large amphibians. Life Plants Early Carboniferous land plants, some of which were preserved in coal balls, were very similar to those of the preceding Late Devonian, but new groups also appeared at this time. The main Early Carboniferous plants were the Equisetales (horse-tails), Sphenophyllales (scrambling plants), Lycopodiales (club mosses), Lepidodendrales (scale trees), Filicales (ferns), Medullosales (informally included in the "seed ferns", an assemblage of a number of early gymnosperm groups) and the Cordaitales. These continued to dominate throughout the period, but during the late Carboniferous several other groups, the Cycadophyta (cycads), the Callistophytales (another group of "seed ferns"), and the Voltziales, appeared. The Carboniferous lycophytes of the order Lepidodendrales, which are cousins (but not ancestors) of the tiny club-moss of today, were huge trees with trunks 30 meters high and up to 1.5 meters in diameter. These included Lepidodendron (with its cone called Lepidostrobus), Anabathra, Lepidophloios and Sigillaria.
The roots of several of these forms are known as Stigmaria. Unlike present-day trees, their secondary growth took place in the cortex rather than the xylem; the cortex also provided stability. The Cladoxylopsids were large trees, ancestors of ferns, that first arose in the Carboniferous. The fronds of some Carboniferous ferns are almost identical with those of living species. Probably many species were epiphytic. Fossil ferns and "seed ferns" include Pecopteris, Cyclopteris, Neuropteris, Alethopteris, and Sphenopteris; Megaphyton and Caulopteris were tree ferns. The Equisetales included the common giant form Calamites, with a trunk diameter of 30 to 60 centimeters and a height of up to 20 meters. Sphenophyllum was a slender climbing plant with whorls of leaves, which was probably related both to the calamites and the lycopods. Cordaites, a tall plant (6 to over 30 meters) with strap-like leaves, was related to the cycads and conifers; its catkin-like reproductive organs, which bore ovules/seeds, are called Cardiocarpus. These plants were thought to live in swamps. True coniferous trees (Walchia, of the order Voltziales) appeared later in the Carboniferous and preferred higher, drier ground. Marine invertebrates In the oceans, the main marine invertebrate groups were the Foraminifera, corals, Bryozoa, Ostracoda, brachiopods, ammonoids, hederelloids, microconchids and echinoderms (especially crinoids). The diversity of brachiopods and fusulinid foraminiferans surged beginning in the Viséan, continuing through the end of the Carboniferous, although cephalopod and nektonic conodont diversity declined. This evolutionary radiation is known as the Carboniferous-Earliest Permian Biodiversification Event. For the first time, foraminifera took a prominent part in the marine faunas. The large spindle-shaped genus Fusulina and its relatives were abundant in what is now Russia, China, Japan and North America; other important genera include Valvulina, Endothyra, Archaediscus, and Saccammina (the latter common in Britain and Belgium). Some Carboniferous genera are still extant. The first true priapulids appeared during this period. The microscopic shells of radiolarians are found in cherts of this age in the Culm of Devon and Cornwall, and in Russia, Germany and elsewhere. Sponges are known from spicules and anchor ropes, and include various forms such as the Calcispongea Cotyliscus and Girtycoelia, the demosponge Chaetetes, and the genus of unusual colonial glass sponges Titusvillia. Both reef-building and solitary corals diversified and flourished; these included rugose (for example, Caninia, Corwenia, Neozaphrentis), heterocoral, and tabulate (for example, Chladochonus, Michelinia) forms. Conularids were well represented by Conularia. Bryozoa are abundant in some regions, including the fenestellids Fenestella, Polypora, and Archimedes, the last so named because it has the shape of an Archimedean screw. Brachiopods are also abundant; they include productids, some of which reached a very large size for brachiopods and had very thick shells (for example, the roughly 30 cm-wide Gigantoproductus), while others like Chonetes were more conservative in form. Athyridids, spiriferids, rhynchonellids, and terebratulids are also very common. Inarticulate forms include Discina and Crania. Some species and genera had a very wide distribution with only minor variations. Annelids such as Serpulites are common fossils in some horizons. Among the mollusca, the bivalves continue to increase in numbers and importance.
Typical genera include Aviculopecten, Posidonomya, Nucula, Carbonicola, Edmondia, and Modiola. Gastropods are also numerous, including the genera Murchisonia, Euomphalus and Naticopsis. Nautiloid cephalopods are represented by tightly coiled nautilids, with straight-shelled and curved-shelled forms becoming increasingly rare. Goniatite ammonoids such as Aenigmatoceras are common. Trilobites are rarer than in previous periods, on a steady trend towards extinction, represented only by the proetid group. Ostracoda, a class of crustaceans, were abundant as representatives of the meiobenthos; genera included Amphissites, Bairdia, Beyrichiopsis, Cavellina, Coryellina, Cribroconcha, Hollinella, Kirkbya, Knoxiella, and Libumella. Crinoids were highly numerous during the Carboniferous, though they suffered a gradual decline in diversity during the middle Mississippian. Dense submarine thickets of long-stemmed crinoids appear to have flourished in shallow seas, and their remains were consolidated into thick beds of rock. Prominent genera include Cyathocrinus, Woodocrinus, and Actinocrinus. Echinoids such as Archaeocidaris and Palaeechinus were also present. The blastoids, which included the Pentreinitidae and Codasteridae and superficially resembled crinoids in the possession of long stalks attached to the seabed, attained their maximum development at this time. Freshwater and lagoonal invertebrates Freshwater Carboniferous invertebrates include various bivalve molluscs that lived in brackish or fresh water, such as Anthraconaia, Naiadites, and Carbonicola, and diverse crustaceans such as Candona, Carbonita, Darwinula, Estheria, Acanthocaris, Dithyrocaris, and Anthrapalaemon. The eurypterids were also diverse, and are represented by such genera as Adelophthalmus, Megarachne (originally misinterpreted as a giant spider, hence its name) and the specialised very large Hibbertopterus. Many of these were amphibious. Frequently a temporary return of marine conditions resulted in marine or brackish water genera such as Lingula, Orbiculoidea, and Productus being found in the thin beds known as marine bands. Terrestrial invertebrates Fossil remains of air-breathing insects, myriapods and arachnids are known from the Carboniferous; their diversity shows that these arthropods were both well developed and numerous. Some arthropods grew to large sizes, with the millipede-like Arthropleura, up to 2.6 meters long, being the largest-known land invertebrate of all time. Among the insect groups are the huge predatory Protodonata (griffinflies), which included Meganeura, a giant dragonfly-like insect with a wingspan of about 75 cm, the largest flying insect ever to roam the planet. Further groups are the Syntonopterodea (relatives of present-day mayflies), the abundant and often large sap-sucking Palaeodictyopteroidea, the diverse herbivorous Protorthoptera, and numerous basal Dictyoptera (ancestors of cockroaches). Many insects have been obtained from the coalfields of Saarbrücken and Commentry, and from the hollow trunks of fossil trees in Nova Scotia. Some British coalfields have yielded good specimens: Archaeoptilus, from the Derbyshire coalfield, had a large wing, part of which is preserved, and some specimens (Brodia) still exhibit traces of brilliant wing colors. In the Nova Scotian tree trunks land snails (Archaeozonites, Dendropupa) have been found. Fish Many fish inhabited the Carboniferous seas, predominantly elasmobranchs (sharks and their relatives).
These included some, like Psammodus, with crushing pavement-like teeth adapted for grinding the shells of brachiopods, crustaceans, and other marine organisms. Other groups of elasmobranchs, like the ctenacanthiformes, grew to large sizes, with some genera like Saivodus reaching around 6-9 meters (20-30 feet). Other fish had piercing teeth, such as the Symmoriida; some, the petalodonts, had peculiar cycloid cutting teeth. Most of the other cartilaginous fish were marine, but others, like the Xenacanthida and several genera such as Bandringa, invaded the fresh waters of the coal swamps. Among the bony fish, the Palaeonisciformes found in coastal waters also appear to have migrated to rivers. Sarcopterygian fish were also prominent, and one group, the Rhizodonts, reached very large sizes. Most species of Carboniferous marine fish have been described largely from teeth, fin spines and dermal ossicles, with smaller freshwater fish preserved whole. Freshwater fish were abundant, and include the genera Ctenodus, Uronemus, Acanthodes, Cheirodus, and Gyracanthus. Chondrichthyes (especially holocephalans like the stethacanthids) underwent a major evolutionary radiation during the Carboniferous. It is believed that this evolutionary radiation occurred because the decline of the placoderms at the end of the Devonian Period caused many environmental niches to become unoccupied and allowed new organisms to evolve and fill these niches. As a result of this radiation, Carboniferous holocephalans assumed a wide variety of bizarre shapes, including Stethacanthus, which possessed a flat brush-like dorsal fin with a patch of denticles on its top. The unusual fin of Stethacanthus may have been used in mating rituals. Other groups, like the eugeneodonts, filled the niches left by large predatory placoderms. These fish were unique in that they possessed only one row of teeth in their upper or lower jaws, in the form of elaborate tooth whorls. The first members of the Helicoprionidae, a family of eugeneodonts characterized by a single circular tooth whorl in the lower jaw, appeared during the early Carboniferous. Perhaps the most bizarre radiation of holocephalans at this time was that of the iniopterygiformes, an order of holocephalans that greatly resembled modern-day flying fish and may likewise have "flown" through the water with their massive, elongated pectoral fins. They were further characterized by their large eye sockets, club-like structures on their tails, and spines on the tips of their fins. Tetrapods Carboniferous amphibians were diverse and common by the middle of the period, more so than they are today; some were as long as 6 meters, and those fully terrestrial as adults had scaly skin. They included a number of basal tetrapod groups classified in early books under the Labyrinthodontia. These had long bodies, a head covered with bony plates and generally weak or undeveloped limbs. The largest were over 2 meters long. They were accompanied by an assemblage of smaller amphibians included under the Lepospondyli, often only about 15 cm long. Some Carboniferous amphibians were aquatic and lived in rivers (Loxomma, Eogyrinus, Proterogyrinus); others may have been semi-aquatic (Ophiderpeton, Amphibamus, Hyloplesion) or terrestrial (Dendrerpeton, Tuditanus, Anthracosaurus). The Carboniferous Rainforest Collapse slowed the evolution of amphibians, which could not survive as well in the cooler, drier conditions. Amniotes, however, prospered due to specific key adaptations.
One of the greatest evolutionary innovations of the Carboniferous was the amniote egg, which allowed eggs to be laid in a dry environment; together with keratinized scales and claws, it allowed the further exploitation of the land by certain tetrapods. These included the earliest sauropsid reptiles (Hylonomus) and the earliest known synapsid (Archaeothyris). Synapsids quickly became huge and diversified in the Permian, only for their dominance to end during the Mesozoic Era. Sauropsids (reptiles, and also, later, birds) also diversified but remained small until the Mesozoic, during which they dominated the land, as well as the water and sky, only for their dominance to end during the Cenozoic Era. Reptiles underwent a major evolutionary radiation in response to the drier climate that preceded the rainforest collapse. By the end of the Carboniferous Period, amniotes had already diversified into a number of groups, including several families of synapsid pelycosaurs, protorothyridids, captorhinids, saurians and araeoscelids. Fungi As plants and animals were growing in size and abundance at this time (for example, Lepidodendron), land fungi diversified further. Marine fungi still occupied the oceans. All modern classes of fungi were present in the Late Carboniferous (Pennsylvanian Epoch). During the Carboniferous, animals and bacteria had great difficulty processing the lignin and cellulose that made up the gigantic trees of the period. Microbes that could process them had not yet evolved. After the trees died, they simply piled up on the ground, occasionally becoming part of long-running wildfires after a lightning strike, while others very slowly degraded into coal. White rot fungi were the first organisms able to process these materials and break them down in any reasonable quantity and timescale. Thus, some have proposed that fungi helped end the Carboniferous Period by stopping the accumulation of undegraded plant matter, although this idea remains highly controversial. Extinction events Romer's gap The first 15 million years of the Carboniferous had very limited terrestrial fossils. This gap in the fossil record is called Romer's gap after the American palaeontologist Alfred Romer. While it has long been debated whether the gap reflects poor fossil preservation or an actual event, recent work indicates that the gap period saw a drop in atmospheric oxygen levels, suggesting some sort of ecological collapse. The gap saw the demise of the Devonian fish-like ichthyostegalian labyrinthodonts and the rise of the more advanced temnospondyl and reptiliomorphan amphibians that so typify the Carboniferous terrestrial vertebrate fauna. Carboniferous rainforest collapse Before the end of the Carboniferous Period, an extinction event occurred. On land this event is referred to as the Carboniferous Rainforest Collapse (CRC). Vast tropical rainforests collapsed suddenly as the climate changed from hot and humid to cool and arid. This was likely caused by intense glaciation and a drop in sea levels. The new climatic conditions were not favorable to the growth of rainforests or the animals within them. Rainforests shrank into isolated islands, surrounded by seasonally dry habitats. Towering lycopsid forests with a heterogeneous mixture of vegetation were replaced by a much less diverse tree-fern dominated flora.
Amphibians, the dominant vertebrates at the time, fared poorly through this event with large losses in biodiversity; reptiles continued to diversify due to key adaptations that let them survive in the drier habitat, specifically the hard-shelled egg and scales, both of which retain water better than their amphibian counterparts. See also List of Carboniferous tetrapods Carboniferous rainforest collapse List of fossil sites Important Carboniferous Lagerstätten Granton Shrimp Bed; 359 mya; Edinburgh, Scotland. East Kirkton Quarry; c. 350 mya; Bathgate, Scotland. Bear Gulch Limestone; 324 mya; Montana, US. Mazon Creek; 309 mya; Illinois, US. Hamilton Quarry; 300 mya; Kansas, US.
5403
https://en.wikipedia.org/wiki/Comoros
Comoros
The Comoros, officially the Union of the Comoros, is an archipelagic country made up of three islands in Southeastern Africa, located at the northern end of the Mozambique Channel in the Indian Ocean. Its capital and largest city is Moroni. The religion of the majority of the population, and the official state religion, is Sunni Islam. Comoros proclaimed its independence from France on 6 July 1975. A member of the Arab League, it is the only country in the Arab world which is entirely in the Southern Hemisphere. It is a member state of the African Union, the Organisation internationale de la Francophonie, the Organisation of Islamic Cooperation, and the Indian Ocean Commission. The country has three official languages: Shikomori, French and Arabic. The sovereign state consists of three major islands and numerous smaller islands, all of the volcanic Comoro Islands with the exception of Mayotte. Mayotte voted against independence from France in a referendum in 1974, and continues to be administered by France as an overseas department. France has vetoed United Nations Security Council resolutions that would affirm Comorian sovereignty over the island. Mayotte became an overseas department and a region of France in 2011 following a referendum which passed overwhelmingly. The Comoros is the third-smallest African country by area. In 2019, its population was estimated to be 850,886. The Comoros were likely first settled by Austronesian/Malagasy peoples, Bantu speakers from East Africa, and seafaring Arab traders. It became part of the French colonial empire during the 19th century, before its independence in 1975. It has experienced more than 20 coups or attempted coups, with various heads of state assassinated. Along with this constant political instability, it has one of the worst levels of income inequality of any nation, and ranks in the lowest quartile on the Human Development Index. About half the population lives below the international poverty line of US$1.25 a day. Etymology The name "Comoros" derives from the Arabic word qamar ("moon"). History Settlement According to mythology, a jinni (spirit) dropped a jewel, which formed a great circular inferno. This became the Karthala volcano, which created the island of Ngazidja (Grande Comore). King Solomon is also said to have visited the island accompanied by his queen Bilqis. The first attested human inhabitants of the Comoro Islands are now thought to have been Austronesian settlers travelling by boat from islands in Southeast Asia. These people arrived no later than the eighth century AD, the date of the earliest known archaeological site, found on Mayotte, although settlement beginning as early as the first century has been postulated. Subsequent settlers came from the east coast of Africa, the Arabian Peninsula and the Persian Gulf, the Malay Archipelago, and Madagascar. Bantu-speaking settlers were present on the islands from the beginnings of settlement, probably brought to the islands as slaves. Development of the Comoros is divided into phases. The earliest reliably recorded phase is the Dembeni phase (eighth to tenth centuries), during which there were several small settlements on each island. From the eleventh to the fifteenth centuries, trade with the island of Madagascar and merchants from the Swahili coast and the Middle East flourished, more villages were founded and existing villages grew.
Many Comorians can trace their genealogies to ancestors from the Arabian peninsula, particularly Hadhramaut, who arrived during this period. Medieval Comoros According to legend, in 632, upon hearing of Islam, islanders are said to have dispatched an emissary, Mtswa-Mwindza, to Mecca—but by the time he arrived there, the Islamic prophet Muhammad had died. Nonetheless, after a stay in Mecca, he returned to Ngazidja, where he built a mosque in his home town of Ntsaweni, and led the gradual conversion of the islanders to Islam. In 933, the Comoros was referred to by Omani sailors as the Perfume Islands. Among the earliest accounts of East Africa, the works of Al-Masudi describe early Islamic trade routes, and how the coast and islands were frequently visited by Muslims, including Persian and Arab merchants and sailors, in search of coral, ambergris, ivory, tortoiseshell, gold and slaves. They also brought Islam to the people of the Zanj, including the Comoros. As the importance of the Comoros grew along the East African coast, both small and large mosques were constructed. The Comoros are part of the Swahili cultural and economic complex, and the islands became a major hub of trade and an important location in a network of trading towns that included Kilwa, in present-day Tanzania, Sofala (an outlet for Zimbabwean gold), in Mozambique, and Mombasa in Kenya. The Portuguese arrived in the Indian Ocean at the end of the 15th century, and the first Portuguese visit to the islands seems to have been that of Vasco da Gama's second fleet in 1503. For much of the 16th century the islands provided provisions to the Portuguese fort at Mozambique, and although there was no formal attempt by the Portuguese crown to take possession, a number of Portuguese traders settled and married local women. By the end of the 16th century local rulers on the African mainland were beginning to push back and, with the support of the Omani Sultan Saif bin Sultan, they began to defeat the Dutch and the Portuguese. One of his successors, Said bin Sultan, increased Omani Arab influence in the region, moving his administration to nearby Zanzibar, which came under Omani rule. Nevertheless, the Comoros remained independent, and although the three smaller islands were usually politically unified, the largest island, Ngazidja, was divided into a number of autonomous kingdoms (ntsi). The islands were well placed to meet the needs of Europeans, initially supplying the Portuguese in Mozambique, then ships, particularly the English, on the route to India, and, later, slaves to the plantation islands in the Mascarenes. European contact and French colonisation In the last decade of the 18th century, Malagasy warriors, mostly Betsimisaraka and Sakalava, started raiding the Comoros for slaves, and the islands were devastated as crops were destroyed and the people were slaughtered, taken into captivity or fled to the African mainland: it is said that by the time the raids finally ended in the second decade of the 19th century only one man remained on Mwali. The islands were repopulated by slaves from the mainland, who were traded to the French in Mayotte and the Mascarenes. On the Comoros, it was estimated in 1865 that as much as 40% of the population consisted of slaves. France first established colonial rule in the Comoros by taking possession of Mayotte in 1841, when the Sakalava usurper sultan Andriantsoly (also known as Tsy Levalo) signed the Treaty of April 1841, which ceded the island to the French authorities.
After its annexation, France attempted to convert Mayotte into a sugar plantation colony. Meanwhile, Ndzwani (or Johanna as it was known to the British) continued to serve as a way station for English merchants sailing to India and the Far East, as well as American whalers, although the British gradually abandoned it following their possession of Mauritius in 1814, and by the time the Suez Canal opened in 1869 there was no longer any significant supply trade at Ndzwani. Local commodities exported by the Comoros were, in addition to slaves, coconuts, timber, cattle and tortoiseshell. British and American settlers, as well as the island's sultan, established a plantation-based economy that used about one-third of the land for export crops. In addition to sugar on Mayotte, ylang-ylang and other perfume plants, vanilla, cloves, coffee, cocoa beans, and sisal were introduced. In 1886, Mwali was placed under French protection by its Sultan Mardjani Abdou Cheikh. That same year, Sultan Said Ali of Bambao, one of the sultanates on Ngazidja, placed the island under French protection in exchange for French support of his claim to the entire island, which he retained until his abdication in 1910. In 1908 the four islands were unified under a single administration (Colonie de Mayotte et dépendances) and placed under the authority of the French colonial Governor-General of Madagascar. In 1909, Sultan Said Muhamed of Ndzwani abdicated in favour of French rule and in 1912 the protectorates were abolished and the islands administered as a single colony. Two years later the colony was abolished and the islands became a province of the colony of Madagascar. Agreement was reached with France in 1973 for the Comoros to become independent in 1978, despite the deputies of Mayotte voting for increased integration with France. A referendum was held on all four of the islands. Three voted for independence by large margins, while Mayotte voted against. On 6 July 1975, however, the Comorian parliament passed a unilateral resolution declaring independence. Ahmed Abdallah proclaimed the independence of the Comorian State (État comorien; دولة القمر) and became its first president. France did not recognise the new state until 31 December, and retained control of Mayotte. Independence (1975) The next 30 years were a period of political turmoil. On 3 August 1975, less than one month after independence, president Ahmed Abdallah was removed from office in an armed coup and replaced with United National Front of the Comoros (FNUK) member Said Mohamed Jaffar. Months later, in January 1976, Jaffar was ousted in favour of his Minister of Defence Ali Soilihi. The population of Mayotte voted against independence from France in three referendums during this period. The first, held on all the islands on 22 December 1974, won 63.8% support for maintaining ties with France on Mayotte; the second, held in February 1976, confirmed that vote with an overwhelming 99.4%, while the third, in April 1976, confirmed that the people of Mayotte wished to remain a French territory. The three remaining islands, ruled by President Soilihi, instituted a number of socialist and isolationist policies that soon strained relations with France. On 13 May 1978, Bob Denard, once again commissioned by the French intelligence service (SDECE), returned to overthrow President Soilihi and reinstate Abdallah with the support of the French, Rhodesian and South African governments. Ali Soilihi was captured and executed a few weeks later. 
In contrast to Soilihi's, Abdallah's presidency was marked by authoritarian rule and increased adherence to traditional Islam, and the country was renamed the Federal Islamic Republic of the Comoros (République Fédérale Islamique des Comores; جمهورية القمر الإتحادية الإسلامية). Bob Denard served as Abdallah's first advisor; nicknamed the "Viceroy of the Comoros," he was sometimes considered the real strongman of the regime. Denard maintained close ties with South Africa, which financed his "presidential guard," and he allowed Paris to circumvent the international embargo on the apartheid regime via Moroni. He also set up a permanent mercenary corps in the archipelago, called upon to intervene in African conflicts at the request of Paris or Pretoria. Abdallah continued as president until 1989 when, fearing a probable coup, he signed a decree ordering the Presidential Guard, led by Bob Denard, to disarm the armed forces. Shortly after the signing of the decree, Abdallah was allegedly shot dead in his office by a disgruntled military officer, though later sources claim an antitank missile was launched into his bedroom and killed him. Although Denard was also injured, it is suspected that Abdallah's killer was a soldier under his command. A few days later, Bob Denard was evacuated to South Africa by French paratroopers. Said Mohamed Djohar, Soilihi's older half-brother, then became president, and served until September 1995, when Bob Denard returned and attempted another coup. This time France intervened with paratroopers and forced Denard to surrender. The French removed Djohar to Réunion, and the Paris-backed Mohamed Taki Abdoulkarim became president by election. He led the country from 1996, during a time of labour crises, government suppression, and secessionist conflicts, until his death in November 1998. He was succeeded by Interim President Tadjidine Ben Said Massounde. The islands of Ndzwani and Mwali declared their independence from the Comoros in 1997, in an attempt to restore French rule. France, however, rejected their request, leading to bloody confrontations between federal troops and rebels. In April 1999, Colonel Azali Assoumani, Army Chief of Staff, seized power in a bloodless coup, overthrowing the Interim President Massounde and citing weak leadership in the face of the crisis. This was the Comoros' 18th coup or attempted coup d'état since independence in 1975. Azali failed to consolidate power and reestablish control over the islands, which was the subject of international criticism. The African Union, under the auspices of President Thabo Mbeki of South Africa, imposed sanctions on Ndzwani to help broker negotiations and effect reconciliation. Under the terms of the Fomboni Accords, signed in December 2001 by the leaders of all three islands, the official name of the country was changed to the Union of the Comoros; the new state was to be highly decentralised, and the central union government would devolve most powers to the new island governments, each led by a president. The Union president, although elected by national elections, would be chosen in rotation from each of the islands every five years. Azali stepped down in 2002 to run in the democratic election of the President of the Comoros, which he won. Under ongoing international pressure, as a military ruler who had originally come to power by force and was not always democratic while in office, Azali led the Comoros through constitutional changes that enabled new elections.
A Loi des compétences (law on the division of powers) was passed in early 2005; it defines the responsibilities of each governmental body and is in the process of implementation. The elections in 2006 were won by Ahmed Abdallah Mohamed Sambi, a Sunni Muslim cleric nicknamed the "Ayatollah" for his time spent studying Islam in Iran. Azali honoured the election results, thus allowing the first peaceful and democratic exchange of power for the archipelago. Colonel Mohammed Bacar, a French-trained former gendarme elected President of Ndzwani in 2001, refused to step down at the end of his five-year mandate. He staged a vote in June 2007 to confirm his leadership that was rejected as illegal by the Comoros federal government and the African Union. On 25 March 2008 hundreds of soldiers from the African Union and the Comoros seized rebel-held Ndzwani, a move generally welcomed by the population: there had been reports of hundreds, if not thousands, of people tortured during Bacar's tenure. Some rebels were killed and injured, but there are no official figures. At least 11 civilians were wounded. Some officials were imprisoned. Bacar fled in a speedboat to Mayotte to seek asylum. Anti-French protests followed in the Comoros (see 2008 invasion of Anjouan). Bacar was eventually granted asylum in Benin. Since independence from France, the Comoros has experienced more than 20 coups or attempted coups. Following elections in late 2010, former Vice-president Ikililou Dhoinine was inaugurated as president on 26 May 2011. A member of the ruling party, Dhoinine was supported in the election by the incumbent President Ahmed Abdallah Mohamed Sambi. Dhoinine, a pharmacist by training, is the first President of the Comoros from the island of Mwali. Following the 2016 elections, Azali Assoumani, from Ngazidja, became president for a third term. In 2018 Azali held a referendum on constitutional reform that would permit a president to serve two terms. The amendments passed, although the vote was widely contested and boycotted by the opposition, and in April 2019, amid widespread opposition, Azali was re-elected president to serve the first of potentially two five-year terms. In January 2020, legislative elections were dominated by President Azali Assoumani's party, the Convention for the Renewal of the Comoros (CRC), which took 17 of the 24 seats in parliament, strengthening his hold on power. In 2021, Comoros signed and ratified the Treaty on the Prohibition of Nuclear Weapons, making it a nuclear-weapon-free state. In 2023, Comoros was invited as a non-member guest to the G7 summit in Hiroshima, and on 18 February 2023 the Comoros assumed the presidency of the African Union. Geography The Comoros is formed by Ngazidja (Grande Comore), Mwali (Mohéli) and Ndzwani (Anjouan), three major islands in the Comoros Archipelago, as well as many minor islets. The islands are officially known by their Comorian language names, though international sources still use their French names (given in parentheses above). The capital and largest city, Moroni, is located on Ngazidja. The archipelago is situated in the Indian Ocean, in the Mozambique Channel, between the African coast (nearest to Mozambique and Tanzania) and Madagascar, with no land borders. It is one of the smallest countries in the world, and also claims territorial seas. The interiors of the islands vary from steep mountains to low hills.
Of the main islands, Ngazidja is the largest in the Comoros Archipelago, with an area of 1,024 km2. It is also the most recent island, and therefore has rocky soil. The island's two volcanoes, Karthala (active) and La Grille (dormant), and the lack of good harbours are distinctive characteristics of its terrain. Mwali, with its capital at Fomboni, is the smallest of the four major islands. Ndzwani, whose capital is Mutsamudu, has a distinctive triangular shape caused by three mountain chains – Shisiwani, Nioumakele and Jimilime – emanating from a central peak, Mount Ntringui (1,595 m). The islands of the Comoros Archipelago were formed by volcanic activity. Mount Karthala, an active shield volcano located on Ngazidja, is the country's highest point, at 2,361 m. It contains the Comoros' largest patch of disappearing rainforest. Karthala is currently one of the most active volcanoes in the world, with a minor eruption in May 2006, and prior eruptions as recently as April 2005 and 1991. In the 2005 eruption, which lasted from 17 to 19 April, 40,000 citizens were evacuated, and the crater lake in the volcano's caldera was destroyed. The Comoros also lays claim to the Îles Éparses or Îles éparses de l'océan indien (Scattered Islands in the Indian Ocean) – the Glorioso Islands, comprising Grande Glorieuse, Île du Lys, Wreck Rock, South Rock, Verte Rocks (three islets) and three unnamed islets – one of France's overseas districts. The Glorioso Islands were administered by the colonial Comoros before 1975, and are therefore sometimes considered part of the Comoros Archipelago. Banc du Geyser, a former island in the Comoros Archipelago, now submerged, is geographically located in the Îles Éparses, but was annexed by Madagascar in 1976 as an unclaimed territory. The Comoros and France each still view the Banc du Geyser as part of the Glorioso Islands and, thus, part of their respective exclusive economic zones. Climate The climate is generally tropical and mild, and the two major seasons are distinguishable by their raininess. Temperatures are highest on average in March, the hottest month of the rainy season (called kashkazi/kaskazi, meaning north monsoon, which runs from November to April), and lowest on average in the cool, dry season (kusi, meaning south monsoon, which runs from May to October). The islands are rarely subject to cyclones. Biodiversity The Comoros constitute an ecoregion in their own right, the Comoros forests. The country had a 2018 Forest Landscape Integrity Index mean score of 7.69/10, ranking it 33rd globally out of 172 countries. In December 1952 a specimen of the West Indian Ocean coelacanth fish was re-discovered off the Comoros coast. The 66-million-year-old species was thought to have been long extinct until its first recorded appearance in 1938 off the South African coast. Between 1938 and 1975, 84 specimens were caught and recorded. Protected areas There are six national parks in the Comoros – Karthala, Coelacanth, and Mitsamiouli Ndroudi on Grande Comore, Mount Ntringui and Shisiwani on Anjouan, and Mohéli National Park on Mohéli. Karthala and Mount Ntringui national parks cover the highest peaks on the respective islands, and Coelacanth, Mitsamiouli Ndroudi, and Shisiwani are marine national parks that protect the islands' coastal waters and fringing reefs. Mohéli National Park includes both terrestrial and marine areas.
Government Politics of the Comoros takes place in a framework of a federal presidential republic, whereby the President of the Comoros is both head of state and head of government, and of a multi-party system. The Constitution of the Union of the Comoros was ratified by referendum on 23 December 2001, and the islands' constitutions and executives were elected in the following months. The country had previously been considered a military dictatorship, and the transfer of power from Azali Assoumani to Ahmed Abdallah Mohamed Sambi in May 2006 was a watershed moment, as it was the first peaceful transfer in Comorian history. Executive power is exercised by the government. Federal legislative power is vested in both the government and parliament. The preamble of the constitution guarantees an Islamic inspiration in governance, a commitment to human rights and several specifically enumerated rights, democracy, and "a common destiny" for all Comorians. Each of the islands (according to Title II of the Constitution) has a great amount of autonomy within the Union, including its own constitution (or Fundamental Law), president, and parliament. The presidency and Assembly of the Union are distinct from each of the islands' governments. The presidency of the Union rotates between the islands. Despite widespread misgivings about the durability of the system of presidential rotation, Ngazidja currently holds the rotating presidency, with Azali as President of the Union; Ndzwani is in theory due to provide the next president. Legal system The Comorian legal system rests on Islamic law, an inherited French (Napoleonic Code) legal code, and customary law (mila na ntsi). Village elders, kadis or civilian courts settle most disputes. The judiciary is independent of the legislative and the executive. The Supreme Court acts as a Constitutional Council in resolving constitutional questions and supervising presidential elections. As High Court of Justice, the Supreme Court also arbitrates in cases where the government is accused of malpractice. The Supreme Court consists of two members selected by the president, two elected by the Federal Assembly, and one by the council of each island. Political culture Around 80 percent of the central government's annual budget is spent on the country's complex administrative system, which provides for a semi-autonomous government and president for each of the three islands and a rotating presidency for the overarching Union government. A referendum took place on 16 May 2009 to decide whether to cut down the government's unwieldy political bureaucracy. 52.7% of those eligible voted, and 93.8% of votes were cast in approval of the referendum. Following the implementation of the changes, each island's president became a governor and the ministers became councillors. Foreign relations In November 1975, the Comoros became the 143rd member of the United Nations. The new nation was defined as comprising the entire archipelago, although the citizens of Mayotte chose to become French citizens and keep their island as a French territory. The Comoros has repeatedly pressed its claim to Mayotte before the United Nations General Assembly, which adopted a series of resolutions under the caption "Question of the Comorian Island of Mayotte", opining that Mayotte belongs to the Comoros under the principle that the territorial integrity of colonial territories should be preserved upon independence.
As a practical matter, however, these resolutions have little effect, and there is no foreseeable likelihood that Mayotte will become de facto part of the Comoros without its people's consent. More recently, the Assembly has maintained this item on its agenda but deferred it from year to year without taking action. Other bodies, including the Organization of African Unity, the Movement of Non-Aligned Countries and the Organisation of Islamic Cooperation, have similarly questioned French sovereignty over Mayotte. To close the debate and to avoid being integrated by force into the Union of the Comoros, the population of Mayotte overwhelmingly chose to become an overseas department and a region of France in a 2009 referendum. The new status took effect on 31 March 2011, and Mayotte was recognised as an outermost region by the European Union on 1 January 2014. These decisions legally integrate Mayotte into the French Republic. The Comoros is a member of the United Nations, the African Union, the Arab League, the World Bank, the International Monetary Fund, the Indian Ocean Commission and the African Development Bank. On 10 April 2008, the Comoros became the 179th nation to accept the Kyoto Protocol to the United Nations Framework Convention on Climate Change. The Comoros signed the UN Treaty on the Prohibition of Nuclear Weapons. Azali Assoumani, President of the Comoros and Chair of the African Union, attended the 2023 Russia–Africa Summit in Saint Petersburg. In May 2013 the Union of the Comoros filed a referral to the Office of the Prosecutor of the International Criminal Court (ICC) regarding the events of "the 31 May 2010 Israeli raid on the Humanitarian Aid Flotilla bound for [the] Gaza Strip". In November 2014 the ICC Prosecutor decided that the events did constitute war crimes but did not meet the gravity standard for bringing the case before the ICC. The emigration rate of skilled workers was about 21.2% in 2000. Military The military resources of the Comoros consist of a small standing army and a 500-member police force, as well as a 500-member defence force. A defence treaty with France provides naval resources for protection of territorial waters, training of Comorian military personnel, and air surveillance. France maintains the presence of a few senior officers in the Comoros at government request, as well as a small maritime base and a Foreign Legion Detachment (DLEM) on Mayotte. After the new government was installed in May–June 2011, an expert mission from UNREC (Lomé) came to the Comoros and produced guidelines for the elaboration of a national security policy, which were discussed by different actors, notably the national defence authorities and civil society. By the end of the programme in March 2012, a normative framework agreed upon by all entities involved in security sector reform (SSR) was to have been established, to be adopted by Parliament and implemented by the authorities. Human rights Both male and female same-sex sexual acts are illegal in Comoros. Such acts are punished with up to five years' imprisonment. Economy The level of poverty in the Comoros is high, but "judging by the international poverty threshold of $1.9 per person per day, only two out of every ten Comorians could be classified as poor, a rate that places the Comoros ahead of other low-income countries and 30 percentage points ahead of other countries in Sub-Saharan Africa." Poverty declined by about 10% between 2014 and 2018, and living conditions generally improved.
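To put the quoted poverty figures in concrete terms, here is a quick back-of-the-envelope calculation in Python. It uses only numbers already given in the text (the $1.9-per-day threshold and the "two out of every ten" share) plus the roughly 850,000-person population from the lead; the 365-day annualisation and the rounding are assumptions made purely for illustration.

```python
# Back-of-the-envelope illustration of the poverty figures quoted above.
# The daily threshold and the "two out of every ten" share come from the text;
# the ~850,000 population is the estimate given in the lead, and the 365-day
# annualisation is an assumption made purely for illustration.
poverty_line_per_day = 1.9       # USD per person per day (quoted threshold)
population = 850_000             # approximate population of the Comoros
poor_share = 2 / 10              # "two out of every ten Comorians"

annual_threshold = poverty_line_per_day * 365
estimated_poor = round(population * poor_share)

print(f"Annualised threshold: about ${annual_threshold:.2f} per person per year")
print(f"Implied headcount below the threshold: roughly {estimated_poor:,} people")
```

The output (about $693.50 per person per year, and roughly 170,000 people) is only an order-of-magnitude illustration of what the quoted rate implies, not an official statistic.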
Economic inequality remains widespread, with a major gap between rural and urban areas. Remittances from the sizable Comorian diaspora form a substantial part of the country's GDP and have contributed to decreases in poverty and increases in living standards. According to the ILO's ILOSTAT statistical database, between 1991 and 2019 the unemployment rate as a percentage of the total labor force remained between roughly 4.3% and 4.4%. An October 2005 paper by the Comoros Ministry of Planning and Regional Development, however, reported that the "registered unemployment rate is 14.3 percent, distributed very unevenly among and within the islands, but with marked incidence in urban areas." In 2019, more than 56% of the labor force was employed in agriculture, with 29% employed in industry and 14% employed in services. The islands' agricultural sector is based on the export of spices, including vanilla, cinnamon, and cloves, and is thus susceptible to price fluctuations in the volatile world commodity market for these goods. The Comoros is the world's largest producer of ylang-ylang, a plant whose extracted essential oil is used in the perfume industry; some 80% of the world's supply comes from the Comoros. High population densities, as much as 1,000 people per square kilometre in the densest agricultural zones, in what is still a mostly rural, agricultural economy may lead to an environmental crisis in the near future, especially considering the high rate of population growth. In 2004 the Comoros' real GDP growth was a low 1.9% and real GDP per capita continued to decline. These declines are explained by factors including declining investment, drops in consumption, rising inflation, and an increase in trade imbalance due in part to lowered cash crop prices, especially vanilla. Fiscal policy is constrained by erratic fiscal revenues, a bloated civil service wage bill, and an external debt that is far above the HIPC threshold. Membership in the franc zone, the main anchor of stability, has nevertheless helped contain pressures on domestic prices. The Comoros has an inadequate transportation system, a young and rapidly increasing population, and few natural resources. The low educational level of the labour force contributes to a subsistence level of economic activity, high unemployment, and a heavy dependence on foreign grants and technical assistance. Agriculture contributes 40% to GDP and provides most of the exports. The government is struggling to upgrade education and technical training, to privatise commercial and industrial enterprises, to improve health services, to diversify exports, to promote tourism, and to reduce the high population growth rate. The Comoros is a member of the Organization for the Harmonization of Business Law in Africa (OHADA). Demographics With about 850,000 residents, the Comoros is one of the least-populous countries in the world, but its population density is high. In 2001, 34% of the population was considered urban, but the urban population has since grown; in recent years rural population growth has been negative, while overall population growth is still relatively high. In 1958 the population was 183,133. Almost half the population of the Comoros is under the age of 15. Major urban centres include Moroni, Mitsamihuli, Fumbuni, Mutsamudu, Domoni, and Fomboni. There are between 200,000 and 350,000 Comorians in France. Ethnic groups The population of the Comoros is 97.1% ethnically Comorian, a mixture of Bantu, Malagasy, and Arab peoples.
Minorities include Makua and Indian (mostly Ismaili). There are recent immigrants of Chinese origin in Grande Comore (especially Moroni). Although most French left after independence in 1975, a small Creole community, descended from settlers from France, Madagascar and Réunion, lives in the Comoros. Languages The most common languages in the Comoros are the Comorian languages, collectively known as Shikomori. They are related to Swahili, and the four different variants (Shingazidja, Shimwali, Shindzwani and Shimaore) are spoken on each of the four islands. Arabic and Latin scripts are both used, Arabic being the more widely used, and an official orthography has recently been developed for the Latin script. Arabic and French are also official languages, along with Comorian. Arabic is widely known as a second language, being the language of Quranic teaching. French is the administrative language and the language of most non-Quranic formal education. Religion Sunni Islam is the dominant religion, followed by as much as 99% of the population. Comoros is the only Muslim-majority country in Southern Africa and the third southernmost Muslim-majority territory after Mayotte and the Australian territory of Cocos Islands. A minority of the population of the Comoros are Christian, both Catholic and Protestant denominations are represented, and most Malagasy residents are also Christian. Immigrants from metropolitan France are mostly Catholic. Health There are 15 physicians per 100,000 people. The fertility rate was 4.7 per adult woman in 2004. Life expectancy at birth is 67 for females and 62 for males. Education Almost all children attend Quranic schools, usually before, although increasingly in tandem with regular schooling. Children are taught about the Qur'an, and memorise it, and learn the Arabic script. Most parents prefer their children to attend Koran schools before moving on to the French-based schooling system. Although the state sector is plagued by a lack of resources, and the teachers by unpaid salaries, there are numerous private and community schools of relatively good standard. The national curriculum, apart from a few years during the revolutionary period immediately post-independence, has been very much based on the French system, both because resources are French and most Comorians hope to go on to further education in France. There have recently been moves to Comorianise the syllabus and integrate the two systems, the formal and the Quran schools, into one, thus moving away from the secular educational system inherited from France. Pre-colonization education systems in Comoros focused on necessary skills such as agriculture, caring for livestock and completing household tasks. Religious education also taught children the virtues of Islam. The education system underwent a transformation during colonization in the early 1900s which brought secular education based on the French system. This was mainly for children of the elite. After Comoros gained independence in 1975, the education system changed again. Funding for teachers' salaries was lost, and many went on strike. Thus, the public education system was not functioning between 1997 and 2001. Since gaining independence, the education system has also undergone a democratization and options exist for those other than the elite. Enrollment has also grown. In 2000, 44.2% of children aged 5 to 14 years were attending school. There is a general lack of facilities, equipment, qualified teachers, textbooks and other resources. 
Salaries for teachers are often so far in arrears that many refuse to work. Prior to 2000, students seeking a university education had to attend school outside of the country. However, in the early 2000s a university was created in the country. This served to help economic growth and to fight the "flight" of many educated people who were not returning to the islands to work. Comorian has no native script, but both the Arabic and Latin alphabets are used. In 2004, about 57 percent of the population was literate in the Latin script while more than 90 percent were literate in the Arabic script. Culture Traditionally, women on Ndzwani wear red and white patterned garments called shiromani, while on Ngazidja and Mwali colourful shawls called leso are worn. Many women apply a paste of ground sandalwood and coral called msindzano to their faces. Traditional male clothing is a long white shirt known as a nkandu, and a bonnet called a kofia. Marriage There are two types of marriages in Comoros, the little marriage (known as Mna daho on Ngazidja) and the customary marriage (known as ada on Ngazidja, harusi on the other islands). The little marriage is a simple legal marriage. It is small, intimate, and inexpensive, and the bride's dowry is nominal. A man may undertake a number of Mna daho marriages in his lifetime, often at the same time, a woman fewer; but both men and women will usually only undertake one ada, or grand marriage, and this must generally be within the village. The hallmarks of the grand marriage are dazzling gold jewelry, two weeks of celebration and an enormous bridal dowry. Although the expenses are shared between both families as well as with a wider social circle, an ada wedding on Ngazidja can cost up to €50,000. Many couples take a lifetime to save for their ada, and it is not uncommon for a marriage to be attended by a couple's adult children. The ada marriage marks a man's transition in the Ngazidja age system from youth to elder. His status in the social hierarchy greatly increases, and he will henceforth be entitled to speak in public and participate in the political process, both in his village and more widely across the island. He will be entitled to display his status by wearing a mharuma, a type of shawl, across his shoulders, and he can enter the mosque by the door reserved for elders, and sit at the front. A woman's status also changes, although less formally, as she becomes a "mother" and moves into her own house. The system is less formalised on the other islands, but the marriage is nevertheless a significant and costly event across the archipelago. The ada is often criticized because of its great expense, but at the same time it is a source of social cohesion and the main reason why migrants in France and elsewhere continue to send money home. Increasingly, marriages are also being taxed for the purposes of village development. Kinship and social structure Comorian society has a bilateral descent system. Lineage membership and inheritance of immovable goods (land, housing) is matrilineal, passed in the maternal line, similar to many Bantu peoples who are also matrilineal, while other goods and patronymics are passed in the male line. However, there are differences between the islands, the matrilineal element being stronger on Ngazidja. Music Twarab music, imported from Zanzibar in the early 20th century, remains the most influential genre on the islands and is popular at ada marriages. 
Media There are two daily national newspapers published in the Comoros, the government-owned Al-Watwan, and the privately owned La Gazette des Comores, both published in Moroni. There are a number of smaller newsletters published on an irregular basis as well as a variety of news websites. The government-owned ORTC (Office de Radio et Télévision des Comores) provides national radio and television service. There is a TV station run by the Anjouan regional government, and regional governments on the islands of Grande Comore and Anjouan each operate a radio station. There are also a few independent and small community radio stations that operate on the islands of Grande Comore and Mohéli, and these two islands have access to Mayotte Radio and French TV. See also Index of Comoros-related articles Notes References Citations Sources This article incorporates text from the Library of Congress Country Studies, which is in the public domain. External links Union des Comores – Official government website Tourism website Embassy des Comores – The Federal and Islamic Republic of the Comoros in New York, United States Comoros from the BBC News Key Development Forecasts for Comoros from International Futures Countries in Africa 1975 establishments in Africa Countries and territories where Arabic is an official language Comoros archipelago East African countries Federal republics French-speaking countries and territories Island countries of the Indian Ocean Island countries Least developed countries Member states of the African Union Member states of the Arab League Member states of the Organisation internationale de la Francophonie Member states of the Organisation of Islamic Cooperation Member states of the United Nations Small Island Developing States States and territories established in 1503 States and territories established in 1975
5404
https://en.wikipedia.org/wiki/Critical%20philosophy
Critical philosophy
The critical philosophy movement, attributed to Immanuel Kant (1724–1804), sees the primary task of philosophy as criticism rather than justification of knowledge. Criticism, for Kant, meant judging as to the possibilities of knowledge before advancing to knowledge itself (from the Greek kritike (techne), or "art of judgment"). The basic task of philosophers, according to this view, is not to establish and demonstrate theories about reality, but rather to subject all theories—including those about philosophy itself—to critical review, and to measure their validity by how well they withstand criticism. "Critical philosophy" is also used as another name for Kant's philosophy itself. Kant said that philosophy's proper inquiry is not about what is out there in reality, but rather about the character and foundations of experience itself. We must first judge how human reason works, and within what limits, so that we can afterwards correctly apply it to sense experience and determine whether it can be applied at all to metaphysical objects. The three principal sources on which the critical philosophy is based are the three critiques, namely Critique of Pure Reason, Critique of Practical Reason and Critique of Judgement, published between 1781 and 1790 and mostly concerned, respectively, with metaphysics, ethics and aesthetics.

See also
Critical idealism
Critical thinking
Charles Bernard Renouvier
Léon Brunschvicg

References
Stanford Encyclopedia of Philosophy: Immanuel Kant

Kantianism
5405
https://en.wikipedia.org/wiki/China
China
China (), officially the People's Republic of China (PRC), is a country in East Asia. It is the world's second-most-populous country, with a population exceeding 1.4 billion. China spans the equivalent of five time zones and borders fourteen countries by land, tied with Russia as having the most of any country in the world. With an area of nearly , it is the world's third-largest country by total land area. The country is divided into 22 provinces, five autonomous regions, four municipalities, and two semi-autonomous special administrative regions. The national capital is Beijing, and the most populous city and largest financial center is Shanghai. The region that is now China has been inhabited since the Paleolithic era. The earliest Chinese dynastic states, such as the Shang and the Zhou, emerged in the basin of the Yellow River before the late second millennium BCE. The eighth to third centuries BCE saw a breakdown in Zhou authority and significant conflict, as well as the emergence of Classical Chinese literature and philosophy. In 221 BCE, China was unified under an emperor, ushering in more than two millennia in which China was governed by one or more imperial dynasties, including the Han, Tang, Ming and Qing. Some of China's most notable achievements—such as the invention of gunpowder and paper, the establishment of the Silk Road, and the building of the Great Wall—occurred during this period. The Chinese culture—including languages, traditions, architecture, philosophy and more—has heavily influenced East Asia during this imperial period. In 1912, the Chinese monarchy was overthrown and the Republic of China established. The Republic saw consistent conflict for most of the mid-20th century, including a civil war between the Kuomintang government and the Chinese Communist Party (CCP), which began in 1927, as well as the Second Sino-Japanese War that began in 1937 and continued until 1945, therefore becoming involved in World War II. The latter led to a temporary stop in the civil war and numerous Japanese atrocities such as the Nanjing Massacre, which continue to influence China–Japan relations. In 1949, the CCP established control over China as the Kuomintang fled to Taiwan. Early communist rule saw two major projects: the Great Leap Forward, which resulted in a sharp economic decline and massive famine; and the Cultural Revolution, a movement to purge all non-communist elements of Chinese society that led to mass violence and persecution. Beginning in 1978, the Chinese government began economic reforms that moved the country away from planned economics, but political reforms were cut short by the 1989 Tiananmen Square protests, which ended in a massacre. Despite the event, the economic reform continued to strengthen the nation's economy in the following decades while raising China's standard of living significantly. China is a unitary one-party socialist republic led by the CCP. It is one of the five permanent members of the UN Security Council and a founding member of several multilateral and regional organizations such as the Asian Infrastructure Investment Bank, the Silk Road Fund, the New Development Bank, and the RCEP. It is also a member of the BRICS, the G20, APEC, and the East Asia Summit. China ranks poorly in measures of democracy, transparency, and human rights, including for press freedom, religious freedom, and ethnic equality. 
Making up around one-fifth of the world economy, China is the world's largest economy by GDP at purchasing power parity, the second-largest economy by nominal GDP, and the second-wealthiest country. The country is one of the fastest-growing major economies and is the world's largest manufacturer and exporter, as well as the second-largest importer. China is a nuclear-weapon state with the world's largest standing army by military personnel and the second-largest defense budget. Etymology The word "China" has been used in English since the 16th century; however, it was not a word used by the Chinese themselves during this period. Its origin has been traced through Portuguese, Malay, and Persian back to the Sanskrit word Cīna, used in ancient India. "China" appears in Richard Eden's 1555 translation of the 1516 journal of the Portuguese explorer Duarte Barbosa. Barbosa's usage was derived from Persian Chīn (), which was in turn derived from Sanskrit Cīna (). Cīna was first used in early Hindu scripture, including the Mahābhārata (5th century BCE) and the Laws of Manu (2nd century BCE). In 1655, Martino Martini suggested that the word China is derived ultimately from the name of the Qin dynasty (221–206 BCE). Although usage in Indian sources precedes this dynasty, this derivation is still given in various sources. The origin of the Sanskrit word is a matter of debate, according to the Oxford English Dictionary. Alternative suggestions include the names for Yelang and the Jing or Chu state. The official name of the modern state is the "People's Republic of China" (). The shorter form is "China" () from ("central") and ("state"), a term which developed under the Western Zhou dynasty in reference to its royal demesne. It was then applied to the area around Luoyi (present-day Luoyang) during the Eastern Zhou and then to China's Central Plain before being used in official documents as an synonym for the state under the Qing. It was sometimes also used as a cultural concept to distinguish the Huaxia people from perceived "barbarians". The name Zhongguo is also translated as in English. China (PRC) is sometimes referred to as the Mainland when distinguishing the ROC from the PRC. History Prehistory China is regarded as one of the world's oldest civilizations. Archaeological evidence suggests that early hominids inhabited the country 2.25 million years ago. The hominid fossils of Peking Man, a Homo erectus who used fire, were discovered in a cave at Zhoukoudian near Beijing; they have been dated to between 680,000 and 780,000 years ago. The fossilized teeth of Homo sapiens (dated to 125,000–80,000 years ago) have been discovered in Fuyan Cave in Dao County, Hunan. Chinese proto-writing existed in Jiahu around 6600 BCE, at Damaidi around 6000 BCE, Dadiwan from 5800 to 5400 BCE, and Banpo dating from the 5th millennium BCE. Some scholars have suggested that the Jiahu symbols (7th millennium BCE) constituted the earliest Chinese writing system. Early dynastic rule According to Chinese tradition, the first dynasty was the Xia, which emerged around 2100 BCE. The Xia dynasty marked the beginning of China's political system based on hereditary monarchies, or dynasties, which lasted for a millennium. The Xia dynasty was considered mythical by historians until scientific excavations found early Bronze Age sites at Erlitou, Henan in 1959. It remains unclear whether these sites are the remains of the Xia dynasty or of another culture from the same period. 
The succeeding Shang dynasty is the earliest to be confirmed by contemporary records. The Shang ruled the plain of the Yellow River in eastern China from the 17th to the 11th century BCE. Their oracle bone script (from BCE) represents the oldest form of Chinese writing yet found and is a direct ancestor of modern Chinese characters. The Shang was conquered by the Zhou, who ruled between the 11th and 5th centuries BCE, though centralized authority was slowly eroded by feudal warlords. Some principalities eventually emerged from the weakened Zhou, no longer fully obeyed the Zhou king, and continually waged war with each other during the 300-year Spring and Autumn period. By the time of the Warring States period of the 5th–3rd centuries BCE, there were seven major powerful states left. Imperial China The Warring States period ended in 221 BCE after the state of Qin conquered the other six kingdoms, reunited China and established the dominant order of autocracy. King Zheng of Qin proclaimed himself the First Emperor of the Qin dynasty. He enacted Qin's legalist reforms throughout China, notably the forced standardization of Chinese characters, measurements, road widths (i.e., the cart axles' length), and currency. His dynasty also conquered the Yue tribes in Guangxi, Guangdong, and Northern Vietnam. The Qin dynasty lasted only fifteen years, falling soon after the First Emperor's death, as his harsh authoritarian policies led to widespread rebellion. Following a widespread civil war during which the imperial library at Xianyang was burned, the Han dynasty emerged to rule China between 206 BCE and CE 220, creating a cultural identity among its populace still remembered in the ethnonym of the modern Han Chinese. The Han expanded the empire's territory considerably, with military campaigns reaching Central Asia, Mongolia, South Korea, and Yunnan, and the recovery of Guangdong and northern Vietnam from Nanyue. Han involvement in Central Asia and Sogdia helped establish the land route of the Silk Road, replacing the earlier path over the Himalayas to India. Han China gradually became the largest economy of the ancient world. Despite the Han's initial decentralization and the official abandonment of the Qin philosophy of Legalism in favor of Confucianism, Qin's legalist institutions and policies continued to be employed by the Han government and its successors. After the end of the Han dynasty, a period of strife known as Three Kingdoms followed, whose central figures were later immortalized in one of the Four Classics of Chinese literature. At its end, Wei was swiftly overthrown by the Jin dynasty. The Jin fell to civil war upon the ascension of a developmentally disabled emperor; the Five Barbarians then invaded and ruled northern China as the Sixteen States. The Xianbei unified them as the Northern Wei, whose Emperor Xiaowen reversed his predecessors' apartheid policies and enforced a drastic sinification on his subjects, largely integrating them into Chinese culture. In the south, the general Liu Yu secured the abdication of the Jin in favor of the Liu Song. The various successors of these states became known as the Northern and Southern dynasties, with the two areas finally reunited by the Sui in 581. The Sui restored the Han to power through China, reformed its agriculture, economy and imperial examination system, constructed the Grand Canal, and patronized Buddhism. 
However, they fell quickly when their conscription for public works and a failed war in northern Korea provoked widespread unrest. Under the succeeding Tang and Song dynasties, Chinese economy, technology, and culture entered a golden age. The Tang dynasty retained control of the Western Regions and the Silk Road, which brought traders to as far as Mesopotamia and the Horn of Africa, and made the capital Chang'an a cosmopolitan urban center. However, it was devastated and weakened by the An Lushan Rebellion in the 8th century. In 907, the Tang disintegrated completely when the local military governors became ungovernable. The Song dynasty ended the separatist situation in 960, leading to a balance of power between the Song and Khitan Liao. The Song was the first government in world history to issue paper money and the first Chinese polity to establish a permanent standing navy which was supported by the developed shipbuilding industry along with the sea trade. Between the 10th and 11th century CE, the population of China doubled in size to around 100 million people, mostly because of the expansion of rice cultivation in central and southern China, and the production of abundant food surpluses. The Song dynasty also saw a revival of Confucianism, in response to the growth of Buddhism during the Tang, and a flourishing of philosophy and the arts, as landscape art and porcelain were brought to new levels of maturity and complexity. However, the military weakness of the Song army was observed by the Jurchen Jin dynasty. In 1127, Emperor Huizong of Song and the capital Bianjing were captured during the Jin–Song Wars. The remnants of the Song retreated to southern China. The Mongol conquest of China began in 1205 with the gradual conquest of Western Xia by Genghis Khan, who also invaded Jin territories. In 1271, the Mongol leader Kublai Khan established the Yuan dynasty, which conquered the last remnant of the Song dynasty in 1279. Before the Mongol invasion, the population of Song China was 120 million citizens; this was reduced to 60 million by the time of the census in 1300. A peasant named Zhu Yuanzhang led a rebellion that overthrew the Yuan in 1368 and founded the Ming dynasty as the Hongwu Emperor. Under the Ming dynasty, China enjoyed another golden age, developing one of the strongest navies in the world and a rich and prosperous economy amid a flourishing of art and culture. It was during this period that admiral Zheng He led the Ming treasure voyages throughout the Indian Ocean, reaching as far as East Africa. In the early years of the Ming dynasty, China's capital was moved from Nanjing to Beijing. With the budding of capitalism, philosophers such as Wang Yangming further critiqued and expanded Neo-Confucianism with concepts of individualism and equality of four occupations. The scholar-official stratum became a supporting force of industry and commerce in the tax boycott movements, which, together with the famines and defense against Japanese invasions of Korea (1592–1598) and Later Jin incursions led to an exhausted treasury. In 1644, Beijing was captured by a coalition of peasant rebel forces led by Li Zicheng. The Chongzhen Emperor committed suicide when the city fell. The Manchu Qing dynasty, then allied with Ming dynasty general Wu Sangui, overthrew Li's short-lived Shun dynasty and subsequently seized control of Beijing, which became the new capital of the Qing dynasty. The Qing dynasty, which lasted from 1644 until 1912, was the last imperial dynasty of China. 
The Ming-Qing transition (1618–1683) cost 25 million lives in total, but the Qing appeared to have restored China's imperial power and inaugurated another flowering of the arts. After the Southern Ming ended, the further conquest of the Dzungar Khanate added Mongolia, Tibet and Xinjiang to the empire. Meanwhile, China's population growth resumed and shortly began to accelerate. It is commonly agreed that pre-modern China's population experienced two growth spurts, one during the Northern Song period (960-1127), and other during the Qing period (around 1700–1830). By the High Qing era China was possibly the most commercialized country in the world, and imperial China experienced a second commercial revolution in the economic history of China by the end of the 18th century. On the other hand, the centralized autocracy was strengthened in part to suppress anti-Qing sentiment with the policy of valuing agriculture and restraining commerce, like the Haijin during the early Qing period and ideological control as represented by the literary inquisition, causing some social and technological stagnation. Fall of the Qing dynasty In the mid-19th century, the Qing dynasty experienced Western imperialism in the Opium Wars with Britain and France. China was forced to pay compensation, open treaty ports, allow extraterritoriality for foreign nationals, and cede Hong Kong to the British under the 1842 Treaty of Nanking, the first of the Unequal Treaties. The First Sino-Japanese War (1894–1895) resulted in Qing China's loss of influence in the Korean Peninsula, as well as the cession of Taiwan to Japan. The Qing dynasty also began experiencing internal unrest in which tens of millions of people died, especially in the White Lotus Rebellion, the failed Taiping Rebellion that ravaged southern China in the 1850s and 1860s and the Dungan Revolt (1862–1877) in the northwest. The initial success of the Self-Strengthening Movement of the 1860s was frustrated by a series of military defeats in the 1880s and 1890s. In the 19th century, the great Chinese diaspora began. Losses due to emigration were added to by conflicts and catastrophes such as the Northern Chinese Famine of 1876–1879, in which between 9 and 13 million people died. The Guangxu Emperor drafted a reform plan in 1898 to establish a modern constitutional monarchy, but these plans were thwarted by the Empress Dowager Cixi. The ill-fated anti-foreign Boxer Rebellion of 1899–1901 further weakened the dynasty. Although Cixi sponsored a program of reforms known as the late Qing reforms, the Xinhai Revolution of 1911–1912 brought an end to the Qing dynasty and established the Republic of China. Puyi, the last Emperor of China, abdicated in 1912. Establishment of the Republic and World War II On 1 January 1912, the Republic of China was established, and Sun Yat-sen of the Kuomintang (the KMT or Nationalist Party) was proclaimed provisional president. On 12 February 1912, regent Empress Dowager Longyu sealed the imperial abdication decree on behalf of 4 year old Puyi, the last emperor of China, ending 5,000 years of monarchy in China. In March 1912, the presidency was given to Yuan Shikai, a former Qing general who in 1915 proclaimed himself Emperor of China. In the face of popular condemnation and opposition from his own Beiyang Army, he was forced to abdicate and re-establish the republic in 1916. After Yuan Shikai's death in 1916, China was politically fragmented. 
Its Beijing-based government was internationally recognized but virtually powerless; regional warlords controlled most of its territory. In the late 1920s, the Kuomintang under Chiang Kai-shek, the then Principal of the Republic of China Military Academy, was able to reunify the country under its own control with a series of deft military and political maneuverings, known collectively as the Northern Expedition. The Kuomintang moved the nation's capital to Nanjing and implemented "political tutelage", an intermediate stage of political development outlined in Sun Yat-sen's San-min program for transforming China into a modern democratic state. The political division in China made it difficult for Chiang to battle the communist-led People's Liberation Army (PLA), against whom the Kuomintang had been warring since 1927 in the Chinese Civil War. This war continued successfully for the Kuomintang, especially after the PLA retreated in the Long March, until Japanese aggression and the 1936 Xi'an Incident forced Chiang to confront Imperial Japan. The Second Sino-Japanese War (1937–1945), a theater of World War II, forced an uneasy alliance between the Kuomintang and the Communists. Japanese forces committed numerous war atrocities against the civilian population; in all, as many as 20 million Chinese civilians died. An estimated 40,000 to 300,000 Chinese were massacred in the city of Nanjing alone during the Japanese occupation. During the war, China, the UK, the United States, and the Soviet Union were referred to as the "trusteeship of the powerful" and were recognized as the Allied "Big Four" in the Declaration by United Nations. Along with the other three great powers, China was one of the four major Allies of World War II, and was later considered one of the primary victors in the war. After the surrender of Japan in 1945, Taiwan, including the Pescadores, was handed over to Chinese control. However, the validity of this handover is controversial: because of complex issues that arose from the handling of Japan's surrender, it is disputed whether Taiwan's sovereignty was legally transferred and whether China was a legitimate recipient. China emerged victorious but war-ravaged and financially drained. The continued distrust between the Kuomintang and the Communists led to the resumption of civil war. Constitutional rule was established in 1947, but because of the ongoing unrest, many provisions of the ROC constitution were never implemented in mainland China.

Civil War and the People's Republic

Before the existence of the People's Republic, the CCP had declared several areas of the country as the Chinese Soviet Republic (Jiangxi Soviet), a predecessor state to the PRC, in November 1931 in Ruijin, Jiangxi. The Jiangxi Soviet was wiped out by the KMT armies in 1934 and was relocated to Yan'an in Shaanxi, where the Long March concluded in 1935. It would be the base of the communists before major combat in the Chinese Civil War ended in 1949. Afterwards, the CCP took control of most of mainland China, while the Kuomintang retreated offshore to Taiwan, reducing its territory to only Taiwan, Hainan, and their surrounding islands. On 1 October 1949, CCP Chairman Mao Zedong formally proclaimed the establishment of the People's Republic of China at the new nation's founding ceremony and inaugural military parade in Tiananmen Square, Beijing. In 1950, the People's Liberation Army captured Hainan from the ROC and annexed Tibet.
However, remaining Kuomintang forces continued to wage an insurgency in western China throughout the 1950s. The government consolidated its popularity among the peasants through the Land Reform Movement, which included the execution of between 1 and 2 million landlords. China developed an independent industrial system and its own nuclear weapons. The Chinese population increased from 550 million in 1950 to 900 million in 1974. However, the Great Leap Forward, an idealistic massive industrialization project, resulted in an estimated 15 to 55 million deaths between 1959 and 1961, mostly from starvation. In 1964, China's first atomic bomb exploded successfully. In 1966, Mao and his allies launched the Cultural Revolution, sparking a decade of political recrimination and social upheaval that lasted until Mao's death in 1976. In October 1971, the PRC replaced the Republic of China in the United Nations, and took its seat as a permanent member of the Security Council. This UN action also created the problem of the political status of Taiwan and the Two Chinas issue. Reforms and contemporary history After Mao's death, the Gang of Four was quickly arrested by Hua Guofeng and held responsible for the excesses of the Cultural Revolution. Deng Xiaoping took power in 1978, and instituted large-scale political and economic reforms, together with the "Eight Elders", CCP members who held huge influence during this time. The CCP loosened governmental control over citizens' personal lives, and the communes were gradually disbanded in favor of working contracted to households. The Cultural Revolution was also rebuked, with millions of its victims being rehabilitated. Agricultural collectivization was dismantled and farmlands privatized, while foreign trade became a major new focus, leading to the creation of special economic zones (SEZs). Inefficient state-owned enterprises (SOEs) were restructured and unprofitable ones were closed outright, resulting in massive job losses. This marked China's transition from a planned economy to a mixed economy with an increasingly open-market environment. China adopted its current constitution on 4 December 1982. In 1989, the country saw large pro-democracy protests, eventually leading to the Tiananmen Square massacre by the leadership, bringing condemnations and sanctions against the Chinese government from various foreign countries, though the effect on external relations was short-lived. Jiang Zemin, Party secretary of Shanghai at the time, was selected to replace Zhao Ziyang as the CCP general secretary; Zhao was put under house arrest for his sympathies to the protests. Jiang later additionally took the presidency and Central Military Commission chairmanship posts, effectively becoming China's top leader. Li Peng, who was instrumental in the crackdown, remained premier until 1998, after which Zhu Rongji became the premier. Under their administration, China continued economic reforms, further closing many SOEs and massively trimming down "iron rice bowl"; occupations with guaranteed job security. During Jiang's rule, China's economy grew sevenfold, and its performance pulled an estimated 150 million peasants out of poverty and sustained an average annual gross domestic product growth rate of 11.2%. British Hong Kong and Portuguese Macau returned to China in 1997 and 1999, respectively, as the Hong Kong and Macau special administrative regions under the principle of one country, two systems. The country joined the World Trade Organization in 2001. 
Between 2002 and 2003, Hu Jintao and Wen Jiabao succeeded Jiang and Zhu as paramount leader and premier respectively; Jiang attempted to remain CMC chairman for longer before giving up the post entirely between 2004 and 2005. Under Hu and Wen, China maintained its high rate of economic growth, overtaking the United Kingdom, France, Germany and Japan to become the world's second-largest economy. However, the growth also severely impacted the country's resources and environment, and caused major social displacement. Hu and Wen also took a relatively more conservative approach towards economic reform, expanding support for SOEs. Additionally, under Hu, China hosted the Beijing Olympics in 2008. Xi Jinping and Li Keqiang succeeded Hu and Wen as paramount leader and premier respectively between 2012 and 2013; Li Keqiang was later succeeded by Li Qiang in 2023. Shortly after his ascension to power, Xi launched a vast anti-corruption crackdown that prosecuted more than 2 million officials by 2022. By heading many new Central Leading Groups that bypass the traditional bureaucracy, Xi consolidated power further than his predecessors. Xi has also pursued changes to China's economy, supporting SOEs and making the eradication of extreme poverty through "targeted poverty alleviation" a key goal. In 2013, Xi launched the Belt and Road Initiative, a global infrastructure investment project. Xi has also taken a more assertive stance on foreign and security issues. Since 2017, the Chinese government has been engaged in a harsh crackdown in Xinjiang, with an estimated one million people, mostly Uyghurs but also other ethnic and religious minorities, held in internment camps. The National People's Congress in 2018 amended the constitution to remove the two-term limit on holding the presidency, allowing for a third and further terms. In 2020, the Standing Committee of the National People's Congress (NPCSC) passed a national security law that gives the Hong Kong government wide-ranging powers to crack down on dissent. From December 2019 to December 2022, the COVID-19 pandemic led the government to enforce strict public health measures intended to completely eradicate the virus, a goal that was eventually abandoned after protests against the policy in 2022.

Geography

China's landscape is vast and diverse, ranging from the Gobi and Taklamakan Deserts in the arid north to the subtropical forests in the wetter south. The Himalaya, Karakoram, Pamir and Tian Shan mountain ranges separate China from much of South and Central Asia. The Yangtze and Yellow Rivers, the third- and sixth-longest in the world, respectively, run from the Tibetan Plateau to the densely populated eastern seaboard. China's long Pacific coastline is bounded by the Bohai, Yellow, East China and South China seas. China connects through the Kazakh border to the Eurasian Steppe, which has been an artery of communication between East and West since the Neolithic through the Steppe Route – the ancestor of the terrestrial Silk Road(s). The territory of China lies between latitudes 18° and 54° N, and longitudes 73° and 135° E. The geographical center of China is marked by the Center of the Country Monument. China's landscapes vary significantly across its vast territory. In the east, along the shores of the Yellow Sea and the East China Sea, there are extensive and densely populated alluvial plains, while on the edges of the Inner Mongolian plateau in the north, broad grasslands predominate.
Southern China is dominated by hills and low mountain ranges, while the central-east hosts the deltas of China's two major rivers, the Yellow River and the Yangtze River. Other major rivers include the Xi, Mekong, Brahmaputra and Amur. To the west sit major mountain ranges, most notably the Himalayas. High plateaus feature among the more arid landscapes of the north, such as the Taklamakan and the Gobi Desert. The world's highest point, Mount Everest (8,848 m), lies on the Sino-Nepalese border. The country's lowest point, and the world's third-lowest, is the dried lake bed of Ayding Lake (−154 m) in the Turpan Depression.

Climate

China's climate is mainly dominated by dry seasons and wet monsoons, which lead to pronounced temperature differences between winter and summer. In the winter, northern winds coming from high-latitude areas are cold and dry; in summer, southern winds from coastal areas at lower latitudes are warm and moist. A major environmental issue in China is the continued expansion of its deserts, particularly the Gobi Desert. Although barrier tree lines planted since the 1970s have reduced the frequency of sandstorms, prolonged drought and poor agricultural practices have resulted in dust storms plaguing northern China each spring, which then spread to other parts of East Asia, including Japan and Korea. China's environmental watchdog, SEPA, stated in 2007 that China is losing land to desertification every year. Water quality, erosion, and pollution control have become important issues in China's relations with other countries. Melting glaciers in the Himalayas could potentially lead to water shortages for hundreds of millions of people. According to academics, in order to limit climate change in China, electricity generation from coal without carbon capture must be phased out by 2045. With current policies, the GHG emissions of China will probably peak in 2025, and by 2030 they will return to 2022 levels. However, such a pathway would still lead to a temperature rise of about three degrees. Official government statistics about Chinese agricultural productivity are considered unreliable, due to exaggeration of production at subsidiary government levels. Much of China has a climate very suitable for agriculture, and the country has been the world's largest producer of rice, wheat, tomatoes, eggplant, grapes, watermelon, spinach, and many other crops.

Biodiversity

China is one of 17 megadiverse countries, lying in two of the world's major biogeographic realms: the Palearctic and the Indomalayan. By one measure, China has over 34,687 species of animals and vascular plants, making it the third-most biodiverse country in the world, after Brazil and Colombia. The country signed the Rio de Janeiro Convention on Biological Diversity on 11 June 1992, and became a party to the convention on 5 January 1993. It later produced a National Biodiversity Strategy and Action Plan, with one revision that was received by the convention on 21 September 2010. China is home to at least 551 species of mammals (the third-highest such number in the world), 1,221 species of birds (eighth), 424 species of reptiles (seventh) and 333 species of amphibians (seventh). Wildlife in China shares habitat with, and bears acute pressure from, the world's largest population of humans. At least 840 animal species are threatened, vulnerable or in danger of local extinction in China, due mainly to human activity such as habitat destruction, pollution and poaching for food, fur and ingredients for traditional Chinese medicine.
Endangered wildlife is protected by law, and the country has over 2,349 nature reserves, covering a total area of 149.95 million hectares, 15 percent of China's total land area. Most wild animals have been eliminated from the core agricultural regions of east and central China, but they have fared better in the mountainous south and west. The Baiji was confirmed extinct on 12 December 2006. China has over 32,000 species of vascular plants, and is home to a variety of forest types. Cold coniferous forests predominate in the north of the country, supporting animal species such as moose and Asian black bear, along with over 120 bird species. The understory of moist conifer forests may contain thickets of bamboo. In higher montane stands of juniper and yew, the bamboo is replaced by rhododendrons. Subtropical forests, which predominate in central and southern China, support a high density of plant species, including numerous rare endemics. Tropical and seasonal rainforests, though confined to Yunnan and Hainan, contain a quarter of all the animal and plant species found in China. China has over 10,000 recorded species of fungi, and of them, nearly 6,000 are higher fungi.

Environment

Since the early 2000s, China has suffered from environmental deterioration and pollution due to its rapid pace of industrialization. Regulations such as the 1979 Environmental Protection Law are fairly stringent, though they are poorly enforced, as they are frequently disregarded by local communities and government officials in favor of rapid economic development. China has the world's second-highest death toll from air pollution, after India, with approximately 1 million deaths caused by exposure to ambient air pollution. Although China ranks as the highest CO2-emitting country in the world, it only emits 8 tons of CO2 per capita, significantly lower than developed countries such as the United States (16.1), Australia (16.8) and South Korea (13.6). Greenhouse gas emissions by China are the world's largest. In recent years, China has clamped down on pollution. In March 2014, CCP General Secretary Xi Jinping "declared war" on pollution during the opening of the National People's Congress. In 2020, Xi announced that China aims to peak emissions before 2030 and go carbon-neutral by 2060 in accordance with the Paris Agreement, a goal which, according to Climate Action Tracker, would if accomplished lower the expected rise in global temperature by 0.2–0.3 degrees – "the biggest single reduction ever estimated by the Climate Action Tracker". In September 2021, Xi Jinping announced that China would not build "coal-fired power projects abroad", a decision that can be "pivotal" in reducing emissions; the Belt and Road Initiative had already excluded financing for such projects in the first half of 2021. The country also has significant water pollution problems; only 84.8% of China's national surface water was graded between Grades I and III by the Ministry of Ecology and Environment in 2021, indicating that it was suitable for human consumption. China had a 2018 Forest Landscape Integrity Index mean score of 7.14/10, ranking it 53rd globally out of 172 countries. In 2020, a sweeping law was passed by the Chinese government to protect the ecology of the Yangtze River.
The new laws include strengthening ecological protection rules for hydropower projects along the river, banning chemical plants within 1 kilometer of the river, relocating polluting industries, severely restricting sand mining, and imposing a complete fishing ban on all the natural waterways of the river, including all its major tributaries and lakes. China is also the world's leading investor in renewable energy and its commercialization, with $546 billion invested in 2022; it is a major manufacturer of renewable energy technologies and invests heavily in local-scale renewable energy projects. In 2022, 61.2% of China's electricity came from coal (largest producer in the world), 14.9% from hydroelectric power (largest), 9.3% from wind (largest), 4.7% from solar energy (largest), 4.7% from nuclear energy (second-largest), 3.1% from natural gas (fifth-largest), and 1.9% from bioenergy (largest); in total, 30.8% of China's electricity came from renewable sources. Despite its emphasis on renewables, China remains deeply connected to global oil markets and was, next to India, the largest importer of Russian crude oil in 2022.

Political geography

The People's Republic of China is the second-largest country in the world by land area after Russia, and the third or fourth largest country in the world by total area. China's total area is generally stated as being approximately 9.6 million square kilometres, with specific figures varying slightly among sources such as the Encyclopædia Britannica, the UN Demographic Yearbook, and The World Factbook. China has the longest combined land border in the world, and its coastline runs from the mouth of the Yalu River (Amnok River) to the Gulf of Tonkin. China borders 14 nations and covers the bulk of East Asia, bordering Vietnam, Laos, and Myanmar in Southeast Asia; India, Bhutan, Nepal, Pakistan and Afghanistan in South Asia; Tajikistan, Kyrgyzstan and Kazakhstan in Central Asia; and Russia, Mongolia, and North Korea in Inner Asia and Northeast Asia. It is narrowly separated from Bangladesh and Thailand to the southwest and south, and has several maritime neighbors such as Japan, the Philippines, Malaysia, and Indonesia.

Politics

The People's Republic of China is a one-party state governed by the Marxist–Leninist Chinese Communist Party (CCP). This makes China one of the world's last countries governed by a communist party. The Chinese constitution states that the PRC "is a socialist state governed by a people's democratic dictatorship that is led by the working class and based on an alliance of workers and peasants," and that the state institutions "shall practice the principle of democratic centralism." The main body of the constitution also declares that "the defining feature of socialism with Chinese characteristics is the leadership of the Communist Party of China." The PRC officially terms itself a democracy, using terms such as "socialist consultative democracy" and "whole-process people's democracy". However, the country is commonly described as an authoritarian one-party state and a dictatorship, with among the heaviest restrictions worldwide in many areas, most notably on freedom of the press, freedom of assembly, reproductive rights, the free formation of social organizations, freedom of religion and free access to the Internet. China has consistently been classified as an "authoritarian regime" and ranked among the lowest countries in the Economist Intelligence Unit's Democracy Index, placing 156th out of 167 countries in 2022.
Chinese Communist Party

According to the CCP constitution, the party's highest body is the National Congress, held every five years. The National Congress elects the Central Committee, which then elects the party's Politburo, Politburo Standing Committee and the general secretary (party leader), the top leadership of the country. The general secretary holds ultimate power and authority over state and government and serves as the informal paramount leader. The current general secretary is Xi Jinping, who took office on 15 November 2012. At the local level, the secretary of the CCP committee of a subdivision outranks the head of the corresponding local government: the CCP committee secretary of a provincial division outranks the governor, while the CCP committee secretary of a city outranks the mayor. The CCP is officially guided by "socialism with Chinese characteristics", which is Marxism adapted to Chinese circumstances.

Government

The government in China is under the sole control of the CCP, with the CCP constitution outlining the party as the "highest force for political leadership". The CCP controls appointments in government bodies, with most senior government officials being CCP members. The National People's Congress (NPC), the nearly 3,000-member legislature, is constitutionally the "highest state organ of power", though it has also been described as a "rubber stamp" body. The NPC meets annually, while the NPC Standing Committee, an approximately 150-member body elected from NPC delegates, meets every couple of months. Elections are indirect and not pluralistic, with nominations at all levels being controlled by the CCP. The NPC is dominated by the CCP, with eight other minor parties having nominal representation under the condition of upholding CCP leadership. The president is the ceremonial state representative, elected by the NPC. The incumbent president is Xi Jinping, who is also the general secretary of the CCP and the chairman of the Central Military Commission, making him China's paramount leader. The premier is the head of government, with Li Qiang being the incumbent premier. The premier is officially nominated by the president and then elected by the NPC, and has generally been either the second- or third-ranking member of the Politburo Standing Committee (PSC). The premier presides over the State Council, China's cabinet, composed of four vice premiers, state councilors, and the heads of ministries and commissions. The Chinese People's Political Consultative Conference (CPPCC) is a political advisory body that plays a critical role in China's "united front" system, which aims to gather non-CCP voices to support the CCP. Like the people's congresses, CPPCCs exist at the various levels of administrative division, with the National Committee of the CPPCC being chaired by Wang Huning, the fourth-ranking member of the PSC. The governance of China is characterized by a high degree of political centralization but significant economic decentralization. The central government sets the strategic direction while local officials carry it out. Policy instruments or processes are often tested locally before being applied more widely, resulting in a policy process that involves experimentation and feedback. Generally, high-level central government leadership refrains from drafting specific policies, instead using informal networks and site visits to affirm or suggest changes to the direction of local policy experiments or pilot programs.
The typical approach is that high-level central government leadership begins drafting formal policies, laws, or regulations after policy has been developed at local levels.

Administrative divisions

The PRC is constitutionally a unitary state officially divided into 23 provinces, five autonomous regions (each with a designated minority group), and four directly administered municipalities—collectively referred to as "mainland China"—as well as the special administrative regions (SARs) of Hong Kong and Macau. The PRC considers Taiwan to be its 23rd province, although it is governed by the Republic of China (ROC), which claims to be the legitimate representative of China and its territory, though it has downplayed this claim since its democratization. Geographically, all 31 provincial divisions of mainland China can be grouped into six regions: North China, Northeast China, East China, South Central China, Southwest China, and Northwest China.

Foreign relations

The PRC has diplomatic relations with 179 United Nations member states and maintains embassies in 174. Since 2019, China has had the largest diplomatic network in the world. In 1971, the PRC replaced the Republic of China (ROC) as the sole representative of China in the United Nations and as one of the five permanent members of the United Nations Security Council. It is a member of intergovernmental organizations including the G20, East Asia Summit, and APEC. China is a former member and leader of the Non-Aligned Movement, and still considers itself an advocate for developing countries. Along with Brazil, Russia, India and South Africa, China is a member of the BRICS group of emerging major economies and hosted the group's third official summit at Sanya, Hainan in April 2011. The PRC officially maintains the one-China principle, which holds that there is only one sovereign state under the name of China, represented by the PRC, and that Taiwan is part of that China. The unique status of Taiwan has led countries that recognize the PRC to maintain their own "one-China policies", which differ from one another; some countries explicitly recognize the PRC's claim over Taiwan, while others, including the US and Japan, only acknowledge the claim. Chinese officials have protested on numerous occasions when foreign countries have made diplomatic overtures to Taiwan, especially in the matter of armament sales. Most countries have switched recognition from the ROC to the PRC since the latter replaced the former in the United Nations in 1971 as the sole representative of China. Much of current Chinese foreign policy is reportedly based on Premier Zhou Enlai's Five Principles of Peaceful Coexistence, and is also driven by the concept of "harmony without uniformity", which encourages diplomatic relations between states despite ideological differences. This policy may have led China to support or maintain close ties with states that are regarded as dangerous or repressive by Western nations, such as Myanmar, North Korea and Iran. China has a close political, economic and military relationship with Russia, and the two states often vote in unison in the United Nations Security Council.

Trade relations

China became the world's largest trading nation in 2013 as measured by the sum of imports and exports, as well as the world's largest commodity importer, comprising roughly 45% of the maritime dry-bulk market. By 2016, China was the largest trading partner of 124 other countries.
China is the largest trading partner for the ASEAN nations, with a total trade value of $669.2 billion in 2021 accounting for 20% of ASEAN's total trade. ASEAN is also China's largest trading partner. In 2020, China became the largest trading partner of the European Union for goods, with the total value of goods trade reaching nearly $700 billion. China, along with ASEAN, Japan, South Korea, Australia and New Zealand, is a member of the Regional Comprehensive Economic Partnership, the world's largest free-trade area covering 30% of the world's population and economic output. China became a member of the World Trade Organization (WTO) in 2001. In 2004, it proposed an entirely new East Asia Summit (EAS) framework as a forum for regional security issues. The EAS, which includes ASEAN Plus Three, India, Australia and New Zealand, held its inaugural summit in 2005. China has had a long and complex trade relationship with the United States. In 2000, the United States Congress approved "permanent normal trade relations" (PNTR) with China, allowing Chinese exports in at the same low tariffs as goods from most other countries. China has a significant trade surplus with the United States, one of its most important export markets. Economists have argued that the renminbi is undervalued, due to currency intervention from the Chinese government, giving China an unfair trade advantage. The US and other foreign governments have also alleged that China does not respect intellectual property (IP) rights and steals IP through espionage operations, with the US Department of Justice saying that 80% of all the prosecutions related to economic espionage it brings were about conduct to benefit the Chinese state. Since the turn of the century, China has followed a policy of engaging with African nations for trade and bilateral co-operation; in 2022, Sino-African trade totalled $282 billion, having grown more than 20 times over two decades. According to Madison Condon "China finances more infrastructure projects in Africa than the World Bank and provides billions of dollars in low-interest loans to the continent's emerging economies." China maintains extensive and highly diversified trade links with the European Union, and became its largest trading partner for goods, with the total value of goods trade reaching nearly $700 billion. China has furthermore strengthened its trade ties with major South American economies, and is the largest trading partner of Brazil, Chile, Peru, Uruguay, Argentina, and several others. In 2013, China initiated the Belt and Road Initiative (BRI), a large global infrastructure building initiative with funding on the order of $50–100 billion per year. BRI could be one of the largest development plans in modern history. It has expanded significantly over the last six years and, , includes 138 countries and 30 international organizations. In addition to intensifying foreign policy relations, the focus here is particularly on building efficient transport routes. The focus is particularly on the maritime Silk Road with its connections to East Africa and Europe and there are Chinese investments or related declarations of intent at numerous ports such as Gwadar, Kuantan, Hambantota, Piraeus and Trieste. However many of these loans made under the Belt and Road program are unsustainable and China has faced a number of calls for debt relief from debtor nations. 
Territorial disputes Ever since its establishment, the PRC has claimed the territories governed by the Republic of China (ROC), a separate political entity today commonly known as Taiwan, as a part of its territory. It regards the island of Taiwan as its Taiwan Province, Kinmen and Matsu as a part of Fujian Province, and the islands the ROC controls in the South China Sea as a part of Hainan Province and Guangdong Province. These claims are controversial because of the complicated cross-strait relations, with the PRC treating the one-China principle as one of its most important diplomatic principles in dealing with other countries. China has resolved its land borders with 12 out of 14 neighboring countries, having pursued substantial compromises in most of them. China currently has disputed land borders with India and Bhutan. China is additionally involved in maritime disputes with multiple countries over the ownership of several small islands in the East and South China Seas, such as Socotra Rock, the Senkaku Islands and the entirety of the South China Sea Islands, along with EEZ disputes in the East China Sea. Sociopolitical issues and human rights The situation of human rights in China has attracted significant criticism from a number of foreign governments, foreign press agencies, and non-governmental organizations, alleging widespread civil rights violations such as detention without trial, forced confessions, torture, restrictions of fundamental rights, and excessive use of the death penalty. Since its inception, Freedom House has ranked China as "not free" in its Freedom in the World survey, while Amnesty International has documented significant human rights abuses. The Constitution of the People's Republic of China states that the "fundamental rights" of citizens include freedom of speech, freedom of the press, the right to a fair trial, freedom of religion, universal suffrage, and property rights. However, in practice, these provisions do not afford significant protection against criminal prosecution by the state. China has limited protections regarding LGBT rights. Although some criticisms of government policies and the ruling CCP are tolerated, censorship of political speech and information is amongst the harshest in the world and is routinely used to prevent collective action. China also has the most comprehensive and sophisticated Internet censorship regime in the world, with numerous websites being blocked. The government suppresses popular protests and demonstrations that it considers a potential threat to "social stability", as was the case with the 1989 Tiananmen Square protests and massacre. China additionally operates a massive surveillance network of cameras, facial recognition software, sensors, and monitoring of personal technology as a means of social control of persons living in the country. China is regularly accused of large-scale repression and human rights abuses in Tibet and Xinjiang, where significant numbers of ethnic minorities reside, including violent police crackdowns and religious suppression. In Xinjiang, repression has significantly escalated since 2016, after which at least one million Uyghurs and other ethnic and religious minorities have been detained in internment camps aimed at changing the political thinking of detainees, their identities, and their religious beliefs. According to witnesses, actions including political indoctrination, torture, physical and psychological abuse, forced sterilization, sexual abuse, and forced labor are common in these facilities. 
According to a 2020 report, China's treatment of Uyghurs meets the UN definition of genocide, while a separate UN Human Rights Office report said the abuses could potentially meet the definition of crimes against humanity. Global studies from Pew Research Center in 2014 and 2017 ranked the Chinese government's restrictions on religion as among the highest in the world, despite low to moderate rankings for religion-related social hostilities in the country. The Global Slavery Index estimated that in 2016 more than 3.8 million people were living in "conditions of modern slavery", or 0.25% of the population, including victims of human trafficking, forced labor, forced marriage, child labor, and state-imposed forced labor. The state-imposed re-education through labor (laojiao) system was formally abolished in 2013, but it is not clear to what extent its various practices have stopped. The Chinese penal system also includes the much larger reform through labor (laogai) system, which includes labor prison factories, detention centers, and re-education camps; the Laogai Research Foundation estimated in June 2008 that there were approximately 1,422 of these facilities, though it cautioned that this number was likely an underestimate. Public views of government Political concerns in China include the growing gap between rich and poor and government corruption. Nonetheless, international surveys show a high level of public satisfaction with the government among Chinese citizens. These views are generally attributed to the material comforts and security available to large segments of the Chinese populace as well as the government's attentiveness and responsiveness. According to the World Values Survey (2017–2020), 95% of Chinese respondents have significant confidence in their government. Confidence decreased to 91% in the survey's 2022 edition. A Harvard University survey published in July 2020 found that citizen satisfaction with the government had increased since 2003, also rating China's government as more effective and capable than ever before in the survey's history. Military The People's Liberation Army (PLA) is considered one of the world's most powerful militaries and has rapidly modernized in recent decades. It consists of the Ground Force (PLAGF), the Navy (PLAN), the Air Force (PLAAF), the Rocket Force (PLARF) and the Strategic Support Force (PLASSF). With nearly 2.2 million active-duty personnel, it is the largest military force in the world. The PLA holds the world's third-largest stockpile of nuclear weapons and operates the world's second-largest navy by tonnage. China's official military budget for 2022 totalled US$230 billion (1.45 trillion yuan), the second-largest in the world, though SIPRI estimates that its real expenditure that year was US$292 billion. According to SIPRI, its military spending from 2012 to 2021 averaged US$215 billion per year or 1.7 per cent of GDP, behind only the United States at US$734 billion per year or 3.6 per cent of GDP. The PLA is commanded by the Central Military Commission (CMC) of the party and the state; though officially two separate organizations, the two CMCs have identical membership except during leadership transition periods and effectively function as one organization. The chairman of the CMC is the commander-in-chief of the PLA, with the officeholder also generally being the CCP general secretary, making them the paramount leader of China. 
Economy China has the world's second-largest economy in terms of nominal GDP, and the world's largest in terms of purchasing power parity (PPP). China accounts for around 18% of the global economy by nominal GDP. China is one of the world's fastest-growing major economies, with its economic growth having been almost consistently above 6 percent since the introduction of economic reforms in 1978. According to the World Bank, China's GDP grew from $150 billion in 1978 to $17.96 trillion by 2022. Of the world's 500 largest companies, 142 are headquartered in China. China was one of the world's foremost economic powers throughout the arc of East Asian and global history. The country had one of the largest economies in the world for most of the past two millennia, during which it has seen cycles of prosperity and decline. Since economic reforms began in 1978, China has developed into a highly diversified economy and one of the most consequential players in international trade. Major sectors of competitive strength include manufacturing, retail, mining, steel, textiles, automobiles, energy generation, green energy, banking, electronics, telecommunications, real estate, e-commerce, and tourism. China has three of the ten largest stock exchanges in the world—Shanghai, Hong Kong and Shenzhen—which together have a market capitalization of over $15.9 trillion. China has four of the world's top ten most competitive financial centers (Shanghai, Hong Kong, Beijing, and Shenzhen), more than any other country in the 2020 Global Financial Centres Index. Modern-day China is often described as an example of state capitalism or party-state capitalism. In 1992, Jiang Zemin termed the country a socialist market economy. Others have described it as a form of Marxism–Leninism adapted to co-exist with global capitalism. The state dominates in strategic "pillar" sectors such as energy production and heavy industries, but private enterprise has expanded enormously, with around 30 million private businesses recorded in 2008. According to official statistics, privately owned companies constitute more than 60% of China's GDP. China has been the world's largest manufacturing nation since 2010, after overtaking the US, which had been the largest for the previous hundred years. China has also been the world's second-largest high-tech manufacturer since 2012, according to the US National Science Foundation. China is the second-largest retail market in the world, after the United States. China leads the world in e-commerce, accounting for over 37% of the global market share in 2021. China is the world's leader in electric vehicle consumption and production, manufacturing and buying half of all the plug-in electric cars (BEV and PHEV) in the world. China is also the leading producer of batteries for electric vehicles as well as several key raw materials for batteries. Although China has long relied heavily on non-renewable energy sources such as coal, its adoption of renewable energy has increased significantly in recent years, with the renewable share rising from 26.3 percent in 2016 to 31.9 percent in 2022. Wealth China accounted for 17.9% of the world's total wealth in 2021, the second-highest share in the world after the US. It ranks 64th in GDP (nominal) per capita, making it an upper-middle-income country. Though China used to make up much of the world's poor, it now makes up much of the world's middle class. 
China brought more people out of extreme poverty than any other country in history—between 1978 and 2018, China reduced the number of people living in extreme poverty by 800 million. China reduced the extreme poverty rate—defined by the international standard as an income of less than $1.90 per day—from 88% in 1981 to 1.85% by 2013. The portion of people in China living below the international poverty line of $1.90 per day (2011 PPP) fell to 0.3% in 2018 from 66.3% in 1990. Using the lower-middle income poverty line of $3.20 per day, the portion fell to 2.9% in 2018 from 90.0% in 1990. Using the upper-middle income poverty line of $5.50 per day, the portion fell to 17.0% from 98.3% in 1990. From 1978 to 2018, the average standard of living multiplied by a factor of twenty-six. Wages in China have grown substantially over the last 40 years—real (inflation-adjusted) wages grew seven-fold from 1978 to 2007. Per capita incomes have risen significantly – when the PRC was founded in 1949, per capita income in China was one-fifth of the world average; per capita incomes now equal the world average. China's development is highly uneven. Its major cities and coastal areas are far more prosperous compared to rural and interior regions. It has a high level of economic inequality, which increased quickly after the economic reforms, though it has decreased significantly in the 2010s. In 2019, China's Gini coefficient was 0.382, according to the World Bank. China was second in the world, after the US, in the total number of billionaires and the total number of millionaires, with 495 Chinese billionaires and 6.2 million millionaires. In 2019, China overtook the US as the home to the highest number of people who have a net personal wealth of at least $110,000, according to the global wealth report by Credit Suisse. According to the Hurun Global Rich List 2020, China is home to five of the world's top ten cities by number of billionaires (Beijing, Shanghai, Hong Kong, Shenzhen, and Guangzhou in the 1st, 3rd, 4th, 5th, and 10th spots, respectively), more than any other country. China had 85 female billionaires, two-thirds of the global total, and minted 24 new female billionaires in 2020. China has had the world's largest middle-class population since 2015, and the middle class grew to a size of 400 million by 2018. China in the global economy China is a member of the WTO and is the world's largest trading power, with a total international trade value of US$6.3 trillion in 2022. China is the world's largest exporter and second-largest importer of goods. Its foreign exchange reserves reached US$3.128 trillion, by far the world's largest. In 2022, China was among the world's largest recipients of inward foreign direct investment (FDI), attracting $180 billion, though much of this was speculated to come from Hong Kong. In 2021, China's inbound foreign exchange remittances were US$53 billion, making it the second-largest recipient of remittances in the world. China also invests abroad, with a total outward FDI of $62.4 billion in 2012, and a number of major takeovers of foreign firms by Chinese companies. China is a major owner of US public debt, holding trillions of dollars worth of U.S. Treasury bonds. China's undervalued exchange rate has caused friction with other major economies, and it has also been widely criticized for manufacturing large quantities of counterfeit goods. In 2020, Harvard University's Economic Complexity Index ranked the complexity of China's exports 17th in the world, up from 24th in 2010. 
Following the 2007–08 financial crisis, Chinese authorities sought to actively wean the country off its dependence on the U.S. dollar as a result of perceived weaknesses of the international monetary system. To achieve those ends, China took a series of actions to further the internationalization of the renminbi. In 2008, China established the dim sum bond market and expanded the Cross-Border Trade RMB Settlement Pilot Project, which helps establish pools of offshore RMB liquidity. This was followed by bilateral agreements to settle trades directly in renminbi with Russia, Japan, Australia, Singapore, the United Kingdom, and Canada. As a result of the rapid internationalization of the renminbi, it became the eighth-most-traded currency in the world by 2018, an emerging international reserve currency, and a component of the IMF's special drawing rights; however, partly due to capital controls that make the renminbi fall short of being a fully convertible currency, it remains far behind the euro, the US dollar and the Japanese yen in international trade volumes. The yuan is currently the world's fifth-most traded currency. Science and technology Historical China was a world leader in science and technology until the Ming dynasty. Ancient and medieval Chinese discoveries and inventions, such as papermaking, printing, the compass, and gunpowder (the Four Great Inventions), became widespread across East Asia, the Middle East and later Europe. Chinese mathematicians were the first to use negative numbers. By the 17th century, the Western world had surpassed China in scientific and technological advancement. The causes of this early modern Great Divergence continue to be debated by scholars. After repeated military defeats by the European colonial powers and Japan in the 19th century, Chinese reformers began promoting modern science and technology as part of the Self-Strengthening Movement. After the Communists came to power in 1949, efforts were made to organize science and technology based on the model of the Soviet Union, in which scientific research was part of central planning. After Mao's death in 1976, science and technology were promoted as one of the Four Modernizations, and the Soviet-inspired academic system was gradually reformed. Modern era Since the end of the Cultural Revolution, China has made significant investments in scientific research and is quickly catching up with the US in R&D spending. China officially spent around 2.4% of its GDP on R&D in 2020, totaling around $377.8 billion. According to the World Intellectual Property Indicators, China received more patent applications than the US did in 2018 and 2019 and ranked first globally in patents, utility models, trademarks, industrial designs, and creative goods exports in 2021. It was ranked 12th in the Global Innovation Index in 2023, a considerable improvement from its rank of 35th in 2013. Chinese supercomputers have been ranked the fastest in the world on a few occasions; however, these supercomputers rely on critical components, namely processors, that are designed abroad and imported from outside of China. China has also struggled with developing several technologies domestically, such as the most advanced semiconductors and reliable jet engines. China is developing its education system with an emphasis on science, technology, engineering, and mathematics (STEM). It became the world's largest publisher of scientific papers in 2016. 
Chinese-born academics have won prestigious prizes in the sciences and in mathematics, although most of them conducted their prize-winning research in Western nations. Space program The Chinese space program started in 1958 with some technology transfers from the Soviet Union. However, it did not launch the nation's first satellite, the Dong Fang Hong I, until 1970, which made China the fifth country to do so independently. In 2003, China became the third country in the world to independently send humans into space with Yang Liwei's spaceflight aboard Shenzhou 5. As of 2023, eighteen Chinese nationals have journeyed into space, including two women. In 2011, China launched its first space station testbed, Tiangong-1. In 2013, the Chinese robotic rover Yutu successfully touched down on the lunar surface as part of the Chang'e 3 mission. In 2019, China became the first country to land a probe—Chang'e 4—on the far side of the Moon. In 2020, Chang'e 5 successfully returned Moon samples to the Earth, making China the third country to do so independently after the United States and the Soviet Union. In 2021, China became the third country to land a spacecraft on Mars and the second one to deploy a rover (Zhurong) on Mars, after the United States. China completed its own modular space station, the Tiangong, in low Earth orbit on 3 November 2022. On 29 November 2022, China performed its first in-orbit crew handover aboard the Tiangong. In May 2023, China announced a plan to land humans on the Moon by 2030. Infrastructure After a decades-long infrastructural boom, China has produced numerous world-leading infrastructural projects: China has the world's largest high-speed rail network, the most supertall skyscrapers in the world, the world's largest power plant (the Three Gorges Dam), and a global satellite navigation system with the largest number of satellites in the world. Telecommunications China is the largest telecom market in the world and currently has the largest number of active cellphones of any country in the world, with over 1.69 billion subscribers. It also has the world's largest number of internet and broadband users, with over 1.05 billion internet users—equivalent to around 73.7% of its population—almost all of whom are also mobile internet users. By 2018, China had more than 1 billion 4G users, accounting for 40% of the world's total. China is making rapid advances in 5G—by late 2018, China had started large-scale and commercial 5G trials. China has over 500 million 5G users and 1.45 million base stations installed. China Mobile, China Unicom and China Telecom are the three largest providers of mobile and internet services in China. China Telecom alone served more than 145 million broadband subscribers and 300 million mobile users; China Unicom had about 300 million subscribers; and China Mobile, the largest of them all, had 925 million users. Combined, the three operators had over 3.4 million 4G base stations in China. Several Chinese telecommunications companies, most notably Huawei and ZTE, have been accused of spying for the Chinese military. China has developed its own satellite navigation system, dubbed BeiDou, which began offering commercial navigation services across Asia in 2012 and global services by the end of 2018. Upon the completion of the 35th BeiDou satellite, which was launched into orbit on 23 June 2020, BeiDou followed GPS and GLONASS as the third completed global navigation satellite system in the world. 
Transport Since the late 1990s, China's national road network has been significantly expanded through the creation of a network of national highways and expressways. By 2018, China's highway network had become the longest highway system in the world. China has the world's largest market for automobiles, having surpassed the United States in both auto sales and production. The country has also become a large exporter of automobiles, being the world's second-largest exporter of cars in 2022 after Japan. In early 2023, China overtook Japan to become the world's largest exporter of cars. A side-effect of the rapid growth of China's road network has been a significant rise in traffic accidents, though the number of fatalities in traffic accidents fell by 20% from 2007 to 2017. In urban areas, bicycles remain a common mode of transport, despite the increasing prevalence of automobiles; there are approximately 470 million bicycles in China. China's railways, which are operated by the state-owned China State Railway Group Company, are among the busiest in the world, handling a quarter of the world's rail traffic volume on only 6 percent of the world's tracks in 2006. The country has the second-longest railway network in the world. The railways strain to meet enormous demand, particularly during the Chinese New Year holiday, when the world's largest annual human migration takes place. China's high-speed rail (HSR) system started construction in the early 2000s. By the end of 2022, China's dedicated high-speed rail lines formed the longest HSR network in the world. Services on the Beijing–Shanghai, Beijing–Tianjin, and Chengdu–Chongqing lines are the fastest conventional high-speed railway services in the world. With an annual ridership of over 2.29 billion passengers in 2019, it is the world's busiest. The network includes the Beijing–Guangzhou high-speed railway, the single longest HSR line in the world, and the Beijing–Shanghai high-speed railway, which has three of the longest railroad bridges in the world. The Shanghai maglev train is the fastest commercial train service in the world. Since 2000, the growth of rapid transit systems in Chinese cities has accelerated: 44 Chinese cities have urban mass transit systems in operation and 39 more have metro systems approved. China has the five longest metro systems in the world, with the networks in Shanghai, Beijing, Guangzhou, Chengdu and Shenzhen being the largest. There were approximately 241 airports in 2021. China has over 2,000 river and seaports, about 130 of which are open to foreign shipping. In 2021, the Ports of Shanghai, Ningbo-Zhoushan, Shenzhen, Guangzhou, Qingdao, Tianjin and Hong Kong ranked in the top 10 in the world in container traffic and cargo tonnage. Water supply and sanitation Water supply and sanitation infrastructure in China is facing challenges such as rapid urbanization, as well as water scarcity, contamination, and pollution. According to data presented by the Joint Monitoring Program for Water Supply and Sanitation of the World Health Organization (WHO) and UNICEF in 2015, about 36% of the rural population in China still did not have access to improved sanitation. The ongoing South–North Water Transfer Project intends to abate the water shortage in the north. Demographics The national census of 2020 recorded the population of the People's Republic of China as approximately 1,411,778,724. 
According to the 2020 census, about 17.95% of the population were 14 years old or younger, 63.35% were between 15 and 59 years old, and 18.7% were over 60 years old. Between 2010 and 2020, the average population growth rate was 0.53%. Given concerns about population growth, China implemented a two-child limit during the 1970s, and, in 1979, began to advocate for an even stricter limit of one child per family. Beginning in the mid-1980s, however, given the unpopularity of the strict limits, China began to allow some major exemptions, particularly in rural areas, resulting in what was actually a "1.5"-child policy from the mid-1980s to 2015 (ethnic minorities were also exempt from one-child limits). The next major loosening of the policy was enacted in December 2013, allowing families to have two children if one parent is an only child. In 2016, the one-child policy was replaced by a two-child policy. A three-child policy was announced on 31 May 2021, due to population aging, and in July 2021, all family size limits as well as penalties for exceeding them were removed. According to data from the 2020 census, China's total fertility rate is 1.3, but some experts believe that after adjusting for the transient effects of the relaxation of restrictions, the country's actual total fertility rate is as low as 1.1. In 2023, the National Bureau of Statistics estimated that the population fell by 850,000 from 2021 to 2022, the first decline since 1961. According to one group of scholars, one-child limits had little effect on population growth or the size of the total population. However, these scholars have been challenged. Their own counterfactual model of fertility decline without such restrictions implies that China averted more than 500 million births between 1970 and 2015, a number which may reach one billion by 2060 given all the lost descendants of births averted during the era of fertility restrictions, with one-child restrictions accounting for the great bulk of that reduction. The policy, along with traditional preference for boys, may have contributed to an imbalance in the sex ratio at birth. According to the 2020 census, the sex ratio at birth was 105.07 boys for every 100 girls, slightly above the normal range of around 105 boys for every 100 girls. The 2020 census found that males accounted for 51.24 percent of the total population. However, China's sex ratio is more balanced than it was in 1953, when males accounted for 51.82 percent of the total population. Ethnic groups China legally recognizes 56 distinct ethnic groups, who altogether comprise the Zhonghua minzu. The largest of these nationalities are the Han Chinese, who constitute more than 91% of the total population. The Han Chinese – the world's largest single ethnic group – outnumber other ethnic groups in every provincial-level division except Tibet and Xinjiang. Ethnic minorities account for less than 10% of the population of China, according to the 2020 census. Compared with the 2010 population census, the Han population increased by 60,378,693 persons, or 4.93%, while the population of the 55 national minorities combined increased by 11,675,179 persons, or 10.26%. The 2020 census recorded a total of 845,697 foreign nationals living in mainland China. Languages There are as many as 292 living languages in China. 
The languages most commonly spoken belong to the Sinitic branch of the Sino-Tibetan language family, which contains Mandarin (spoken by 80% of the population), and other varieties of Chinese: Yue (including Cantonese and Taishanese), Wu (including Shanghainese and Suzhounese), Min (including Fuzhounese, Hokkien and Teochew), Xiang, Gan and Hakka. Languages of the Tibeto-Burman branch, including Tibetan, Qiang, Naxi and Yi, are spoken across the Tibetan and Yunnan–Guizhou Plateau. Other ethnic minority languages in southwestern China include Zhuang, Thai, Dong and Sui of the Tai-Kadai family, Miao and Yao of the Hmong–Mien family, and Wa of the Austroasiatic family. Across northeastern and northwestern China, local ethnic groups speak Altaic languages including Manchu, Mongolian and several Turkic languages: Uyghur, Kazakh, Kyrgyz, Salar and Western Yugur. Korean is spoken natively along the border with North Korea. Sarikoli, the language of Tajiks in western Xinjiang, is an Indo-European language. Taiwanese indigenous peoples, including a small population on the mainland, speak Austronesian languages. Standard Mandarin, a variety of Mandarin based on the Beijing dialect, is the official national language of China and is used as a lingua franca in the country between people of different linguistic backgrounds. Mongolian, Uyghur, Tibetan, Zhuang and various other languages are also regionally recognized throughout the country. Urbanization China has urbanized significantly in recent decades. The percentage of the country's population living in urban areas increased from 20% in 1980 to over 64% in 2021. It is estimated that China's urban population will reach one billion by 2030, potentially equivalent to one-eighth of the world population. China has over 160 cities with a population of over one million, including the 17 megacities (cities with a population of over 10 million) of Chongqing, Shanghai, Beijing, Chengdu, Guangzhou, Shenzhen, Tianjin, Xi'an, Suzhou, Zhengzhou, Wuhan, Hangzhou, Linyi, Shijiazhuang, Dongguan, Qingdao and Changsha. Among them, the total permanent population of Chongqing, Shanghai, Beijing and Chengdu is above 20 million. Shanghai is China's most populous urban area, while Chongqing is its largest city proper and the only city in China with a permanent population of over 30 million. By 2025, it is estimated that the country will be home to 221 cities with over a million inhabitants. The figures in the table below are from the 2017 census, and are only estimates of the urban populations within administrative city limits; a different ranking exists when considering the total municipal populations (which include suburban and rural populations). The large "floating populations" of migrant workers make conducting censuses in urban areas difficult; the figures below include only long-term residents. Education Since 1986, compulsory education in China has comprised primary and junior secondary school, which together last for nine years. In 2021, about 91.4 percent of students continued their education at a three-year senior secondary school. The Gaokao, China's national university entrance exam, is a prerequisite for entrance into most higher education institutions. Some 58.42 percent of secondary school graduates were enrolled in higher education. Vocational education is available to students at the secondary and tertiary level. More than 10 million Chinese students graduate from vocational colleges nationwide every year. 
China has the largest education system in the world, with about 282 million students and 17.32 million full-time teachers in over 530,000 schools. Annual education investment went from less than US$50 billion in 2003 to more than US$817 billion in 2020. However, there remains an inequality in education spending. In 2010, the annual education expenditure per secondary school student in Beijing totalled ¥20,023, while in Guizhou, one of the poorest provinces in China, it totalled only ¥3,204. Free compulsory education in China consists of primary school and junior secondary school between the ages of 6 and 15. In 2021, the graduation enrollment ratio at compulsory education level reached 95.4 percent, and around 91.4% of Chinese have received secondary education. China's literacy rate has grown dramatically, from only 20% in 1949 and 65.5% in 1979 to 97% of the population over age 15 in 2020. In the same year, Beijing, Shanghai, Jiangsu, and Zhejiang, amongst the most affluent regions in China, were ranked the highest in the world in the Programme for International Student Assessment ranking for all three categories of Mathematics, Science and Reading. China has over 3,000 universities, with over 44.3 million students enrolled in mainland China and 240 million Chinese citizens having received higher education, making China's higher education system the largest in the world. China had the world's second-highest number of top universities (the highest in the Asia and Oceania region). Currently, China trails only the United States in terms of representation on lists of the top 200 universities according to the Academic Ranking of World Universities (ARWU). China is home to two of the highest-ranking universities in Asia and among emerging economies (Tsinghua University and Peking University), according to the Times Higher Education World University Rankings. Two universities in mainland China, Peking University (12th) and Tsinghua University (14th), rank in the world's top 15, and three other universities rank in the world's top 50, namely Fudan, Zhejiang, and Shanghai Jiao Tong, according to the QS World University Rankings. These universities are members of the C9 League, an alliance of elite Chinese universities offering comprehensive and leading education. Health The National Health and Family Planning Commission, together with its counterparts in the local commissions, oversees the health needs of the Chinese population. An emphasis on public health and preventive medicine has characterized Chinese health policy since the early 1950s. At that time, the Communist Party started the Patriotic Health Campaign, which was aimed at improving sanitation and hygiene, as well as treating and preventing several diseases. Diseases such as cholera, typhoid and scarlet fever, which were previously rife in China, were nearly eradicated by the campaign. After Deng Xiaoping began instituting economic reforms in 1978, the health of the Chinese public improved rapidly because of better nutrition, although many of the free public health services provided in the countryside disappeared along with the People's Communes. Healthcare in China became mostly privatized, and experienced a significant rise in quality. In 2009, the government began a three-year large-scale healthcare provision initiative worth US$124 billion. By 2011, the campaign resulted in 95% of China's population having basic health insurance coverage. 
By 2022, China had established itself as a key producer and exporter of pharmaceuticals, with the country alone producing around 40 percent of the world's active pharmaceutical ingredients in 2017. The life expectancy at birth in China is 78 years, and the infant mortality rate is 5 per thousand (in 2021). Both have improved significantly since the 1950s. Rates of stunting, a condition caused by malnutrition, have declined from 33.1% in 1990 to 9.9% in 2010. Despite significant improvements in health and the construction of advanced medical facilities, China has several emerging public health problems, such as respiratory illnesses caused by widespread air pollution, hundreds of millions of cigarette smokers, and an increase in obesity among urban youths. China's large population and densely populated cities have led to serious disease outbreaks in recent years, such as the 2003 outbreak of SARS, although this has since been largely contained. In 2010, air pollution caused 1.2 million premature deaths in China. The COVID-19 pandemic was first identified in Wuhan in December 2019. Further studies are being carried out around the world on a possible origin for the virus. Beijing says it has been sharing Covid data "in a timely, open and transparent manner in accordance with the law". According to U.S. officials, the Chinese government concealed the extent of the outbreak before it became an international pandemic. Religion The government of the People's Republic of China and the Chinese Communist Party both officially espouse state atheism, and have conducted antireligious campaigns to this end. Religious affairs and issues in the country are overseen by the CCP's United Front Work Department. Freedom of religion is guaranteed by China's constitution, although religious organizations that lack official approval can be subject to state persecution. Over the millennia, Chinese civilization has been influenced by various religious movements. The "three teachings", including Confucianism, Taoism, and Buddhism (Chinese Buddhism), have historically had a significant role in shaping Chinese culture, enriching a theological and spiritual framework that harks back to the early Shang and Zhou dynasties. Chinese popular or folk religion, which is framed by the three teachings and other traditions, consists of allegiance to the shen, a term signifying the "energies of generation"; these can be deities of the environment, ancestral principles of human groups, concepts of civility, or culture heroes, many of whom feature in Chinese mythology and history. Among the most popular cults are those of Mazu (goddess of the seas), Huangdi (one of the two divine patriarchs of the Chinese race), Guandi (god of war and business), Caishen (god of prosperity and wealth), Pangu and many others. China is home to many of the world's tallest religious statues, including the tallest of all, the Spring Temple Buddha in Henan. Clear data on religious affiliation in China is difficult to gather due to varying definitions of "religion" and the unorganized, diffusive nature of Chinese religious traditions. Scholars note that in China there is no clear boundary between the religions of the three teachings and local folk religious practice. 
A 2015 poll conducted by Gallup International found that 61% of Chinese people self-identified as "convinced atheists", though Chinese religions or some of their strands are definable as non-theistic and humanistic, since they do not hold that divine creativity is completely transcendent but rather that it is inherent in the world and, in particular, in the human being. According to a 2014 study, approximately 74% are either non-religious or practice Chinese folk belief, 16% are Buddhists, 2% are Christians, 1% are Muslims, and 8% adhere to other religions, including Taoism and folk salvationism. In addition to Han people's local religious practices, there are also various ethnic minority groups in China who maintain their traditional autochthonous religions. The various folk religions today comprise 2–3% of the population, while Confucianism as a religious self-identification is common within the intellectual class. Significant faiths specifically connected to certain ethnic groups include Tibetan Buddhism and the Islamic religion of the Hui, Uyghur, Kazakh, Kyrgyz and other peoples in Northwest China. The 2010 population census reported the total number of Muslims in the country as 23.14 million. A 2021 poll from Ipsos and the Policy Institute at King's College London found that 35% of Chinese people said there was tension between different religious groups, which was the second lowest percentage of the 28 countries surveyed. Culture and society Since ancient times, Chinese culture has been heavily influenced by Confucianism. Chinese culture, in turn, has heavily influenced East Asia and Southeast Asia. For much of the country's dynastic era, opportunities for social advancement could be provided by high performance in the prestigious imperial examinations, which have their origins in the Han dynasty. The literary emphasis of the exams affected the general perception of cultural refinement in China, such as the belief that calligraphy, poetry and painting were higher forms of art than dancing or drama. Chinese culture has long emphasized a sense of deep history and a largely inward-looking national perspective. Examinations and a culture of merit remain greatly valued in China today. The first leaders of the People's Republic of China were born into the traditional imperial order but were influenced by the May Fourth Movement and reformist ideals. They sought to change some traditional aspects of Chinese culture, such as rural land tenure, sexism, and the Confucian system of education, while preserving others, such as the family structure and culture of obedience to the state. Some observers see the period following the establishment of the PRC in 1949 as a continuation of traditional Chinese dynastic history, while others claim that the CCP's rule under Mao Zedong damaged the foundations of Chinese culture, especially through political movements such as the Cultural Revolution of the 1960s, where many aspects of traditional culture were destroyed, having been denounced as "regressive and harmful" or "vestiges of feudalism". Many important aspects of traditional Chinese morals and culture, such as Confucianism, art, literature, and performing arts like Peking opera, were altered to conform to government policies and propaganda at the time. Access to foreign media remains heavily restricted. Today, the Chinese government has accepted numerous elements of traditional Chinese culture as being integral to Chinese society. 
With the rise of Chinese nationalism and the end of the Cultural Revolution, various forms of traditional Chinese art, literature, music, film, fashion and architecture have seen a vigorous revival, and folk and variety art in particular have sparked interest nationally and even worldwide. Tourism China received 65.7 million inbound international visitors in 2019, and in 2018 was the fourth-most-visited country in the world. It also experiences an enormous volume of domestic tourism; Chinese tourists made an estimated 6 billion trips within the country in 2019. China hosts the world's second-largest number of World Heritage Sites (56) after Italy, and is one of the most popular tourist destinations in the world (first in the Asia-Pacific). Literature Chinese literature is based on the literature of the Zhou dynasty. Concepts covered within the Chinese classic texts present a wide range of thoughts and subjects, including the calendar, military affairs, astrology, herbology, geography and many others. Some of the most important early texts include the I Ching and the Shujing within the Four Books and Five Classics, which served as the authoritative Confucian books for the state-sponsored curriculum in the dynastic era. Inherited from the Classic of Poetry, classical Chinese poetry reached its height during the Tang dynasty. Li Bai and Du Fu opened new paths for Chinese poetry through romanticism and realism, respectively. Chinese historiography began with the Shiji; the overall scope of the historiographical tradition in China is termed the Twenty-Four Histories, which, along with Chinese mythology and folklore, set a vast stage for Chinese fiction. Driven by a burgeoning urban class in the Ming dynasty, Chinese classical fiction flourished in historical, urban, and supernatural ("gods and demons") genres, as represented by the Four Great Classical Novels: Water Margin, Romance of the Three Kingdoms, Journey to the West and Dream of the Red Chamber. Along with the wuxia fiction of Jin Yong and Liang Yusheng, it remains an enduring source of popular culture in the Chinese sphere of influence. In the wake of the New Culture Movement after the end of the Qing dynasty, Chinese literature embarked on a new era with written vernacular Chinese for ordinary citizens. Hu Shih and Lu Xun were pioneers in modern literature. Various literary genres, such as misty poetry, scar literature, young adult fiction and xungen literature, which is influenced by magic realism, emerged following the Cultural Revolution. Mo Yan, a xungen literature author, was awarded the Nobel Prize in Literature in 2012. Cuisine Chinese cuisine is highly diverse, drawing on several millennia of culinary history and geographical variety; its most influential regional traditions are known as the "Eight Major Cuisines", including Sichuan, Cantonese, Jiangsu, Shandong, Fujian, Hunan, Anhui, and Zhejiang cuisines. Chinese cuisine is also known for its breadth of cooking methods and ingredients, as well as the food therapy emphasized by traditional Chinese medicine. Generally, China's staple food is rice in the south and wheat-based breads and noodles in the north. The diet of the common people in pre-modern times was largely grain and simple vegetables, with meat reserved for special occasions. Bean products, such as tofu and soy milk, remain a popular source of protein. Pork is now the most popular meat in China, accounting for about three-fourths of the country's total meat consumption. 
While pork dominates the meat market, there is also vegetarian Buddhist cuisine and pork-free Chinese Islamic cuisine. Southern cuisine, due to the area's proximity to the ocean and milder climate, has a wide variety of seafood and vegetables; it differs in many respects from the wheat-based diets across dry northern China. Numerous offshoots of Chinese food, such as Hong Kong cuisine and American Chinese food, have emerged in the nations that play host to the Chinese diaspora. Architecture Many architectural masters and masterpieces emerged in ancient China, producing palaces, tombs, temples, gardens, and houses. The architecture of China is as old as Chinese civilization. The first communities that can be identified culturally as Chinese were settled chiefly in the basin of the Yellow River. Chinese architecture is the embodiment of an architectural style that has developed over millennia in China and has remained a perennial source of influence on the development of East Asian architecture. Since its emergence during the early ancient era, the structural principles of its architecture have remained largely unchanged; the main changes have involved diverse decorative details. Starting with the Tang dynasty, Chinese architecture has had a major influence on the architectural styles of neighboring East Asian countries such as Japan, Korea, and Mongolia, and minor influences on the architecture of Southeast and South Asia, including Malaysia, Singapore, Indonesia, Sri Lanka, Thailand, Laos, Cambodia, Vietnam and the Philippines. Chinese architecture is characterized by bilateral symmetry, use of enclosed open spaces, feng shui (e.g. directional hierarchies), a horizontal emphasis, and an allusion to various cosmological, mythological or in general symbolic elements. Chinese architecture traditionally classifies structures according to type, ranging from pagodas to palaces. Chinese architecture varies widely based on status or affiliation, such as whether the structures were constructed for emperors, commoners, or for religious purposes. Other variations in Chinese architecture are shown in vernacular styles associated with different geographic regions and different ethnic heritages, such as the stilt houses in the south, the Yaodong buildings in the northwest, the yurt buildings of nomadic people, and the Siheyuan buildings in the north. Music Chinese music covers a highly diverse range of music from traditional music to modern music. Chinese music dates back to pre-imperial times. Chinese musical instruments were traditionally grouped into eight categories known as bayin (八音). Traditional Chinese opera is a form of musical theatre in China originating thousands of years ago, with regional styles such as Beijing opera and Cantonese opera. Chinese pop (C-Pop) includes mandopop and cantopop. Chinese rap, Chinese hip hop and Hong Kong hip hop have become popular in contemporary times. Cinema Cinema was first introduced to China in 1896, and the first Chinese film, Dingjun Mountain, was released in 1905. China has had the largest number of movie screens in the world since 2016, and it became the largest cinema market in the world in 2020. The three highest-grossing films in China were The Battle at Lake Changjin (2021), Wolf Warrior 2 (2017), and Hi, Mom (2021). Fashion Hanfu is the historical clothing of the Han people in China. The qipao or cheongsam is a popular Chinese female dress. 
The hanfu movement has been popular in contemporary times and seeks to revitalize hanfu clothing. Sports China has one of the oldest sporting cultures in the world. There is evidence that archery (shèjiàn) was practiced during the Western Zhou dynasty. Swordplay (jiànshù) and cuju, a sport loosely related to association football, date back to China's early dynasties as well. Physical fitness is widely emphasized in Chinese culture, with morning exercises such as qigong and tai chi widely practiced, and commercial gyms and private fitness clubs gaining popularity across the country. Basketball is currently the most popular spectator sport in China. The Chinese Basketball Association and the American National Basketball Association have a huge following amongst the Chinese populace, with native-born, NBA-bound Chinese players such as Yao Ming and Yi Jianlian held in high esteem as household names among Chinese basketball fans. China's professional football league, now known as the Chinese Super League, was established in 1994; it is the largest football market in East Asia. Other popular sports in the country include martial arts, table tennis, badminton, swimming and snooker. Board games such as go (known as wéiqí in Chinese), xiangqi, mahjong, and more recently chess, are also played at a professional level. In addition, China is home to a huge number of cyclists, with an estimated 470 million bicycles. Many more traditional sports, such as dragon boat racing, Mongolian-style wrestling and horse racing, are also popular. China has participated in the Olympic Games since 1932, although it has only participated as the PRC since 1952. China hosted the 2008 Summer Olympics in Beijing, where its athletes received 48 gold medals – the highest number of gold medals of any participating nation that year. China also won the most medals of any nation at the 2012 Summer Paralympics, with 231 overall, including 95 gold medals. Shenzhen, in Guangdong, hosted the 2011 Summer Universiade. China hosted the 2013 East Asian Games in Tianjin and the 2014 Summer Youth Olympics in Nanjing, making it the first country to host both the regular and Youth Olympics. Beijing and its nearby city Zhangjiakou of Hebei province collaboratively hosted the 2022 Winter Olympics, making Beijing the first city in the world to host both the Summer Olympics and the Winter Olympics. 
16.] "China's rulers have no faith that anything but force can keep this sprawling country intact." [p. 18.] External links Government The Central People's Government of People's Republic of China General information China at a Glance from People's Daily Country profile – China at BBC News China. The World Factbook. Central Intelligence Agency. China, People's Republic of from UCB Libraries GovPubs Maps Google Maps—China Atheist states BRICS nations Countries and territories where Chinese is an official language Communist states Countries in Asia Cradle of civilization East Asian countries E7 nations G20 members Member states of the United Nations Northeast Asian countries One-party states Republics States with limited recognition States and territories established in 1949
5407
https://en.wikipedia.org/wiki/California
California
California is a state in the Western United States. With over 38.9 million residents, it is the most populous U.S. state, the third-largest U.S. state by area, and the most populated subnational entity in North America. California borders Oregon to the north, Nevada and Arizona to the east, and the Mexican state of Baja California to the south; it has a coastline along the Pacific Ocean to the west. The Greater Los Angeles and San Francisco Bay areas in California are the nation's second and fifth-most populous urban regions, respectively. Greater Los Angeles has over 18.7 million residents and the San Francisco Bay Area has over 9.6 million residents. Los Angeles is the state's most populous city and the nation's second-most populous city. San Francisco is the second-most densely populated major city in the country. Los Angeles County is the country's most populous county, and San Bernardino County is the nation's largest county by area. Sacramento is the state's capital. California's economy is the largest of any state within the United States, with a $3.6 trillion gross state product (GSP). It is the largest sub-national economy in the world. If California were a sovereign nation, it would rank as the world's fifth-largest economy, behind India and ahead of the United Kingdom, as well as the 37th most populous. The Greater Los Angeles area and the San Francisco area are the nation's second- and fourth-largest urban economies ($1.0 trillion and $0.6 trillion respectively). The San Francisco Bay Area Combined Statistical Area had the nation's highest gross domestic product per capita ($106,757) among large primary statistical areas in 2018, and is home to five of the world's ten largest companies by market capitalization and four of the world's ten richest people. Slightly over 84 percent of the state's residents aged 25 or older hold a high school diploma, the lowest high school education rate of all 50 states. Prior to European colonization, California was one of the most culturally and linguistically diverse areas in pre-Columbian North America, and the indigenous peoples of California constituted the highest Native American population density north of what is now Mexico. European exploration in the 16th and 17th centuries led to the colonization of California by the Spanish Empire. In 1804, it was included in Alta California province within the Viceroyalty of New Spain. The area became a part of Mexico in 1821, following its successful war for independence, but was ceded to the United States in 1848 after the Mexican–American War. The California Gold Rush started in 1848 and led to dramatic social and demographic changes, including the depopulation of indigenous peoples in the California genocide. The western portion of Alta California was then organized and admitted as the 31st state on September 9, 1850, as a free state, following the Compromise of 1850. Notable contributions to popular culture, in areas ranging from entertainment and sports to music and fashion, have their origins in California. The state also has made substantial contributions in the fields of communication, information, innovation, education, environmentalism, entertainment, economics, politics, technology, and religion. California is the home of Hollywood, the oldest and one of the largest film industries in the world, profoundly influencing global entertainment. 
It is considered the origin of the American film industry, hippie counterculture, beach and car culture, the personal computer, the internet, fast food, diners, burger joints, skateboarding, and the fortune cookie, among other inventions. The San Francisco Bay Area and the Greater Los Angeles Area are widely seen as the centers of the global technology and U.S. film industries, respectively. California's economy is very diverse. California's agricultural industry has the highest output of any U.S. state, and is led by its dairy, almonds, and grapes. With the busiest ports in the country (Los Angeles and Long Beach), California plays a pivotal role in the global supply chain, handling about 40% of all goods imported to the United States. The state's extremely diverse geography ranges from the Pacific Coast and metropolitan areas in the west to the Sierra Nevada mountains in the east, and from the redwood and Douglas fir forests in the northwest to the Mojave Desert in the southeast. Two-thirds of the nation's earthquake risk lies in California. The Central Valley, a fertile agricultural area, dominates the state's center. California is well known for its warm Mediterranean climate along the coast and monsoon seasonal weather inland. The large size of the state results in climates that vary from moist temperate rainforest in the north to arid desert in the interior, as well as snowy alpine climates in the mountains. Droughts and wildfires are an ongoing issue for the state. Etymology The Spaniards gave the name California to the peninsula of Baja California and to Alta California, the latter region becoming the present-day state of California. The name derived from the mythical island of California in the fictional story of Queen Calafia, as recorded in the 1510 work The Adventures of Esplandián by Castilian author Garci Rodríguez de Montalvo. This work was the fifth in a popular series of Spanish chivalric romances. Queen Calafia's kingdom was said to be a remote land rich in gold and pearls, inhabited by beautiful Black women who wore gold armor and lived like Amazons, as well as griffins and other strange beasts. In the fictional paradise, the ruler Queen Calafia fought alongside Muslims, and her name may have been chosen to echo the title caliph, used for Muslim leaders. Official abbreviations of the state's name include CA, Cal., Calif., and US-CA. History Indigenous California was one of the most culturally and linguistically diverse areas in pre-Columbian North America. Historians generally agree that there were at least 300,000 people living in California prior to European colonization. The indigenous peoples of California included more than 70 distinct ethnic groups, inhabiting environments ranging from mountains and deserts to islands and redwood forests. Living in these diverse geographic areas, the indigenous peoples developed complex forms of ecosystem management, including forest gardening to ensure the regular availability of food and medicinal plants. This was a form of sustainable agriculture. To prevent destructive large wildfires from ravaging the natural environment, indigenous peoples developed a practice of controlled burning. This practice was recognized for its benefits by the California government in 2022. These groups were also diverse in their political organization, with bands, tribes, villages, and, on the resource-rich coasts, large chiefdoms, such as the Chumash, Pomo and Salinan. 
Trade, intermarriage, craft specialists, and military alliances fostered social and economic relationships between many groups. Although nations would sometimes war, most armed conflicts were between groups of men for vengeance. Acquiring territory was not usually the purpose of these small-scale battles. Men and women generally had different roles in society. Women were often responsible for weaving, harvesting, processing, and preparing food, while men were responsible for hunting and other forms of physical labor. Most societies also had roles for people whom the Spanish referred to as joyas, whom they saw as "men who dressed as women". Joyas were responsible for death, burial, and mourning rituals, and they performed women's social roles. Indigenous societies had terms such as two-spirit to refer to them. The Chumash referred to them as 'aqi. The early Spanish settlers detested and sought to eliminate them. Spanish period The first Europeans to explore the coast of California were the members of a Spanish maritime expedition led by Portuguese captain Juan Rodríguez Cabrillo in 1542. Cabrillo was commissioned by Antonio de Mendoza, the Viceroy of New Spain, to lead an expedition up the Pacific coast in search of trade opportunities; they entered San Diego Bay on September 28, 1542, and reached at least as far north as San Miguel Island. Privateer and explorer Francis Drake explored and claimed an undefined portion of the California coast in 1579, landing north of the future city of San Francisco. Sebastián Vizcaíno explored and mapped the coast of California in 1602 for New Spain, putting ashore in Monterey. Despite the on-the-ground explorations of California in the 16th century, the idea of California as an island, drawn from Rodríguez de Montalvo's romance, persisted. Such depictions appeared on many European maps well into the 18th century. The Portolá expedition of 1769–70 was a pivotal event in the Spanish colonization of California, resulting in the establishment of numerous missions, presidios, and pueblos. The military and civil contingent of the expedition was led by Gaspar de Portolá, who traveled over land from Sonora into California, while the religious component was headed by Junípero Serra, who came by sea from Baja California. In 1769, Portolá and Serra established Mission San Diego de Alcalá and the Presidio of San Diego, the first religious and military settlements founded by the Spanish in California. By the end of the expedition in 1770, they had also established the Presidio of Monterey and Mission San Carlos Borromeo de Carmelo on Monterey Bay. After the Portolá expedition, Spanish missionaries led by Father-President Serra set out to establish 21 Spanish missions of California along El Camino Real ("The Royal Road") and along the California coast, 16 sites of which had been chosen during the Portolá expedition. Numerous major cities in California grew out of missions, including San Francisco (Mission San Francisco de Asís), San Diego (Mission San Diego de Alcalá), Ventura (Mission San Buenaventura), and Santa Barbara (Mission Santa Barbara), among others. Juan Bautista de Anza led a similarly important expedition throughout California in 1775–76, which extended deeper into the interior and farther north in California. The Anza expedition selected numerous sites for missions, presidios, and pueblos, which subsequently would be established by settlers. 
Gabriel Moraga, a member of the expedition, would also christen many of California's prominent rivers with their names in 1775–1776, such as the Sacramento River and the San Joaquin River. After the expedition, Gabriel's son, José Joaquín Moraga, would found the pueblo of San Jose in 1777, making it the first civilian-established city in California. During this same period, sailors from the Russian Empire explored along the northern coast of California. In 1812, the Russian-American Company established a trading post and small fortification at Fort Ross on the North Coast. Fort Ross was primarily used to supply Russia's Alaskan colonies with food supplies. The settlement did not meet much success, failing to attract settlers or establish long term trade viability, and was abandoned by 1841. During the War of Mexican Independence, Alta California was largely unaffected and uninvolved in the revolution, though many Californios supported independence from Spain, which many believed had neglected California and limited its development. Spain's trade monopoly on California had limited local trade prospects. Following Mexican independence, California ports were freely able to trade with foreign merchants. Governor Pablo Vicente de Solá presided over the transition from Spanish colonial rule to independent Mexican rule. Mexican period In 1821, the Mexican War of Independence gave the Mexican Empire (which included California) independence from Spain. For the next 25 years, Alta California remained a remote, sparsely populated, northwestern administrative district of the newly independent country of Mexico, which shortly after independence became a republic. The missions, which controlled most of the best land in the state, were secularized by 1834 and became the property of the Mexican government. The governor granted many square leagues of land to others with political influence. These huge ranchos or cattle ranches emerged as the dominant institutions of Mexican California. The ranchos developed under ownership by Californios (Hispanics native of California) who traded cowhides and tallow with Boston merchants. Beef did not become a commodity until the 1849 California Gold Rush. From the 1820s, trappers and settlers from the United States and Canada began to arrive in Northern California. These new arrivals used the Siskiyou Trail, California Trail, Oregon Trail and Old Spanish Trail to cross the rugged mountains and harsh deserts in and surrounding California. The early government of the newly independent Mexico was highly unstable, and in a reflection of this, from 1831 onwards, California also experienced a series of armed disputes, both internal and with the central Mexican government. During this tumultuous political period Juan Bautista Alvarado was able to secure the governorship during 1836–1842. The military action which first brought Alvarado to power had momentarily declared California to be an independent state, and had been aided by Anglo-American residents of California, including Isaac Graham. In 1840, one hundred of those residents who did not have passports were arrested, leading to the Graham Affair, which was resolved in part with the intercession of Royal Navy officials. One of the largest ranchers in California was John Marsh. After failing to obtain justice against squatters on his land from the Mexican courts, he determined that California should become part of the United States. 
Marsh conducted a letter-writing campaign extolling the California climate, the soil, and other reasons to settle there, as well as the best route to follow, which became known as "Marsh's route". His letters were read, reread, passed around, and printed in newspapers throughout the country, and started the first wagon trains rolling to California. He invited immigrants to stay on his ranch until they could get settled, and assisted them in obtaining passports. After ushering in the period of organized emigration to California, Marsh became involved in a military battle between the much-hated Mexican general Manuel Micheltorena and the California governor he had replaced, Juan Bautista Alvarado. The armies of each met at the Battle of Providencia near Los Angeles. Marsh had been forced against his will to join Micheltorena's army. Ignoring his superiors, he signaled the other side for a parley during the battle. There were many settlers from the United States fighting on both sides. He convinced each side that they had no reason to be fighting each other. As a result of Marsh's actions, they abandoned the fight, Micheltorena was defeated, and California-born Pio Pico was returned to the governorship. This paved the way to California's ultimate acquisition by the United States. U.S. Conquest and the California Republic In 1846, a group of American settlers in and around Sonoma rebelled against Mexican rule during the Bear Flag Revolt. Afterward, rebels raised the Bear Flag (featuring a bear, a star, a red stripe and the words "California Republic") at Sonoma. The Republic's only president was William B. Ide, who played a pivotal role during the Bear Flag Revolt. This revolt by American settlers served as a prelude to the later American military invasion of California and was closely coordinated with nearby American military commanders. The California Republic was short-lived; the same year marked the outbreak of the Mexican–American War (1846–1848). Commodore John D. Sloat of the United States Navy sailed into Monterey Bay in 1846 and began the U.S. military invasion of California, with Northern California capitulating in less than a month to the United States forces. In Southern California, Californios continued to resist American forces. Notable military engagements of the conquest include the Battle of San Pasqual and the Battle of Dominguez Rancho in Southern California, as well as the Battle of Olómpali and the Battle of Santa Clara in Northern California. After a series of defensive battles in the south, the Treaty of Cahuenga was signed by the Californios on January 13, 1847, securing a ceasefire and establishing de facto American control in California. Early American period Following the Treaty of Guadalupe Hidalgo (February 2, 1848) that ended the war, the westernmost portion of the annexed Mexican territory of Alta California soon became the American state of California, and the remainder of the old territory was then subdivided into the new American Territories of Arizona, Nevada, Colorado and Utah. The even more lightly populated and arid lower region of old Baja California remained as a part of Mexico. In 1846, the total settler population of the western part of the old Alta California had been estimated to be no more than 8,000, plus about 100,000 Native Americans, down from about 300,000 before Hispanic settlement in 1769. 
In 1848, only one week before the official American annexation of the area, gold was discovered in California, an event which was to forever alter both the state's demographics and its finances. Soon afterward, a massive influx of immigration into the area resulted, as prospectors and miners arrived by the thousands. The population burgeoned with United States citizens, Europeans, Middle Easterners, Chinese and other immigrants during the great California Gold Rush. By the time of California's application for statehood in 1850, the settler population of California had multiplied to 100,000. By 1854, more than 300,000 settlers had come. Between 1847 and 1870, the population of San Francisco increased from 500 to 150,000. The seat of government for California under Spanish and later Mexican rule had been located in Monterey from 1777 until 1845. Pio Pico, the last Mexican governor of Alta California, had briefly moved the capital to Los Angeles in 1845. The United States consulate had also been located in Monterey, under consul Thomas O. Larkin. In 1849, a state Constitutional Convention was first held in Monterey. Among the first tasks of the convention was a decision on a location for the new state capital. The first full legislative sessions were held in San Jose (1850–1851). Subsequent locations included Vallejo (1852–1853), and nearby Benicia (1853–1854); these locations eventually proved to be inadequate as well. The capital has been located in Sacramento since 1854 with only a short break in 1862 when legislative sessions were held in San Francisco due to flooding in Sacramento. Once the state's Constitutional Convention had finalized its state constitution, it applied to the U.S. Congress for admission to statehood. On September 9, 1850, as part of the Compromise of 1850, California became a free state and September 9 a state holiday. During the American Civil War (1861–1865), California sent gold shipments eastward to Washington in support of the Union. However, due to the existence of a large contingent of pro-South sympathizers within the state, the state was not able to muster any full military regiments to send eastwards to officially serve in the Union war effort. Still, several smaller military units within the Union army, such as the "California 100 Company", were unofficially associated with the state of California due to a majority of their members being from California. At the time of California's admission into the Union, travel between California and the rest of the continental United States had been a time-consuming and dangerous feat. Nineteen years later, and seven years after it was approved by President Lincoln, the first transcontinental railroad was completed in 1869. California was then reachable from the eastern states in a week's time. Much of the state was extremely well suited to fruit cultivation and agriculture in general. Vast expanses of wheat, other cereal crops, vegetable crops, cotton, and nut and fruit trees were grown (including oranges in Southern California), and the foundation was laid for the state's prodigious agricultural production in the Central Valley and elsewhere. In the nineteenth century, a large number of migrants from China traveled to the state as part of the Gold Rush or to seek work. 
Even though the Chinese proved indispensable in building the transcontinental railroad from California to Utah, perceived job competition with the Chinese led to anti-Chinese riots in the state, and eventually the US, partially in response to pressure from California, ended migration from China with the 1882 Chinese Exclusion Act. California Genocide Under earlier Spanish and Mexican rule, California's original native population had declined precipitously, above all from Eurasian diseases to which the indigenous people of California had not yet developed a natural immunity. Under its new American administration, California's first governor Peter Hardeman Burnett instituted what has been described as a state-sanctioned policy of elimination toward California's indigenous people. Burnett announced in 1851 in his Second Annual Message to the Legislature: "That a war of extermination will continue to be waged between the races until the Indian race becomes extinct must be expected. While we cannot anticipate the result with but painful regret, the inevitable destiny of the race is beyond the power and wisdom of man to avert." As in other American states, indigenous peoples were forcibly removed from their lands by American settlers, such as miners, ranchers, and farmers. Although California had entered the American union as a free state, the "loitering or orphaned Indians" were de facto enslaved by their new Anglo-American masters under the 1850 Act for the Government and Protection of Indians. One of these de facto slave auctions was approved by the Los Angeles City Council and continued for nearly twenty years. There were many massacres in which hundreds of indigenous people were killed by settlers for their land. Between 1850 and 1860, the California state government paid around $1.5 million (some $250,000 of which was reimbursed by the federal government) to hire militias with the stated purpose of protecting settlers; however, these militias perpetrated numerous massacres of indigenous people. Indigenous people were also forcibly moved to reservations and rancherias, which were often small and isolated and without enough natural resources or funding from the government to adequately sustain the populations living on them. As a result, settler colonialism was a calamity for indigenous people. Several scholars and Native American activists, including Benjamin Madley and Ed Castillo, have described the actions of the California government as a genocide, as has Gavin Newsom, the 40th governor of California. Benjamin Madley estimates that from 1846 to 1873, between 9,492 and 16,092 indigenous people were killed, including between 1,680 and 3,741 killed by the U.S. Army. 1900–present In the twentieth century, thousands of Japanese people migrated to the US, and to California specifically, to attempt to purchase and own land in the state. However, the state in 1913 passed the Alien Land Act, excluding Asian immigrants from owning land. During World War II, Japanese Americans in California were interned in concentration camps such as at Tule Lake and Manzanar. In 2020, California officially apologized for this internment. Migration to California accelerated during the early 20th century with the completion of major transcontinental highways like the Lincoln Highway and Route 66. In the period from 1900 to 1965, the population grew from fewer than one million to the largest of any state in the Union. 
In 1940, the Census Bureau reported California's population as 6.0% Hispanic, 2.4% Asian, and 89.5% non-Hispanic white. To meet the population's needs, major engineering feats like the California and Los Angeles Aqueducts; the Oroville and Shasta Dams; and the Bay and Golden Gate Bridges were built across the state. The state government also adopted the California Master Plan for Higher Education in 1960 to develop a highly efficient system of public education. Meanwhile, attracted to the mild Mediterranean climate, cheap land, and the state's wide variety of geography, filmmakers established the studio system in Hollywood in the 1920s. California manufactured 8.7 percent of total United States military armaments produced during World War II, ranking third (behind New York and Michigan) among the 48 states. California however easily ranked first in production of military ships during the war (transport, cargo, [merchant ships] such as Liberty ships, Victory ships, and warships) at drydock facilities in San Diego, Los Angeles, and the San Francisco Bay Area. After World War II, California's economy greatly expanded due to strong aerospace and defense industries, whose size decreased following the end of the Cold War. Stanford University and its Dean of Engineering Frederick Terman began encouraging faculty and graduates to stay in California instead of leaving the state, and develop a high-tech region in the area now known as Silicon Valley. As a result of these efforts, California is regarded as a world center of the entertainment and music industries, of technology, engineering, and the aerospace industry, and as the United States center of agricultural production. Just before the Dot Com Bust, California had the fifth-largest economy in the world among nations. In the mid and late twentieth century, a number of race-related incidents occurred in the state. Tensions between police and African Americans, combined with unemployment and poverty in inner cities, led to violent riots, such as the 1965 Watts riots and 1992 Rodney King riots. California was also the hub of the Black Panther Party, a group known for arming African Americans to defend against racial injustice and for organizing free breakfast programs for schoolchildren. Additionally, Mexican, Filipino, and other migrant farm workers rallied in the state around Cesar Chavez for better pay in the 1960s and 1970s. During the 20th century, two great disasters happened in California. The 1906 San Francisco earthquake and 1928 St. Francis Dam flood remain the deadliest in U.S. history. Although air pollution problems have been reduced, health problems associated with pollution have continued. The brown haze known as "smog" has been substantially abated after the passage of federal and state restrictions on automobile exhaust. An energy crisis in 2001 led to rolling blackouts, soaring power rates, and the importation of electricity from neighboring states. Southern California Edison and Pacific Gas and Electric Company came under heavy criticism. Housing prices in urban areas continued to increase; a modest home which in the 1960s cost $25,000 would cost half a million dollars or more in urban areas by 2005. More people commuted longer hours to afford a home in more rural areas while earning larger salaries in the urban areas. Speculators bought houses they never intended to live in, expecting to make a huge profit in a matter of months, then rolling it over by buying more properties. 
Mortgage companies were accommodating, as everyone assumed the prices would keep rising. The bubble burst in 2007–2008 as housing prices began to crash and the boom years ended. Hundreds of billions in property values vanished and foreclosures soared as many financial institutions and investors were badly hurt. In the twenty-first century, droughts and frequent wildfires attributed to climate change have occurred in the state. A persistent drought from 2011 to 2017 was the worst in the state's recorded history. The 2018 wildfire season was the state's deadliest and most destructive, most notably the Camp Fire. One of the first confirmed COVID-19 cases in the United States occurred in California and was confirmed on January 26, 2020. All of the early confirmed cases were persons who had recently traveled to China, as testing was restricted to this group. On January 29, 2020, as disease containment protocols were still being developed, the U.S. Department of State evacuated 195 persons from Wuhan, China, aboard a chartered flight to March Air Reserve Base in Riverside County. On February 5, 2020, the U.S. evacuated 345 more citizens from Hubei Province to two military bases in California, Travis Air Force Base in Solano County and Marine Corps Air Station Miramar in San Diego, where they were quarantined for 14 days. A statewide state of emergency was declared on March 4, 2020, and, as of February 24, 2021, remained in effect. A mandatory statewide stay-at-home order was issued on March 19, 2020, as cases increased; it was lifted on January 25, 2021. On April 6, 2021, the state announced plans to fully reopen the economy by June 15, 2021. In 2019, the 40th governor of California, Gavin Newsom, formally apologized to the indigenous peoples of California for the California genocide: "Genocide. No other way to describe it, and that's the way it needs to be described in the history books." Newsom further acknowledged that "the actions of the state 150 years ago have ongoing ramifications even today." As of 2022, cultural and language revitalization efforts had progressed among several indigenous Californian tribes, and some land returns to indigenous stewardship have occurred throughout California. In 2022, the largest dam removal and river restoration project in US history was announced for the Klamath River as a win for California tribes. Geography Covering an area of approximately 163,700 square miles (424,000 km2), California is the third-largest state in the United States by area, after Alaska and Texas. California is one of the most geographically diverse states in the union and is often geographically bisected into two regions, Southern California, comprising the ten southernmost counties, and Northern California, comprising the 48 northernmost counties. It is bordered by Oregon to the north, Nevada to the east and northeast, Arizona to the southeast, and the Pacific Ocean to the west, and shares an international border with the Mexican state of Baja California to the south (with which it makes up part of The Californias region of North America, alongside Baja California Sur). In the middle of the state lies the California Central Valley, bounded by the Sierra Nevada to the east, the coastal mountain ranges to the west, the Cascade Range to the north, and the Tehachapi Mountains to the south. 
The Central Valley is California's productive agricultural heartland. Divided in two by the Sacramento-San Joaquin River Delta, the northern portion, the Sacramento Valley, serves as the watershed of the Sacramento River, while the southern portion, the San Joaquin Valley, is the watershed for the San Joaquin River. Both valleys derive their names from the rivers that flow through them. With dredging, the Sacramento and the San Joaquin Rivers have remained deep enough for several inland cities to be seaports. The Sacramento-San Joaquin River Delta is a critical water supply hub for the state. Water is diverted from the delta through an extensive network of pumps and canals that traverse nearly the length of the state, to the Central Valley, the State Water Project, and other uses. Water from the Delta provides drinking water for nearly 23 million people, almost two-thirds of the state's population, as well as water for farmers on the west side of the San Joaquin Valley. Suisun Bay lies at the confluence of the Sacramento and San Joaquin Rivers. The water is drained by the Carquinez Strait, which flows into San Pablo Bay, a northern extension of San Francisco Bay, which then connects to the Pacific Ocean via the Golden Gate strait. The Channel Islands are located off the southern coast, while the Farallon Islands lie west of San Francisco. The Sierra Nevada (Spanish for "snowy range") includes the highest peak in the contiguous 48 states, Mount Whitney, at 14,505 feet (4,421 m). The range embraces Yosemite Valley, famous for its glacially carved domes, and Sequoia National Park, home to the giant sequoia trees, the largest living organisms on Earth, as well as the deep freshwater Lake Tahoe, the largest lake in the state by volume. To the east of the Sierra Nevada are Owens Valley and Mono Lake, an essential migratory bird habitat. In the western part of the state is Clear Lake, the largest freshwater lake by area entirely in California. Although Lake Tahoe is larger, it is divided by the California/Nevada border. The Sierra Nevada falls to Arctic temperatures in winter and has several dozen small glaciers, including Palisade Glacier, the southernmost glacier in the United States. Tulare Lake was once the largest freshwater lake west of the Mississippi River. A remnant of Pleistocene-era Lake Corcoran, Tulare Lake dried up by the early 20th century after its tributary rivers were diverted for agricultural irrigation and municipal water uses. About 45 percent of the state's total surface area is covered by forests, and California's diversity of pine species is unmatched by any other state. California contains more forestland than any other state except Alaska. Many of the trees in the California White Mountains are the oldest in the world; an individual bristlecone pine is over 5,000 years old. In the south is a large inland salt lake, the Salton Sea. The south-central desert is called the Mojave; to the northeast of the Mojave lies Death Valley, which contains the lowest and hottest place in North America, the Badwater Basin, at 282 feet (86 m) below sea level. The horizontal distance from the bottom of Death Valley to the top of Mount Whitney is less than 90 miles (140 km). Indeed, almost all of southeastern California is arid, hot desert, with routine extreme high temperatures during the summer. The southeastern border of California with Arizona is entirely formed by the Colorado River, from which the southern part of the state gets about half of its water. 
A majority of California's cities are located in either the San Francisco Bay Area or the Sacramento metropolitan area in Northern California; or the Los Angeles area, the Inland Empire, or the San Diego metropolitan area in Southern California. The Los Angeles Area, the Bay Area, and the San Diego metropolitan area are among several major metropolitan areas along the California coast. As part of the Ring of Fire, California is subject to tsunamis, floods, droughts, Santa Ana winds, wildfires, and landslides on steep terrain; California also has several volcanoes. It has many earthquakes due to several faults running through the state, the largest being the San Andreas Fault. About 37,000 earthquakes are recorded each year; most are too small to be felt, but two-thirds of the human risk from earthquakes lies in California. Climate Most of the state has a Mediterranean climate. The cool California Current offshore often creates summer fog near the coast. Farther inland, there are colder winters and hotter summers. The maritime moderation results in the shoreline summertime temperatures of Los Angeles and San Francisco being the coolest of all major metropolitan areas of the United States, and uniquely cool compared to areas on the same latitude in the interior and on the east coast of the North American continent. Even the San Diego shoreline bordering Mexico is cooler in summer than most areas in the contiguous United States. Just a few miles inland, summer temperature extremes are significantly higher, with downtown Los Angeles being several degrees warmer than at the coast. The same microclimate phenomenon is seen in the climate of the Bay Area, where areas sheltered from the ocean experience significantly hotter summers and colder winters in contrast with nearby areas closer to the ocean. Northern parts of the state have more rain than the south. California's mountain ranges also influence the climate: some of the rainiest parts of the state are west-facing mountain slopes. Coastal northwestern California has a temperate climate, and the Central Valley has a Mediterranean climate but with greater temperature extremes than the coast. The high mountains, including the Sierra Nevada, have an alpine climate with snow in winter and mild to moderate heat in summer. California's mountains produce rain shadows on the eastern side, creating extensive deserts. The higher elevation deserts of eastern California have hot summers and cold winters, while the low deserts east of the Southern California mountains have hot summers and nearly frostless mild winters. Death Valley, a desert with large expanses below sea level, is considered the hottest location in the world; the highest temperature in the world, 134 °F (56.7 °C), was recorded there on July 10, 1913. The lowest temperature in California was −45 °F (−43 °C) on January 20, 1937, in Boca. The table below lists average temperatures for January and August in a selection of places throughout the state, some highly populated and some not. This includes the relatively cool summers of the Humboldt Bay region around Eureka, the extreme heat of Death Valley, and the mountain climate of Mammoth in the Sierra Nevada. The wide range of climates leads to a high demand for water. Over time, droughts have been increasing due to climate change and overextraction, becoming less seasonal and more year-round, further straining California's electricity supply and water security and having an impact on California business, industry, and agriculture. 
In 2022, a new state program was created in collaboration with indigenous peoples of California to revive the practice of controlled burns as a way of clearing excessive forest debris and making landscapes more resilient to wildfires. Native American use of fire in ecosystem management was outlawed in 1911 but has since been recognized for its benefits. Ecology California is one of the ecologically richest and most diverse parts of the world, and includes some of the most endangered ecological communities. California is part of the Nearctic realm and spans a number of terrestrial ecoregions. California's large number of endemic species includes relict species, which have died out elsewhere, such as the Catalina ironwood (Lyonothamnus floribundus). Many other endemics originated through differentiation or adaptive radiation, whereby multiple species develop from a common ancestor to take advantage of diverse ecological conditions, as with the California lilac (Ceanothus). Many California endemics have become endangered, as urbanization, logging, overgrazing, and the introduction of exotic species have encroached on their habitat. Flora and fauna California boasts several superlatives in its collection of flora: the largest trees, the tallest trees, and the oldest trees. California's native grasses are perennial plants, and there are close to a hundred succulent species native to the state. After European contact, these were generally replaced by invasive species of European annual grasses, and, in modern times, California's hills turn a characteristic golden-brown in summer. Because California has the greatest diversity of climate and terrain, the state has six life zones: the lower Sonoran (desert); the upper Sonoran (foothill regions and some coastal lands); the transition zone (coastal areas and moist northeastern counties); and the Canadian, Hudsonian, and Arctic zones, comprising the state's highest elevations. Plant life in the dry climate of the lower Sonoran zone contains a diversity of native cactus, mesquite, and paloverde. The Joshua tree is found in the Mojave Desert. Flowering plants include the dwarf desert poppy and a variety of asters. Fremont cottonwood and valley oak thrive in the Central Valley. The upper Sonoran zone includes the chaparral belt, characterized by forests of small shrubs, stunted trees, and herbaceous plants. Nemophila, mint, Phacelia, Viola, and the California poppy (Eschscholzia californica, the state flower) also flourish in this zone, along with the lupine, more species of which occur here than anywhere else in the world. The transition zone includes most of California's forests, with the redwood (Sequoia sempervirens) and the "big tree" or giant sequoia (Sequoiadendron giganteum), among the oldest living things on earth (some are said to have lived at least 4,000 years). Tanbark oak, California laurel, sugar pine, madrona, broad-leaved maple, and Douglas-fir also grow here. Forest floors are covered with swordfern, alumroot, barrenwort, and trillium, and there are thickets of huckleberry, azalea, elder, and wild currant. Characteristic wild flowers include varieties of mariposa, tulip, and tiger and leopard lilies. The high elevations of the Canadian zone allow the Jeffrey pine, red fir, and lodgepole pine to thrive. Brushy areas are abundant with dwarf manzanita and ceanothus; the unique Sierra puffball is also found here. Right below the timberline, in the Hudsonian zone, the whitebark, foxtail, and silver pines grow. 
At about 10,500 feet (3,200 m) begins the Arctic zone, a treeless region whose flora include a number of wildflowers, including Sierra primrose, yellow columbine, alpine buttercup, and alpine shooting star. Palm trees are a well-known feature of California, particularly in Southern California and Los Angeles; many species have been imported, though Washingtonia filifera (commonly known as the California fan palm) is native to the state, mainly growing in the Colorado Desert oases. Other common plants that have been introduced to the state include the eucalyptus, acacia, pepper tree, geranium, and Scotch broom. The species that are federally classified as endangered are the Contra Costa wallflower, Antioch Dunes evening primrose, Solano grass, San Clemente Island larkspur, salt marsh bird's beak, McDonald's rock-cress, and Santa Barbara Island liveforever. A total of 85 plant species have been listed as threatened or endangered. In the deserts of the lower Sonoran zone, the mammals include the jackrabbit, kangaroo rat, squirrel, and opossum. Common birds include the owl, roadrunner, cactus wren, and various species of hawk. The area's reptilian life includes the sidewinder viper, desert tortoise, and horned toad. The upper Sonoran zone boasts mammals such as the antelope, brown-footed woodrat, and ring-tailed cat. Birds unique to this zone are the California thrasher, bushtit, and California condor. In the transition zone, there are Colombian black-tailed deer, black bears, gray foxes, cougars, bobcats, and Roosevelt elk. Reptiles such as the garter snakes and rattlesnakes inhabit the zone. In addition, amphibians such as the water puppy and redwood salamander are common. Birds such as the kingfisher, chickadee, towhee, and hummingbird thrive here as well. The Canadian zone mammals include the mountain weasel, snowshoe hare, and several species of chipmunks. Conspicuous birds include the blue-fronted jay, mountain chickadee, hermit thrush, American dipper, and Townsend's solitaire. As one ascends into the Hudsonian zone, birds become scarcer. While the gray-crowned rosy finch is the only bird native to the high Arctic region, other bird species, such as Anna's hummingbird and Clark's nutcracker, are also found there. Principal mammals found in this region include the Sierra coney, white-tailed jackrabbit, and the bighorn sheep. The bighorn sheep is listed as endangered by the U.S. Fish and Wildlife Service. The fauna found throughout several zones are the mule deer, coyote, mountain lion, northern flicker, and several species of hawk and sparrow. Aquatic life in California thrives, from the state's mountain lakes and streams to the rocky Pacific coastline. Numerous trout species are found, among them rainbow, golden, and cutthroat. Migratory species of salmon are common as well. Deep-sea life forms include sea bass, yellowfin tuna, barracuda, and several types of whale. Native to the cliffs of northern California are seals, sea lions, and many types of shorebirds, including migratory species. A total of 118 California animals were on the federal endangered list, and 181 plants were listed as endangered or threatened. Endangered animals include the San Joaquin kit fox, Point Arena mountain beaver, Pacific pocket mouse, salt marsh harvest mouse, Morro Bay kangaroo rat (and five other species of kangaroo rat), Amargosa vole, California least tern, California condor, loggerhead shrike, San Clemente sage sparrow, San Francisco garter snake, five species of salamander, three species of chub, and two species of pupfish. 
Eleven butterfly species are also listed as endangered, and two as threatened, on the federal list. Among threatened animals are the coastal California gnatcatcher, Paiute cutthroat trout, southern sea otter, and northern spotted owl. California is also home to numerous National Wildlife Refuges. By another count, 123 California animals and 178 species of California plants were listed as either endangered or threatened on the federal list. Rivers The most prominent river system within California is formed by the Sacramento River and San Joaquin River, which are fed mostly by snowmelt from the west slope of the Sierra Nevada, and respectively drain the north and south halves of the Central Valley. The two rivers join in the Sacramento–San Joaquin River Delta, flowing into the Pacific Ocean through San Francisco Bay. Many major tributaries feed into the Sacramento–San Joaquin system, including the Pit River, Feather River and Tuolumne River. The Klamath and Trinity Rivers drain a large area in far northwestern California. The Eel River and Salinas River each drain portions of the California coast, north and south of San Francisco Bay, respectively. The Mojave River is the primary watercourse in the Mojave Desert, and the Santa Ana River drains much of the Transverse Ranges as it bisects Southern California. The Colorado River forms the state's southeast border with Arizona. Most of California's major rivers are dammed as part of two massive water projects: the Central Valley Project, providing water for agriculture in the Central Valley, and the California State Water Project, which diverts water from Northern to Southern California. The state's coasts, rivers, and other bodies of water are regulated by the California Coastal Commission. Regions California is traditionally separated into Northern California and Southern California, divided by a straight border which runs across the state, separating the northern 48 counties from the southern 10 counties. Despite the persistence of this divide, California is more precisely divided into many regions, several of which stretch across the north–south line; its two major divisions remain Northern California and Southern California. Cities and towns The state has 482 incorporated cities and towns, of which 460 are cities and 22 are towns. Under California law, the terms "city" and "town" are explicitly interchangeable; the name of an incorporated municipality in the state can either be "City of (Name)" or "Town of (Name)". Sacramento became California's first incorporated city on February 27, 1850. San Jose, San Diego, and Benicia tied for California's second incorporated city, each receiving incorporation on March 27, 1850. Jurupa Valley became the state's most recent and 482nd incorporated municipality, on July 1, 2011. The majority of these cities and towns are within one of five metropolitan areas: the Los Angeles Metropolitan Area, the San Francisco Bay Area, the Riverside-San Bernardino Area, the San Diego metropolitan area, or the Sacramento metropolitan area. Demographics Population Nearly one out of every eight Americans lives in California. The United States Census Bureau reported that the population of California was 39,538,223 on April 1, 2020, a 6.13% increase since the 2010 census. The estimated state population in 2022 was 39.22 million. 
For over a century (1900–2020), California experienced steady population growth, adding an average of more than 300,000 people per year from 1940 onward. California's rate of growth began to slow by the 1990s, although it continued to experience population growth in the first two decades of the 21st century. The state experienced population declines in 2020 and 2021, attributable to declining birth rates, COVID-19 pandemic deaths, and less internal migration from other states to California. The Greater Los Angeles Area is the second-largest metropolitan area in the United States (U.S.), while Los Angeles is the second-largest city in the U.S. Conversely, San Francisco is the most densely-populated city in California and one of the most densely populated cities in the U.S.. Also, Los Angeles County has held the title of most populous U.S. county for decades, and it alone is more populous than 42 U.S. states. Including Los Angeles, four of the top 20 most populous cities in the U.S. are in California: Los Angeles (2nd), San Diego (8th), San Jose (10th), and San Francisco (17th). The center of population of California is located four miles west-southwest of the city of Shafter, Kern County. As of 2019, California ranked second among states by life expectancy, with a life expectancy of 80.9 years. Starting in the year 2010, for the first time since the California Gold Rush, California-born residents made up the majority of the state's population. Along with the rest of the United States, California's immigration pattern has also shifted over the course of the late 2000s to early 2010s. Immigration from Latin American countries has dropped significantly with most immigrants now coming from Asia. In total for 2011, there were 277,304 immigrants. Fifty-seven percent came from Asian countries versus 22% from Latin American countries. Net immigration from Mexico, previously the most common country of origin for new immigrants, has dropped to zero / less than zero since more Mexican nationals are departing for their home country than immigrating. The state's population of undocumented immigrants has been shrinking in recent years, due to increased enforcement and decreased job opportunities for lower-skilled workers. The number of migrants arrested attempting to cross the Mexican border in the Southwest decreased from a high of 1.1million in 2005 to 367,000 in 2011. Despite these recent trends, illegal aliens constituted an estimated 7.3 percent of the state's population, the third highest percentage of any state in the country, totaling nearly 2.6million. In particular, illegal immigrants tended to be concentrated in Los Angeles, Monterey, San Benito, Imperial, and Napa Counties—the latter four of which have significant agricultural industries that depend on manual labor. More than half of illegal immigrants originate from Mexico. The state of California and some California cities, including Los Angeles, Oakland and San Francisco, have adopted sanctuary policies. According to HUD's 2022 Annual Homeless Assessment Report, there were an estimated 171,521 homeless people in California. Race and ethnicity According to the United States Census Bureau in 2018 the population self-identified as (alone or in combination): 72.1% White (including Hispanic Whites), 36.8% non-Hispanic whites, 15.3% Asian, 6.5% Black or African American, 1.6% Native American and Alaska Native, 0.5% Native Hawaiian or Pacific Islander, and 3.9% two or more races. 
By ethnicity, in 2018 the population was 60.7% non-Hispanic (of any race) and 39.3% Hispanic or Latino (of any race). Hispanics are the largest single ethnic group in California. Non-Hispanic whites constituted 36.8% of the state's population. Californios are the Hispanic residents native to California, who make up the Spanish-speaking community that has existed in California since 1542, of varying Mexican American/Chicano, Criollo Spaniard, and Mestizo origin. , 75.1% of California's population younger than age 1 were minorities, meaning they had at least one parent who was not non-Hispanic white (white Hispanics are counted as minorities). In terms of total numbers, California has the largest population of White Americans in the United States, an estimated 22,200,000 residents. The state has the 5th largest population of African Americans in the United States, an estimated 2,250,000 residents. California's Asian American population is estimated at 4.4million, constituting a third of the nation's total. California's Native American population of 285,000 is the most of any state. According to estimates from 2011, California has the largest minority population in the United States by numbers, making up 60% of the state population. Over the past 25 years, the population of non-Hispanic whites has declined, while Hispanic and Asian populations have grown. Between 1970 and 2011, non-Hispanic whites declined from 80% of the state's population to 40%, while Hispanics grew from 32% in 2000 to 38% in 2011. It is currently projected that Hispanics will rise to 49% of the population by 2060, primarily due to domestic births rather than immigration. With the decline of immigration from Latin America, Asian Americans now constitute the fastest growing racial/ethnic group in California; this growth is primarily driven by immigration from China, India and the Philippines, respectively. Most of California's immigrant population are born in Mexico (3.9 million), the Philippines (825,200), China (768,400), India (556,500) and Vietnam (502,600). California has the largest multiracial population in the United States. California has the highest rate of interracial marriage. Languages English serves as California's de jure and de facto official language. According to the 2021 American Community Survey conducted by the United States Census Bureau, 56.08% (20,763,638) of California residents age5 and older spoke only English at home, while 43.92% spoke another language at home. 60.35% of people who speak a language other than English at home are able to speak English "well" or "very well", with this figure varying significantly across the different linguistic groups. Like most U.S. states (32 out of 50), California law enshrines English as its official language, and has done so since the passage of Proposition 63 by California voters in 1986. Various government agencies do, and are often required to, furnish documents in the various languages needed to reach their intended audiences. Spanish is the most commonly spoken language in California, behind English, spoken by 28.18% (10,434,308) of the population (in 2021). The Spanish language has been spoken in California since 1542 and is deeply intertwined with California's cultural landscape and history. Spanish was the official administrative language of California through the Spanish and Mexican eras, until 1848. Following the U.S. Conquest of California and the Treaty of Guadalupe-Hidalgo, the U.S. 
Government guaranteed the rights of Spanish-speaking Californians. The first Constitution of California was written in both languages at the Monterey Constitutional Convention of 1849; it protected the rights of Spanish speakers to use their language in government proceedings and mandated that all government documents be published in both English and Spanish. Despite the initial recognition of Spanish by early American governments in California, the revised 1879 constitution stripped the rights of Spanish speakers and the official status of Spanish. The growth of the English-only movement by the mid-20th century led to the passage of 1986 California Proposition 63, which enshrined English as the only official language in California; bilingual Spanish-language instruction in schools was later largely ended by 1998 California Proposition 227. 2016 California Proposition 58 reversed the prohibition on bilingual education, though there are still many barriers to the proliferation of Spanish bilingual education, including a shortage of teachers and lack of funding. The government of California has since made efforts to promote Spanish language access and bilingual education, as have private educational institutions in California. Many businesses in California promote the usage of Spanish by their employees, to better serve both California's Hispanic population and the larger Spanish-speaking world. California has historically been one of the most linguistically diverse areas in the world, with more than 70 indigenous languages derived from 64 root languages in six language families. A survey conducted between 2007 and 2009 identified 23 different indigenous languages among California farmworkers. All of California's indigenous languages are endangered, although there are now efforts toward language revitalization. California has the highest concentration nationwide of Chinese, Vietnamese and Punjabi speakers. As a result of the state's increasing diversity and migration from other areas across the country and around the globe, linguists began noticing a noteworthy set of emerging characteristics of spoken American English in California in the late 20th century. This variety, known as California English, has a vowel shift and several other phonological processes that are different from varieties of American English used in other regions of the United States. Religion The largest religious denominations by number of adherents as a percentage of California's population in 2014 were the Catholic Church with 28 percent, Evangelical Protestants with 20 percent, and Mainline Protestants with 10 percent. Together, all kinds of Protestants accounted for 32 percent. Those unaffiliated with any religion represented 27 percent of the population. The breakdown of other religions is 1% Muslim, 2% Hindu and 2% Buddhist. This is a change from 2008, when the population identified their religion with the Catholic Church with 31 percent; Evangelical Protestants with 18 percent; and Mainline Protestants with 14 percent. In 2008, those unaffiliated with any religion represented 21 percent of the population. The breakdown of other religions in 2008 was 0.5% Muslim, 1% Hindu and 2% Buddhist. The American Jewish Year Book placed the total Jewish population of California at about 1,194,190 in 2006. According to the Association of Religion Data Archives (ARDA), the largest denominations by adherents in 2010 were the Catholic Church with 10,233,334; The Church of Jesus Christ of Latter-day Saints with 763,818; and the Southern Baptist Convention with 489,953. 
The first priests to come to California were Catholic missionaries from Spain. Catholics founded 21 missions along the California coast, as well as the cities of Los Angeles and San Francisco. California continues to have a large Catholic population due to the large numbers of Mexicans and Central Americans living within its borders. California has twelve dioceses and two archdioceses, the Archdiocese of Los Angeles and the Archdiocese of San Francisco, the former being the largest archdiocese in the United States. A Pew Research Center survey revealed that California is somewhat less religious than the rest of the states: 62 percent of Californians say they are "absolutely certain" of their belief in God, while in the nation 71 percent say so. The survey also revealed 48 percent of Californians say religion is "very important", compared to 56 percent nationally. Culture The culture of California is a Western culture and most clearly has its modern roots in the culture of the United States, but also, historically, many Hispanic Californio and Mexican influences. As a border and coastal state, California culture has been greatly influenced by several large immigrant populations, especially those from Latin America and Asia. California has long been a subject of interest in the public mind and has often been promoted by its boosters as a kind of paradise. In the early 20th century, fueled by the efforts of state and local boosters, many Americans saw the Golden State as an ideal resort destination, sunny and dry all year round with easy access to the ocean and mountains. In the 1960s, popular music groups such as the Beach Boys promoted the image of Californians as laid-back, tanned beach-goers. The California Gold Rush of the 1850s is still seen as a symbol of California's economic style, which tends to generate technology, social, entertainment, and economic fads and booms and related busts. Media and entertainment Hollywood and the rest of the Los Angeles area is a major global center for entertainment, with the U.S. film industry's "Big Five" major film studios (Columbia, Disney, Paramount, Universal, and Warner Bros.) as well as many minor film studios being based in or around the area. Many animation studios are also headquartered in the state. The four major American television commercial broadcast networks (ABC, CBS, NBC, and Fox) as well as other networks all have production facilities and offices in the state. All the four major commercial broadcast networks, plus the two major Spanish-language networks (Telemundo and Univision) each have at least three owned-and-operated TV stations in California, including at least one in Los Angeles and at least one in San Francisco. One of the oldest radio stations in the United States still in existence, KCBS (AM) in the San Francisco Bay Area, was founded in 1909. Universal Music Group, one of the "Big Four" record labels, is based in Santa Monica, while Warner Records is based in Los Angeles. Many independent record labels, such as Mind of a Genius Records, are also headquartered in the state. California is also the birthplace of several international music genres, including the Bakersfield sound, Bay Area thrash metal, alternative rock, g-funk, nu metal, glam metal, thrash metal, psychedelic rock, stoner rock, punk rock, hardcore punk, metalcore, pop punk, surf music, third wave ska, west coast hip hop, west coast jazz, jazz rap, and many other genres. 
Other genres such as pop rock, indie rock, hard rock, hip hop, pop, rock, rockabilly, country, heavy metal, grunge, new wave and disco were popularized in the state. In addition, many British bands, such as Led Zeppelin, Deep Purple, Black Sabbath, and the Rolling Stones, settled in the state after becoming internationally famous. As the home of Silicon Valley, the Bay Area is the headquarters of several prominent internet media, social media, and other technology companies. Three of the "Big Five" technology companies (Apple, Meta, and Google) are based in the area, as are other services such as Netflix, Pandora Radio, Twitter, Yahoo!, and YouTube. Other prominent companies that are headquartered here include HP Inc. and Intel. Microsoft and Amazon also have offices in the area. California, particularly Southern California, is considered the birthplace of modern car culture. Several fast food, fast casual, and casual dining chains were also founded in California, including some that have since expanded internationally like California Pizza Kitchen, Denny's, IHOP, McDonald's, Panda Express, and Taco Bell. Sports California has nineteen major professional sports league franchises, far more than any other state. The San Francisco Bay Area has six major league teams spread across its three major cities: San Francisco, San Jose, and Oakland, while the Greater Los Angeles Area is home to ten major league franchises. San Diego and Sacramento each have one major league team. The NFL Super Bowl has been hosted in California 12 times at five different stadiums: Los Angeles Memorial Coliseum, the Rose Bowl, Stanford Stadium, Levi's Stadium, and San Diego's Qualcomm Stadium. A thirteenth, Super Bowl LVI, was held at SoFi Stadium in Inglewood on February 13, 2022. California has long had many respected collegiate sports programs. California is home to the oldest college bowl game, the annual Rose Bowl, among others. The NFL has three teams in the state: the Los Angeles Rams, Los Angeles Chargers, and San Francisco 49ers. MLB has five teams in the state: the San Francisco Giants, Oakland Athletics, Los Angeles Dodgers, Los Angeles Angels, and San Diego Padres. The NBA has four teams in the state: the Golden State Warriors, Los Angeles Clippers, Los Angeles Lakers, and Sacramento Kings. Additionally, the WNBA has one team in the state: the Los Angeles Sparks. The NHL has three teams in the state: the Anaheim Ducks, Los Angeles Kings, and San Jose Sharks. MLS has three teams in the state: the Los Angeles Galaxy, San Jose Earthquakes, and Los Angeles Football Club. MLR has one team in the state: the San Diego Legion. California is the only U.S. state to have hosted both the Summer and Winter Olympics. The 1932 and 1984 summer games were held in Los Angeles. Squaw Valley Ski Resort (now Palisades Tahoe) in the Lake Tahoe region hosted the 1960 Winter Olympics. Los Angeles will host the 2028 Summer Olympics, marking the fourth time that California will have hosted the Olympic Games. Multiple games during the 1994 FIFA World Cup took place in California, with the Rose Bowl hosting eight matches (including the final), while Stanford Stadium hosted six matches. In addition to the Olympic games, California also hosts the California State Games. Many sports, such as surfing, snowboarding, and skateboarding, were invented in California, while others like volleyball, beach soccer, and skiing were popularized in the state. 
Other sports that are big in the state include golf, rodeo, tennis, mountain climbing, marathon running, horse racing, bowling, mixed martial arts, boxing, and motorsports, especially NASCAR and Formula One.

Education

California has the most school students in the country, with over 6.2 million in the 2005–06 school year, giving California more students in school than 36 states have in total population and one of the highest projected enrollments in the country. Public secondary education consists of high schools that teach elective courses in trades, languages, and liberal arts with tracks for gifted, college-bound and industrial arts students. California's public educational system is supported by a unique constitutional amendment that requires a minimum annual funding level for grades K–12 and community colleges that grows with the economy and student enrollment figures. In 2016, California's K–12 public school per-pupil spending was ranked 22nd in the nation ($11,500 per student vs. $11,800 for the U.S. average). For 2012, California's K–12 public schools ranked 48th in the number of employees per student, at 0.102 (the U.S. average was 0.137), while paying the 7th most per employee, $49,000 (the U.S. average was $39,000). A 2007 study concluded that California's public school system was "broken" in that it suffered from overregulation.

Higher education

California public postsecondary education is organized into three separate systems:

The state's public research university system is the University of California (UC). As of fall 2011, the University of California had a combined student body of 234,464 students. There are ten UC campuses; nine are general campuses offering both undergraduate and graduate programs which culminate in the award of bachelor's degrees, master's degrees, and doctorates; there is one specialized campus, UC San Francisco, which is entirely dedicated to graduate education in health care, and is home to the UCSF Medical Center, the highest-ranked hospital in California. The system was originally intended to accept the top one-eighth of California high school students, but several of the campuses have become even more selective. The UC system historically held exclusive authority to award the doctorate, but this has since changed and CSU now has limited statutory authorization to award a handful of types of doctoral degrees independently of UC.

The California State University (CSU) system has almost 430,000 students. The CSU (which takes the definite article in its abbreviated form, while UC does not) was originally intended to accept the top one-third of California high school students, but several of the campuses have become much more selective. The CSU was originally authorized to award only bachelor's and master's degrees, and could award the doctorate only as part of joint programs with UC or private universities. Since then, CSU has been granted the authority to independently award several doctoral degrees (in specific academic fields that do not intrude upon UC's traditional jurisdiction).

The California Community Colleges system provides lower-division coursework culminating in the associate degree, as well as basic skills and workforce training culminating in various kinds of certificates. (Fifteen California community colleges now award four-year bachelor's degrees in disciplines which are in high demand in their geographical area.)
It is the largest network of higher education in the U.S., composed of 112 colleges serving a student population of over 2.6 million.

California is also home to notable private universities such as Stanford University, the California Institute of Technology (Caltech), the University of Southern California, the Claremont Colleges, Santa Clara University, Loyola Marymount University, the University of San Diego, the University of San Francisco, Chapman University, Pepperdine University, Occidental College, and University of the Pacific, among numerous other private colleges and universities, including many religious and special-purpose institutions. California has a particularly high density of arts colleges, including the California College of the Arts, California Institute of the Arts, San Francisco Art Institute, Art Center College of Design, and Academy of Art University, among others.

Economy

California's economy ranks among the largest in the world. The gross state product (GSP) was $3.6 trillion ($92,190 per capita), the largest in the United States. California is responsible for one-seventh of the nation's gross domestic product (GDP). California's nominal GDP is larger than that of all but four countries (the United States, China, Japan, and Germany). In terms of purchasing power parity (PPP), it is larger than that of all but eight countries (the United States, China, India, Japan, Germany, Russia, Brazil, and Indonesia). California's economy is larger than those of Africa and Australia and is almost as large as that of South America. The state recorded total, non-farm employment of 16,677,800 among 966,224 employer establishments.

As the largest and second-largest U.S. ports respectively, the Port of Los Angeles and the Port of Long Beach in Southern California collectively play a pivotal role in the global supply chain, together hauling in about 40% of all imports to the United States by TEU volume. The Port of Oakland and Port of Hueneme are the 10th and 26th largest seaports in the U.S., respectively, by number of TEUs handled.

The five largest sectors of employment in California are trade, transportation, and utilities; government; professional and business services; education and health services; and leisure and hospitality. In output, the five largest sectors are financial services, followed by trade, transportation, and utilities; education and health services; government; and manufacturing. California has an unemployment rate of 3.9%. California's economy is dependent on trade, and internationally related commerce accounts for about one-quarter of the state's economy. In 2008, California exported $144 billion worth of goods, up from $134 billion in 2007 and $127 billion in 2006. Computers and electronic products are California's top export, accounting for 42 percent of all the state's exports in 2008.

Agriculture

Agriculture is an important sector in California's economy. According to the USDA in 2011, the three largest California agricultural products by value were milk and cream, shelled almonds, and grapes. Farming-related sales more than quadrupled over the past three decades, from $7.3 billion in 1974 to nearly $31 billion in 2004. This increase has occurred despite a 15 percent decline in acreage devoted to farming during the period, and a water supply suffering from chronic instability. Factors contributing to the growth in sales per acre include more intensive use of active farmlands and technological improvements in crop production.
In 2008, California's 81,500 farms and ranches generated $36.2 billion in products revenue. In 2011, that number grew to $43.5 billion in products revenue. The agriculture sector accounts for two percent of the state's GDP and employs around three percent of its total workforce.

Income

Per capita GDP in 2007 was $38,956, ranking eleventh in the nation. Per capita income varies widely by geographic region and profession. The Central Valley is the most impoverished, with migrant farm workers making less than minimum wage. According to a 2005 report by the Congressional Research Service, the San Joaquin Valley was characterized as one of the most economically depressed regions in the United States, on par with the region of Appalachia. Using the supplemental poverty measure, California has a poverty rate of 23.5%, the highest of any state in the country. However, using the official measure, the poverty rate was only 13.3% as of 2017. Many coastal cities include some of the wealthiest per-capita areas in the United States. The high-technology sectors in Northern California, specifically Silicon Valley, in Santa Clara and San Mateo counties, have emerged from the economic downturn caused by the dot-com bust. In 2019, there were 1,042,027 millionaire households in the state, more than any other state in the nation. In 2010, California residents were ranked first among the states with the best average credit score of 754.

State finances

State spending increased from $56 billion in 1998 to $127 billion in 2011. California has the third highest per capita spending on welfare among the states, as well as the highest spending on welfare at $6.67 billion. In January 2011, California's total debt was at least $265 billion. On June 27, 2013, Governor Jerry Brown signed a balanced budget (no deficit) for the state, its first in decades; however, the state's debt remains at $132 billion.

With the passage of Proposition 30 in 2012 and Proposition 55 in 2016, California now levies a 13.3% maximum marginal income tax rate with ten tax brackets, ranging from 1% at the bottom tax bracket of $0 annual individual income to 13.3% for annual individual income over $1,000,000 (though the top brackets are only temporary until Proposition 55 expires at the end of 2030). While Proposition 30 also enacted a minimum state sales tax of 7.5%, this sales tax increase was not extended by Proposition 55 and reverted to a previous minimum state sales tax rate of 7.25% in 2017. Local governments can and do levy additional sales taxes in addition to this minimum rate. All real property is taxable annually; the ad valorem tax is based on the property's fair market value at the time of purchase or the value of new construction. Property tax increases are capped at 2% annually or the rate of inflation (whichever is lower), per Proposition 13.

Infrastructure

Energy

Because it is the most populous state in the United States, California is one of the country's largest users of energy. The state has extensive hydro-electric energy generation facilities; however, moving water is the single largest energy use in the state. Also, due to high energy rates, conservation mandates, mild weather in the largest population centers and a strong environmental movement, its per capita energy use is one of the smallest of any state in the United States.
Due to the high electricity demand, California imports more electricity than any other state, primarily hydroelectric power from states in the Pacific Northwest (via Path 15 and Path 66) and coal- and natural gas-fired production from the desert Southwest via Path 46. The state's crude oil and natural gas deposits are located in the Central Valley and along the coast, including the large Midway-Sunset Oil Field. Natural gas-fired power plants typically account for more than one-half of state electricity generation.

As a result of the state's strong environmental movement, California has some of the most aggressive renewable energy goals in the United States. Senate Bill 1020 (the Clean Energy, Jobs and Affordability Act of 2022) commits the state to running its operations on clean, renewable energy resources by 2035, and SB 1203 also requires the state to achieve net-zero operations for all agencies. Currently, several solar power plants such as the Solar Energy Generating Systems facility are located in the Mojave Desert. California's wind farms include Altamont Pass, San Gorgonio Pass, and Tehachapi Pass. The Tehachapi area is also where the Tehachapi Energy Storage Project is located. Several dams across the state provide hydro-electric power. It would be possible to convert the total supply to 100% renewable energy, including heating, cooling and mobility, by 2050.

California has one major nuclear power plant (Diablo Canyon) in operation. The San Onofre nuclear plant was shut down in 2013. More than 1,700 tons of radioactive waste are stored at San Onofre and sit on the coast, where there is a record of past tsunamis. Voters have banned the approval of new nuclear power plants since the late 1970s because of concerns over radioactive waste disposal. In addition, several cities such as Oakland, Berkeley and Davis have declared themselves nuclear-free zones.

Transportation

Highways

California's vast terrain is connected by an extensive system of controlled-access highways ('freeways'), limited-access roads ('expressways'), and highways. California is known for its car culture, giving California's cities a reputation for severe traffic congestion. Construction and maintenance of state roads and statewide transportation planning are primarily the responsibility of the California Department of Transportation, nicknamed "Caltrans". The rapidly growing population of the state is straining all of its transportation networks, and California has some of the worst roads in the United States. The Reason Foundation's 19th Annual Report on the Performance of State Highway Systems ranked California's highways the third-worst of any state, with Alaska second, and Rhode Island first.

The state has been a pioneer in road construction. One of the state's more visible landmarks, the Golden Gate Bridge, was the longest suspension bridge main span in the world between 1937 (when it opened) and 1964. With its orange paint and panoramic views of the bay, this highway bridge is a popular tourist attraction and also accommodates pedestrians and bicyclists. The San Francisco–Oakland Bay Bridge (often abbreviated the "Bay Bridge"), completed in 1936, transports about 280,000 vehicles per day on two decks. Its two sections meet at Yerba Buena Island through the world's largest diameter transportation bore tunnel, at wide by high. The Arroyo Seco Parkway, connecting Los Angeles and Pasadena, opened in 1940 as the first freeway in the Western United States.
It was later extended south to the Four Level Interchange in downtown Los Angeles, regarded as the first stack interchange ever built.

The California Highway Patrol is the largest statewide police agency in the United States in employment, with more than 10,000 employees. They are responsible for providing any police-sanctioned service to anyone on California's state-maintained highways and on state property.

By the end of 2021, 30,610,058 people in California held a California Department of Motor Vehicles-issued driver's license or state identification card, and there were 36,229,205 registered vehicles, including 25,643,076 automobiles, 853,368 motorcycles, 8,981,787 trucks and trailers, and 121,716 miscellaneous vehicles (including historical vehicles and farm equipment).

Air travel

Los Angeles International Airport (LAX), the 4th busiest airport in the world in 2018, and San Francisco International Airport (SFO), the 25th busiest airport in the world in 2018, are major hubs for trans-Pacific and transcontinental traffic. There are about a dozen important commercial airports and many more general aviation airports throughout the state.

Railroads

Inter-city rail travel is provided by Amtrak California; the three routes, the Capitol Corridor, Pacific Surfliner, and San Joaquin, are funded by Caltrans. These services are the busiest intercity rail lines in the United States outside the Northeast Corridor, and ridership is continuing to set records. The routes are becoming increasingly popular alternatives to flying, especially on the LAX-SFO route. Integrated subway and light rail networks are found in Los Angeles (Metro Rail) and San Francisco (MUNI Metro). Light rail systems are also found in San Jose (VTA), San Diego (San Diego Trolley), Sacramento (RT Light Rail), and Northern San Diego County (Sprinter). Furthermore, commuter rail networks serve the San Francisco Bay Area (ACE, BART, Caltrain, SMART), Greater Los Angeles (Metrolink), and San Diego County (Coaster).

The California High-Speed Rail Authority was authorized in 1996 by the state legislature to plan a California High-Speed Rail system to put before the voters. The plan they devised, 2008 California Proposition 1A, connecting all the major population centers in the state, was approved by the voters at the November 2008 general election. The first phase of construction was begun in 2015, and the first segment, long, is planned to be put into operation by the end of 2030. Planning and work on the rest of the system are continuing, though funding for completing it remains an ongoing issue. California's 2023 integrated passenger rail master plan includes a high-speed rail system.

Buses

Nearly all counties operate bus lines, and many cities operate their own city bus lines as well. Intercity bus travel is provided by Greyhound, Megabus, and Amtrak Thruway Motorcoach.

Water

California's interconnected water system is the world's largest, managing over of water per year, centered on six main systems of aqueducts and infrastructure projects. Water use and conservation in California is a politically divisive issue, as the state experiences periodic droughts and has to balance the demands of its large agricultural and urban sectors, especially in the arid southern portion of the state. The state's widespread redistribution of water also invites the frequent scorn of environmentalists.
The California Water Wars, a conflict between Los Angeles and the Owens Valley over water rights, is one of the most well-known examples of the struggle to secure adequate water supplies. Former California Governor Arnold Schwarzenegger said: "We've been in crisis for quite some time because we're now 38 million people and not anymore 18 million people like we were in the late 60s. So it developed into a battle between environmentalists and farmers and between the south and the north and between rural and urban. And everyone has been fighting for the last four decades about water."

Government and politics

State government

The capital city of California is Sacramento. The state is organized into three branches of government: the executive branch consisting of the governor and the other independently elected constitutional officers; the legislative branch consisting of the Assembly and Senate; and the judicial branch consisting of the Supreme Court of California and lower courts. The state also allows ballot propositions: direct participation of the electorate by initiative, referendum, recall, and ratification. Before the passage of Proposition 14 in 2010, California allowed each political party to choose whether to have a closed primary or a primary where only party members and independents vote. After June 8, 2010, when Proposition 14 was approved, excepting only the United States president and county central committee offices, all candidates in the primary elections are listed on the ballot with their preferred party affiliation, but they are not the official nominee of that party. At the primary election, the two candidates with the top votes will advance to the general election regardless of party affiliation. If, at a special primary election, one candidate receives more than 50% of all the votes cast, they are elected to fill the vacancy and no special general election will be held.

Executive branch

The California executive branch consists of the governor and seven other elected constitutional officers: lieutenant governor, attorney general, secretary of state, state controller, state treasurer, insurance commissioner, and state superintendent of public instruction. They serve four-year terms and may be re-elected only once. The many California state agencies that are under the governor's cabinet are grouped together to form cabinet-level entities that are referred to by government officials as "superagencies". Those departments that are directly under the other independently elected officers work separately from these superagencies.

Legislative branch

The California State Legislature consists of a 40-member Senate and 80-member Assembly. Senators serve four-year terms and Assembly members two. Members of the Assembly are subject to term limits of six terms, and members of the Senate are subject to term limits of three terms.

Judicial branch

California's legal system is explicitly based upon English common law but carries many features from Spanish civil law, such as community property. California's prison population grew from 25,000 in 1980 to over 170,000 in 2007. Capital punishment is a legal form of punishment, and the state has the largest "Death Row" population in the country (though Oklahoma and Texas are far more active in carrying out executions). California has performed 13 executions since 1976, with the last being in 2006. California's judiciary system is the largest in the United States, with a total of 1,600 judges (the federal system has only about 840).
At the apex is the seven-member Supreme Court of California, while the California Courts of Appeal serve as the primary appellate courts and the California Superior Courts serve as the primary trial courts. Justices of the Supreme Court and Courts of Appeal are appointed by the governor, but are subject to retention by the electorate every 12 years. The administration of the state's court system is controlled by the Judicial Council, composed of the chief justice of the California Supreme Court, 14 judicial officers, four representatives from the State Bar of California, and one member from each house of the state legislature. In fiscal year 2020–2021, the state judiciary's 2,000 judicial officers and 18,000 judicial branch employees processed approximately 4.4 million cases.

Local government

California has an extensive system of local government that manages public functions throughout the state. Like most states, California is divided into counties, of which there are 58 (including San Francisco) covering the entire state. Most urbanized areas are incorporated as cities. School districts, which are independent of cities and counties, handle public education. Many other functions, such as fire protection and water supply, especially in unincorporated areas, are handled by special districts.

Counties

California is divided into 58 counties. Per Article 11, Section 1, of the Constitution of California, they are the legal subdivisions of the state. The county government provides countywide services such as law enforcement, jails, elections and voter registration, vital records, property assessment and records, tax collection, public health, health care, social services, libraries, flood control, fire protection, animal control, agricultural regulations, building inspections, ambulance services, and education departments in charge of maintaining statewide standards. In addition, the county serves as the local government for all unincorporated areas. Each county is governed by an elected board of supervisors.

City and town governments

Incorporated cities and towns in California are either charter or general-law municipalities. General-law municipalities owe their existence to state law and are consequently governed by it; charter municipalities are governed by their own city or town charters. Municipalities incorporated in the 19th century tend to be charter municipalities. All ten of the state's most populous cities are charter cities. Most small cities have a council–manager form of government, where the elected city council appoints a city manager to supervise the operations of the city. Some larger cities have a directly elected mayor who oversees the city government. In many council-manager cities, the city council selects one of its members as a mayor, sometimes rotating through the council membership, but this type of mayoral position is primarily ceremonial. The Government of San Francisco is the only consolidated city-county in California, where both the city and county governments have been merged into one unified jurisdiction.

School districts and special districts

About 1,102 school districts, independent of cities and counties, handle California's public education. California school districts may be organized as elementary districts, high school districts, unified school districts combining elementary and high school grades, or community college districts. There are about 3,400 special districts in California.
A special district, defined by California Government Code § 16271(d) as "any agency of the state for the local performance of governmental or proprietary functions within limited boundaries", provides a limited range of services within a defined geographic area. The geographic area of a special district can spread across multiple cities or counties, or could consist of only a portion of one. Most of California's special districts are single-purpose districts, and provide one service.

Federal representation

The state of California sends 52 members to the House of Representatives, the nation's largest congressional state delegation. Consequently, California also has the largest number of electoral votes in national presidential elections, with 54. The former speaker of the House of Representatives is the representative of California's 20th district, Kevin McCarthy.

California is represented by U.S. senator Alex Padilla, a native and former secretary of state of California; its class 1 Senate seat is currently vacant following the death of Dianne Feinstein. Former U.S. senator Kamala Harris, a native, former district attorney of San Francisco, and former attorney general of California, resigned on January 18, 2021, to assume her role as the current Vice President of the United States. In the 1992 U.S. Senate election, California became the first state to elect a Senate delegation entirely composed of women, due to the victories of Feinstein and Barbara Boxer. To fill the seat of the vice president-elect, Governor Newsom appointed Secretary of State Alex Padilla to finish the rest of Harris's term, which ended in 2022; Padilla vowed to run for the full term in that election cycle. Padilla was sworn in on January 20, 2021, the same day as the inauguration of Joe Biden and Harris.

Armed forces

In California, the U.S. Department of Defense had a total of 117,806 active duty servicemembers, of which 88,370 were Sailors or Marines, 18,339 were Airmen, and 11,097 were Soldiers, with 61,365 Department of Defense civilian employees. Additionally, there were a total of 57,792 Reservists and Guardsmen in California.

In 2010, Los Angeles County was the largest origin of military recruits in the United States by county, with 1,437 individuals enlisting in the military. However, Californians were relatively under-represented in the military as a proportion of the state's population.

In 2000, California had 2,569,340 veterans of United States military service: 504,010 served in World War II, 301,034 in the Korean War, 754,682 during the Vietnam War, and 278,003 during 1990–2000 (including the Persian Gulf War). There were 1,942,775 veterans living in California, of whom 1,457,875 served during a period of armed conflict, and just over four thousand served before World War II (the largest population of this group of any state).

California's military forces consist of the Army and Air National Guard, the naval and state military reserve (militia), and the California Cadet Corps. On August 5, 1950, a nuclear-capable United States Air Force Boeing B-29 Superfortress bomber carrying a nuclear bomb crashed shortly after takeoff from Fairfield-Suisun Air Force Base. Brigadier General Robert F. Travis, command pilot of the bomber, was among the dead.

Ideology

California has an idiosyncratic political culture compared to the rest of the country, and is sometimes regarded as a trendsetter.
In socio-cultural mores and national politics, Californians are perceived as more liberal than other Americans, especially those who live in the inland states. In the 2016 United States presidential election, California had the third highest percentage of Democratic votes behind the District of Columbia and Hawaii. In the 2020 United States presidential election, it had the 6th highest behind the District of Columbia, Vermont, Massachusetts, Maryland, and Hawaii. According to the Cook Political Report, California contains five of the 15 most Democratic congressional districts in the United States.

Among the political idiosyncrasies, California was the second state to recall its state governor (the first state being North Dakota in 1921), the second state to legalize abortion, and the only state to ban marriage for gay couples twice by vote (including Proposition 8 in 2008). Voters also passed Proposition 71 in 2004 to fund stem cell research, making California the second state to legalize stem cell research after New Jersey, and Proposition 14 in 2010 to completely change the state's primary election process. California has also experienced disputes over water rights and a tax revolt, culminating in the passage of Proposition 13 in 1978, which limited state property taxes. California voters have rejected affirmative action on multiple occasions, most recently in November 2020.

The state's trend towards the Democratic Party and away from the Republican Party can be seen in state elections. From 1899 to 1939, California had Republican governors. Since 1990, California has generally elected Democratic candidates to federal, state and local offices, including current Governor Gavin Newsom; however, the state has elected Republican governors, though many of them, such as Arnold Schwarzenegger, tend to be considered moderate Republicans and more centrist than the national party.

Several political movements have advocated for California independence. The California National Party and the California Freedom Coalition both advocate for California independence along the lines of progressivism and civic nationalism. The Yes California movement attempted to organize an independence referendum via ballot initiative for 2019, which was then postponed.

The Democrats also now hold a supermajority in both houses of the state legislature. There are 62 Democrats and 18 Republicans in the Assembly, and 32 Democrats and 8 Republicans in the Senate.

The trend towards the Democratic Party is most obvious in presidential elections. From 1952 through 1988, California was a Republican-leaning state, with the party carrying the state's electoral votes in nine of ten elections, with 1964 as the exception. Southern California Republicans Richard Nixon and Ronald Reagan were both elected twice as the 37th and 40th U.S. Presidents, respectively. However, Democrats have won all of California's electoral votes for the last eight elections, starting in 1992.

In the United States House, the Democrats held a 34–19 edge in the CA delegation of the 110th United States Congress in 2007. As the result of gerrymandering, the districts in California were usually dominated by one or the other party, and few districts were considered competitive. In 2010, Californians passed Proposition 20 to empower a 14-member independent citizen commission to redraw districts for both local politicians and Congress.
After the 2012 elections, when the new system took effect, Democrats gained four seats and held a 38–15 majority in the delegation. Following the 2018 midterm House elections, Democrats won 46 out of 53 congressional house seats in California, leaving Republicans with seven.

In general, Democratic strength is centered in the populous coastal regions of the Los Angeles metropolitan area and the San Francisco Bay Area. Republican strength is still greatest in eastern parts of the state. Orange County had remained largely Republican until the 2016 and 2018 elections, in which a majority of the county's votes were cast for Democratic candidates. One study ranked Berkeley, Oakland, Inglewood and San Francisco in the top 20 most liberal American cities; and Bakersfield, Orange, Escondido, Garden Grove, and Simi Valley in the top 20 most conservative cities.

In October 2022, out of the 26,876,800 people eligible to vote, 21,940,274 people were registered to vote. Of the people registered, the three largest registered groups were Democrats (10,283,258), Republicans (5,232,094), and No Party Preference (4,943,696). Los Angeles County had the largest number of registered Democrats (2,996,565) and Republicans (958,851) of any county in the state.

California retains the death penalty, though it has not been used since 2006. There is currently a gubernatorial hold on executions. Authorized methods of execution include the gas chamber.

Twinned regions

California has region twinning arrangements with:
Catalonia, autonomous community of Spain
Alberta, province of Canada
Jeju Province of South Korea
Guangdong, province of China

See also

Index of California-related articles
Outline of California
List of people from California

Further reading

Matthews, Glenna. The Golden State in the Civil War: Thomas Starr King, the Republican Party, and the Birth of Modern California. New York: Cambridge University Press, 2012.

External links

State of California
California State Guide, from the Library of Congress
data.ca.gov: open data portal from California state agencies
California State Facts from USDA
California Drought: Farm and Food Impacts from USDA, Economic Research Service
1973 documentary featuring aerial views of the California coastline from Mt. Shasta to Los Angeles
Early City Views (Los Angeles)
5408
https://en.wikipedia.org/wiki/Columbia%20River
Columbia River
The Columbia River (Upper Chinook: or ; Sahaptin: Nch’i-Wàna or Nchi wana; Sinixt dialect ) is the largest river in the Pacific Northwest region of North America. The river forms in the Rocky Mountains of British Columbia, Canada. It flows northwest and then south into the U.S. state of Washington, then turns west to form most of the border between Washington and the state of Oregon before emptying into the Pacific Ocean. The river is long, and its largest tributary is the Snake River. Its drainage basin is roughly the size of France and extends into seven states of the United States and one Canadian province. The fourth-largest river in the United States by volume, the Columbia has the greatest flow of any North American river entering the Pacific. The Columbia has the 36th greatest discharge of any river in the world.

The Columbia and its tributaries have been central to the region's culture and economy for thousands of years. They have been used for transportation since ancient times, linking the region's many cultural groups. The river system hosts many species of anadromous fish, which migrate between freshwater habitats and the saline waters of the Pacific Ocean. These fish—especially the salmon species—provided the core subsistence for native peoples.

The first documented European discovery of the Columbia River occurred when Bruno de Heceta sighted the river's mouth in 1775. On May 11, 1792, a private American ship, Columbia Rediviva, under Captain Robert Gray from Boston became the first non-indigenous vessel to enter the river. Later in 1792, William Robert Broughton of the British Royal Navy commanding HMS Chatham as part of the Vancouver Expedition, navigated past the Oregon Coast Range and 100 miles upriver to what is now Vancouver, Washington. In the following decades, fur-trading companies used the Columbia as a key transportation route. Overland explorers entered the Willamette Valley through the scenic, but treacherous Columbia River Gorge, and pioneers began to settle the valley in increasing numbers. Steamships along the river linked communities and facilitated trade; the arrival of railroads in the late 19th century, many running along the river, supplemented these links.

Since the late 19th century, public and private sectors have extensively developed the river. To aid ship and barge navigation, locks have been built along the lower Columbia and its tributaries, and dredging has opened, maintained, and enlarged shipping channels. Since the early 20th century, dams have been built across the river for power generation, navigation, irrigation, and flood control. The 14 hydroelectric dams on the Columbia's main stem and many more on its tributaries produce more than 44 percent of total U.S. hydroelectric generation. Production of nuclear power has taken place at two sites along the river. Plutonium for nuclear weapons was produced for decades at the Hanford Site, which is now the most contaminated nuclear site in the United States. These developments have greatly altered river environments in the watershed, mainly through industrial pollution and barriers to fish migration.

Course

The Columbia begins its journey in the southern Rocky Mountain Trench in British Columbia (BC). Columbia Lake above sea level and the adjoining Columbia Wetlands form the river's headwaters. The trench is a broad, deep, and long glacial valley between the Canadian Rockies and the Columbia Mountains in BC.
For its first , the Columbia flows northwest along the trench through Windermere Lake and the town of Invermere, a region known in BC as the Columbia Valley, then northwest to Golden and into Kinbasket Lake. Rounding the northern end of the Selkirk Mountains, the river turns sharply south through a region known as the Big Bend Country, passing through Revelstoke Lake and the Arrow Lakes. Revelstoke, the Big Bend, and the Columbia Valley combined are referred to in BC parlance as the Columbia Country. Below the Arrow Lakes, the Columbia passes the cities of Castlegar, located at the Columbia's confluence with the Kootenay River, and Trail, two major population centers of the West Kootenay region. The Pend Oreille River joins the Columbia about north of the United States–Canada border.

The Columbia enters eastern Washington flowing south and turning to the west at the Spokane River confluence. It marks the southern and eastern borders of the Colville Indian Reservation and the western border of the Spokane Indian Reservation. The river turns south after the Okanogan River confluence, then southeasterly near the confluence with the Wenatchee River in central Washington. This C-shaped segment of the river is also known as the "Big Bend". During the Missoula Floods 10–15,000 years ago, much of the floodwater took a more direct route south, forming the ancient river bed known as the Grand Coulee. After the floods, the river found its present course, and the Grand Coulee was left dry. The construction of the Grand Coulee Dam in the mid-20th century impounded the river, forming Lake Roosevelt, from which water was pumped into the dry coulee, forming the reservoir of Banks Lake.

The river flows past The Gorge Amphitheatre, a prominent concert venue in the Northwest, then through Priest Rapids Dam, and then through the Hanford Nuclear Reservation. Entirely within the reservation is Hanford Reach, the only U.S. stretch of the river that is completely free-flowing, unimpeded by dams, and not a tidal estuary. The Snake River and Yakima River join the Columbia in the Tri-Cities population center.

The Columbia makes a sharp bend to the west at the Washington–Oregon border. The river defines that border for the final of its journey. The Deschutes River joins the Columbia near The Dalles. Between The Dalles and Portland, the river cuts through the Cascade Range, forming the dramatic Columbia River Gorge. No other rivers except for the Klamath and Pit River completely breach the Cascades; the other rivers that flow through the range also originate in or very near the mountains. The headwaters and upper course of the Pit River are on the Modoc Plateau; downstream, the Pit cuts a canyon through the southern reaches of the Cascades. In contrast, the Columbia cuts through the range nearly a thousand miles from its source in the Rocky Mountains. The gorge is known for its strong and steady winds, scenic beauty, and its role as an important transportation link.

The river continues west, bending sharply to the north-northwest near Portland and Vancouver, Washington, at the Willamette River confluence. Here the river slows considerably, dropping sediment that might otherwise form a river delta. Near Longview, Washington and the Cowlitz River confluence, the river turns west again. The Columbia empties into the Pacific Ocean just west of Astoria, Oregon, over the Columbia Bar, a shifting sandbar that makes the river's mouth one of the most hazardous stretches of water to navigate in the world.
Because of the danger and the many shipwrecks near the mouth, it acquired a reputation as the "Graveyard of Ships".

The Columbia drains an area of about . Its drainage basin covers nearly all of Idaho, large portions of British Columbia, Oregon, and Washington, and ultimately all of Montana west of the Continental Divide, and small portions of Wyoming, Utah, and Nevada; the total area is similar to the size of France. Roughly of the river's length and 85 percent of its drainage basin are in the US. The Columbia is the twelfth-longest river and has the sixth-largest drainage basin in the United States. In Canada, where the Columbia flows for and drains , the river ranks 23rd in length, and the Canadian part of its basin ranks 13th in size among Canadian basins. The Columbia shares its name with nearby places, such as British Columbia, as well as with landforms and bodies of water.

Discharge

With an average flow at the mouth of about , the Columbia is the largest river by discharge flowing into the Pacific from the Americas and is the fourth-largest by volume in the U.S. The average flow where the river crosses the international border between Canada and the United States is from a drainage basin of . This amounts to about 15 percent of the entire Columbia watershed. The Columbia's highest recorded flow, measured at The Dalles, was in June 1894, before the river was dammed. The lowest flow recorded at The Dalles was on April 16, 1968, and was caused by the initial closure of the John Day Dam, upstream. The Dalles is about from the mouth; the river at this point drains about or about 91 percent of the total watershed. Flow rates on the Columbia are affected by many large upstream reservoirs, many diversions for irrigation, and, on the lower stretches, reverse flow from the tides of the Pacific Ocean. The National Ocean Service observes water levels at six tide gauges and issues tide forecasts for twenty-two additional locations along the river between the entrance at the North Jetty and the base of Bonneville Dam, its head of tide.

Geology

When the rifting of Pangaea, due to the process of plate tectonics, pushed North America away from Europe and Africa and into the Panthalassic Ocean (ancestor to the modern Pacific Ocean), the Pacific Northwest was not part of the continent. As the North American continent moved westward, the Farallon Plate subducted under its western margin. As the plate subducted, it carried along island arcs which were accreted to the North American continent, resulting in the creation of the Pacific Northwest between 150 and 90 million years ago. The general outline of the Columbia Basin was not complete until between 60 and 40 million years ago, but it lay under a large inland sea later subject to uplift. Between 50 and 20 million years ago, from the Eocene through the Miocene epochs, tremendous volcanic eruptions frequently modified much of the landscape traversed by the Columbia. The lower reaches of the ancestral river passed through a valley near where Mount Hood later arose. Carrying sediments from erosion and erupting volcanoes, it built a thick delta that underlies the foothills on the east side of the Coast Range near Vernonia in northwestern Oregon. Between 17 million and 6 million years ago, huge outpourings of flood basalt lava covered the Columbia River Plateau and forced the lower Columbia into its present course. The modern Cascade Range began to uplift 5 to 4 million years ago.
Cutting through the uplifting mountains, the Columbia River significantly deepened the Columbia River Gorge.

The river and its drainage basin experienced some of the world's greatest known catastrophic floods toward the end of the last ice age. The periodic rupturing of ice dams at Glacial Lake Missoula resulted in the Missoula Floods, with discharges exceeding the combined flow of all the other rivers in the world, dozens of times over thousands of years. The exact number of floods is unknown, but geologists have documented at least 40; evidence suggests that they occurred between about 19,000 and 13,000 years ago. The floodwaters rushed across eastern Washington, creating the channeled scablands, which are a complex network of dry canyon-like channels, or coulees that are often braided and sharply gouged into the basalt rock underlying the region's deep topsoil. Numerous flat-topped buttes with rich soil stand high above the chaotic scablands. Constrictions at several places caused the floodwaters to pool into large temporary lakes, such as Lake Lewis, in which sediments were deposited. Water depths have been estimated at at Wallula Gap and over modern Portland, Oregon. Sediments were also deposited when the floodwaters slowed in the broad flats of the Quincy, Othello, and Pasco Basins. The floods' periodic inundation of the lower Columbia River Plateau deposited rich sediments; 21st-century farmers in the Willamette Valley "plow fields of fertile Montana soil and clays from Washington's Palouse".

Over the last several thousand years a series of large landslides have occurred on the north side of the Columbia River Gorge, sending massive amounts of debris south from Table Mountain and Greenleaf Peak into the gorge near the present site of Bonneville Dam. The most recent and significant is known as the Bonneville Slide, which formed a massive earthen dam, filling of the river's length. Various studies have placed the date of the Bonneville Slide anywhere between 1060 and 1760 AD; the idea that the landslide debris present today was formed by more than one slide is relatively recent and may explain the large range of estimates. It has been suggested that if the later dates are accurate there may be a link with the 1700 Cascadia earthquake. The pile of debris resulting from the Bonneville Slide blocked the river until rising water finally washed away the sediment. It is not known how long it took the river to break through the barrier; estimates range from several months to several years. Much of the landslide's debris remained, forcing the river about south of its previous channel and forming the Cascade Rapids. In 1938, the construction of Bonneville Dam inundated the rapids as well as the remaining trees that could be used to refine the estimated date of the landslide.

In 1980, the eruption of Mount St. Helens deposited large amounts of sediment in the lower Columbia, temporarily reducing the depth of the shipping channel by .

Indigenous peoples

Humans have inhabited the Columbia's watershed for more than 15,000 years, with a transition to a sedentary lifestyle based mainly on salmon starting about 3,500 years ago. In 1962, archaeologists found evidence of human activity dating back 11,230 years at the Marmes Rockshelter, near the confluence of the Palouse and Snake rivers in eastern Washington. In 1996 the skeletal remains of a 9,000-year-old prehistoric man (dubbed Kennewick Man) were found near Kennewick, Washington.
The discovery rekindled debate in the scientific community over the origins of human habitation in North America and sparked a protracted controversy over whether the scientific or Native American community was entitled to possess and/or study the remains.

Many different Native Americans and First Nations peoples have a historical and continuing presence on the Columbia. South of the Canada–US border, the Colville, Spokane, Coeur d'Alene, Yakama, Nez Perce, Cayuse, Palus, Umatilla, Cowlitz, and the Confederated Tribes of Warm Springs live along the US stretch. Along the upper Snake River and Salmon River, the Shoshone-Bannock tribes are present. The Sinixt or Lakes people lived on the lower stretch of the Canadian portion, while above that the Shuswap people (Secwepemc in their own language) reckon the whole of the upper Columbia east to the Rockies as part of their territory. The Canadian portion of the Columbia Basin outlines the traditional homelands of the Canadian Kootenay–Ktunaxa.

The Chinook tribe, which is not federally recognized and lives near the lower Columbia River, calls it or in the Upper Chinook (Kiksht) language, and it is Nch’i-Wàna or Nchi wana to the Sahaptin (Ichishkíin Sɨ́nwit)-speaking peoples of its middle course in present-day Washington. The river is known as by the Sinixt people, who live in the area of the Arrow Lakes in the river's upper reaches in Canada. All three terms essentially mean "the big river".

Oral histories describe the formation and destruction of the Bridge of the Gods, a land bridge that connected the Oregon and Washington sides of the river in the Columbia River Gorge. The bridge, which aligns with geological records of the Bonneville Slide, was described in some stories as the result of a battle between gods, represented by Mount Adams and Mount Hood, in their competition for the affection of a goddess, represented by Mount St. Helens. Native American stories about the bridge differ in their details but agree in general that the bridge permitted increased interaction between tribes on the north and south sides of the river.

Horses, originally acquired from Spanish New Mexico, spread widely via native trade networks, reaching the Shoshone of the Snake River Plain by 1700. The Nez Perce, Cayuse, and Flathead people acquired their first horses around 1730. Along with horses came aspects of the emerging plains culture, such as equestrian and horse training skills, greatly increased mobility, hunting efficiency, trade over long distances, intensified warfare, the linking of wealth and prestige to horses and war, and the rise of large and powerful tribal confederacies. The Nez Perce and Cayuse kept large herds and made annual long-distance trips to the Great Plains for bison hunting, adopted the plains culture to a significant degree, and became the main conduit through which horses and the plains culture diffused into the Columbia River region. Other peoples acquired horses and aspects of the plains culture unevenly. The Yakama, Umatilla, Palus, Spokane, and Coeur d'Alene maintained sizable herds of horses and adopted some of the plains cultural characteristics, but fishing and fish-related economies remained important. Less affected groups included the Molala, Klickitat, Wenatchi, Okanagan, and Sinkiuse-Columbia peoples, who owned small numbers of horses and adopted few plains culture features. Some groups remained essentially unaffected, such as the Sanpoil and Nespelem people, whose culture remained centered on fishing.
Natives of the region encountered foreigners at several times and places during the 18th and 19th centuries. European and American vessels explored the coastal area around the mouth of the river in the late 18th century, trading with local natives. The contact would prove devastating to the Indian tribes; a large portion of their population was wiped out by a smallpox epidemic. Canadian explorer Alexander Mackenzie crossed what is now interior British Columbia in 1793. From 1805 to 1806, the Lewis and Clark Expedition entered the Oregon Country along the Clearwater and Snake rivers, and encountered numerous small settlements of natives. Their records recount tales of hospitable traders who were not above stealing small items from the visitors. They also noted brass teakettles, a British musket, and other artifacts that had been obtained in trade with coastal tribes. From the earliest contact with westerners, the natives of the mid- and lower Columbia were not tribal, but instead congregated in social units no larger than a village, and more often at a family level; these units would shift with the season as people moved about, following the salmon catch up and down the river's tributaries. Sparked by the 1847 Whitman Massacre, a number of violent battles were fought between American settlers and the region's natives. The subsequent Indian Wars, especially the Yakima War, decimated the native population and removed much land from native control. As years progressed, the right of natives to fish along the Columbia became the central issue of contention with the states, commercial fishers, and private property owners. The US Supreme Court upheld fishing rights in landmark cases in 1905 and 1918, as well as the 1974 case United States v. Washington, commonly called the Boldt Decision. Fish were central to the culture of the region's natives, both as sustenance and as part of their religious beliefs. Natives drew fish from the Columbia at several major sites, which also served as trading posts. Celilo Falls, located east of the modern city of The Dalles, was a vital hub for trade and the interaction of different cultural groups, being used for fishing and trading for 11,000 years. Prior to contact with westerners, villages along this stretch may have at times had a population as great as 10,000. The site drew traders from as far away as the Great Plains. The Cascades Rapids of the Columbia River Gorge, and Kettle Falls and Priest Rapids in eastern Washington, were also major fishing and trading sites. In prehistoric times the Columbia's salmon and steelhead runs numbered an estimated annual average of 10 to 16 million fish. In comparison, the largest run since 1938 was in 1986, with 3.2 million fish entering the Columbia. The annual catch by natives has been estimated at . The most important and productive native fishing site was located at Celilo Falls, which was perhaps the most productive inland fishing site in North America. The falls were located at the border between Chinookan- and Sahaptian-speaking peoples and served as the center of an extensive trading network across the Pacific Plateau. Celilo was the oldest continuously inhabited community on the North American continent. Salmon canneries established by white settlers beginning in 1866 had a strong negative impact on the salmon population, and in 1908 US President Theodore Roosevelt observed that the salmon runs were but a fraction of what they had been 25 years prior. 
As river development continued in the 20th century, each of these major fishing sites was flooded by a dam, beginning with Cascades Rapids in 1938. The development was accompanied by extensive negotiations between natives and US government agencies. The Confederated Tribes of Warm Springs, a coalition of various tribes, adopted a constitution and incorporated after the 1938 completion of the Bonneville Dam flooded Cascades Rapids. Still, in the 1930s, there were natives who lived along the river and fished year-round, moving along with the fish's migration patterns throughout the seasons. The Yakama were slower to do so, organizing a formal government in 1944. In the 21st century, the Yakama, Nez Perce, Umatilla, and Warm Springs tribes all have treaty fishing rights along the Columbia and its tributaries.

In 1957 Celilo Falls was submerged by the construction of The Dalles Dam, and the native fishing community was displaced. The affected tribes received a $26.8 million settlement for the loss of Celilo and other fishing sites submerged by The Dalles Dam. The Confederated Tribes of Warm Springs used part of its $4 million settlement to establish the Kah-Nee-Ta resort south of Mount Hood.

New waves of explorers

Some historians believe that Japanese or Chinese vessels blown off course reached the Northwest Coast long before Europeans, possibly as early as 219 BCE. Historian Derek Hayes claims that "It is a near certainty that Japanese or Chinese people arrived on the northwest coast long before any European." It is unknown whether they landed near the Columbia. Evidence exists that Spanish castaways reached the shore in 1679 and traded with the Clatsop; if these were the first Europeans to see the Columbia, they failed to send word home to Spain.

In the 18th century, there was strong interest in discovering a Northwest Passage that would permit navigation between the Atlantic (or inland North America) and the Pacific Ocean. Many ships in the area, especially those under Spanish and British command, searched the northwest coast for a large river that might connect to Hudson Bay or the Missouri River. The first documented European discovery of the Columbia River was that of Bruno de Heceta, who in 1775 sighted the river's mouth. On the advice of his officers, he did not explore it, as he was short-staffed and the current was strong. He considered it a bay, and called it Ensenada de Asunción (Assumption Cove). Later Spanish maps, based on his sighting, showed a river, labeled Río de San Roque (The Saint Roch River), or an entrance, called Entrada de Hezeta, named for Bruno de Hezeta, who sailed the region. Following Hezeta's reports, British maritime fur trader Captain John Meares searched for the river in 1788 but concluded that it did not exist. He named Cape Disappointment for the non-existent river, not realizing the cape marks the northern edge of the river's mouth.

What happened next would form the basis for decades of both cooperation and dispute between British and American exploration of, and ownership claim to, the region. Royal Navy commander George Vancouver sailed past the mouth in April 1792 and observed a change in the water's color, but he accepted Meares' report and continued on his journey northward. Later that month, Vancouver encountered the American captain Robert Gray at the Strait of Juan de Fuca. Gray reported that he had seen the entrance to the Columbia and had spent nine days trying but failing to enter.
On May 12, 1792, Gray returned south and crossed the Columbia Bar, becoming the first known explorer of European descent to enter the river. Gray's fur trading mission had been financed by Boston merchants, who outfitted him with a private vessel named Columbia Rediviva; he named the river after the ship on May 18. Gray spent nine days trading near the mouth of the Columbia, then left without having gone beyond upstream. The farthest point reached was Grays Bay at the mouth of Grays River. Gray's discovery of the Columbia River was later used by the United States to support its claim to the Oregon Country, which was also claimed by Russia, Great Britain, Spain and other nations. In October 1792, Vancouver sent Lieutenant William Robert Broughton, his second-in-command, up the river. Broughton got as far as the Sandy River at the western end of the Columbia River Gorge, about upstream, sighting and naming Mount Hood. Broughton formally claimed the river, its drainage basin, and the nearby coast for Britain. In contrast, Gray had not made any formal claims on behalf of the United States. Because the Columbia was at the same latitude as the headwaters of the Missouri River, there was some speculation that Gray and Vancouver had discovered the long-sought Northwest Passage. A 1798 British map showed a dotted line connecting the Columbia with the Missouri. When the American explorers Meriwether Lewis and William Clark charted the vast, unmapped lands of the American West in their overland expedition (1803–1805), they found no passage between the rivers. After crossing the Rocky Mountains, Lewis and Clark built dugout canoes and paddled down the Snake River, reaching the Columbia near the present-day Tri-Cities, Washington. They explored a few miles upriver, as far as Bateman Island, before heading down the Columbia, concluding their journey at the river's mouth and establishing Fort Clatsop, a short-lived outpost occupied for less than three months. Canadian explorer David Thompson, of the North West Company, spent the winter of 1807–08 at Kootanae House near the source of the Columbia at present-day Invermere, BC. Over the next few years he explored much of the river and its northern tributaries. In 1811 he traveled down the Columbia to the Pacific Ocean, arriving at the mouth just after John Jacob Astor's Pacific Fur Company had founded Astoria. On his return to the north, Thompson explored the one remaining part of the river he had not yet seen, becoming the first person of European descent to travel the entire length of the river. In 1825, the Hudson's Bay Company (HBC) established Fort Vancouver on the bank of the Columbia, in what is now Vancouver, Washington, as the headquarters of the company's Columbia District, which encompassed everything west of the Rocky Mountains, north of California, and south of Russian-claimed Alaska. Chief Factor John McLoughlin, a physician who had been in the fur trade since 1804, was appointed superintendent of the Columbia District. The HBC reoriented its Columbia District operations toward the Pacific Ocean via the Columbia, which became the region's main trunk route. In the early 1840s Americans began to colonize the Oregon Country in large numbers via the Oregon Trail, despite the HBC's efforts to discourage American settlement in the region. For many the final leg of the journey involved travel down the lower Columbia River to Fort Vancouver. 
This part of the Oregon Trail, the treacherous stretch from The Dalles to below the Cascades, could not be traversed by horses or wagons (only watercraft, at great risk). This prompted the 1846 construction of the Barlow Road. In the Treaty of 1818 the United States and Britain agreed that both nations were to enjoy equal rights in Oregon Country for 10 years. By 1828, when the so-called "joint occupation" was renewed indefinitely, it seemed probable that the lower Columbia River would in time become the border between the two nations. For years the Hudson's Bay Company successfully maintained control of the Columbia River and fended off American attempts to gain a foothold. In the 1830s, American religious missions were established at several locations in the lower Columbia River region. In the 1840s a mass migration of American settlers undermined British control. The Hudson's Bay Company tried to maintain dominance by shifting from the fur trade, which was in decline, to exporting other goods such as salmon and lumber. Colonization schemes were attempted, but failed to match the scale of American settlement. Americans generally settled south of the Columbia, mainly in the Willamette Valley. The Hudson's Bay Company tried to establish settlements north of the river, but nearly all the British colonists moved south to the Willamette Valley. The hope that the British colonists might dilute the American presence in the valley failed in the face of the overwhelming number of American settlers. These developments rekindled the issue of "joint occupation" and the boundary dispute. While some British interests, especially the Hudson's Bay Company, fought for a boundary along the Columbia River, the Oregon Treaty of 1846 set the boundary at the 49th parallel. As part of the treaty, the British retained all areas north of the line while the United States acquired the south. The Columbia River became much of the border between the U.S. territories of Oregon and Washington. Oregon became a U.S. state in 1859, while Washington later entered the Union in 1889. By the turn of the 20th century, the difficulty of navigating the Columbia was seen as an impediment to the economic development of the Inland Empire region east of the Cascades. The dredging and dam building that followed would permanently alter the river, disrupting its natural flow but also providing electricity, irrigation, navigability and other benefits to the region. Navigation American captain Robert Gray and British captain George Vancouver, who explored the river in 1792, proved that it was possible to cross the Columbia Bar. Many of the challenges associated with that feat remain today; even with modern engineering alterations to the mouth of the river, the strong currents and shifting sandbar make it dangerous to pass between the river and the Pacific Ocean. The use of steamboats along the river, beginning with the British Beaver in 1836 and followed by American vessels in 1850, contributed to the rapid settlement and economic development of the region. Steamboats operated in several distinct stretches of the river: on its lower reaches, from the Pacific Ocean to Cascades Rapids; from the Cascades to The Dalles-Celilo Falls; from Celilo to Priests Rapids; on the Wenatchee Reach of eastern Washington; on British Columbia's Arrow Lakes; and on tributaries like the Willamette, the Snake, and Kootenay Lake. The boats, initially powered by burning wood, carried passengers and freight throughout the region for many years. 
Early railroads served to connect steamboat lines interrupted by waterfalls on the river's lower reaches. In the 1880s, railroads maintained by companies such as the Oregon Railroad and Navigation Company began to supplement steamboat operations as the major transportation links along the river. Opening the passage to Lewiston As early as 1881, industrialists proposed altering the natural channel of the Columbia to improve navigation. Changes to the river over the years have included the construction of jetties at the river's mouth, dredging, and the construction of canals and navigation locks. Today, ocean freighters can travel upriver as far as Portland and Vancouver, and barges can reach as far inland as Lewiston, Idaho. The shifting Columbia Bar makes passage between the river and the Pacific Ocean difficult and dangerous, and numerous rapids along the river hinder navigation. Pacific Graveyard, a 1964 book by James A. Gibbs, describes the many shipwrecks near the mouth of the Columbia. Jetties, first constructed in 1886, extend the river's channel into the ocean. Strong currents and the shifting sandbar remain a threat to ships entering the river and necessitate continuous maintenance of the jetties. In 1891, the Columbia was dredged to enhance shipping. The channel between the ocean and Portland and Vancouver was deepened from to . The Columbian called for the channel to be deepened to as early as 1905, but that depth was not attained until 1976. Cascade Locks and Canal were first constructed in 1896 around the Cascades Rapids, enabling boats to travel safely through the Columbia River Gorge. The Celilo Canal, bypassing Celilo Falls, opened to river traffic in 1915. In the mid-20th century, the construction of dams along the length of the river submerged the rapids beneath a series of reservoirs. An extensive system of locks allowed ships and barges to pass easily between reservoirs. A navigation channel reaching Lewiston, Idaho, along the Columbia and Snake rivers, was completed in 1975. Among the main commodities are wheat and other grains, mainly for export. As of 2016, the Columbia ranked third, behind the Mississippi and Paraná rivers, among the world's largest export corridors for grain. The 1980 eruption of Mount St. Helens caused mudslides in the area, which reduced the Columbia's depth by for a stretch, disrupting Portland's economy. Deeper shipping channel Efforts to maintain and improve the navigation channel have continued to the present day. In 1990 a new round of studies examined the possibility of further dredging on the lower Columbia. The plans were controversial from the start because of economic and environmental concerns. In 1999, Congress authorized deepening the channel between Portland and Astoria from , making it possible for large container and grain ships to reach Portland and Vancouver. The project has met opposition because of concerns about stirring up toxic sediment on the riverbed. Portland-based Northwest Environmental Advocates brought a lawsuit against the Army Corps of Engineers, but it was rejected by the Ninth U.S. Circuit Court of Appeals in August 2006. The project includes measures to mitigate environmental damage; for instance, the US Army Corps of Engineers must restore 12 times the area of wetland damaged by the project. In early 2006, the Corps spilled of hydraulic oil into the Columbia, drawing further criticism from environmental organizations. Work on the project began in 2005 and concluded in 2010. 
The project's cost is estimated at $150 million. The federal government is paying 65 percent, Oregon and Washington are paying $27 million each, and six local ports are also contributing to the cost. Dams In 1902, the United States Bureau of Reclamation was established to aid in the economic development of arid western states. One of its major undertakings was building Grand Coulee Dam to provide irrigation for the of the Columbia Basin Project in central Washington. With the onset of World War II, the focus of dam construction shifted to production of hydroelectricity. Irrigation efforts resumed after the war. River development occurred within the structure of the 1909 International Boundary Waters Treaty between the United States and Canada. The United States Congress passed the Rivers and Harbors Act of 1925, which directed the U.S. Army Corps of Engineers and the Federal Power Commission to explore the development of the nation's rivers. This prompted agencies to conduct the first formal financial analysis of hydroelectric development; the reports produced by various agencies were presented in House Document 308. Those reports, and subsequent related reports, are referred to as 308 Reports. In the late 1920s, political forces in the Northwestern United States generally favored the private development of hydroelectric dams along the Columbia. But the overwhelming victories of gubernatorial candidate George W. Joseph in the 1930 Republican primary, and later his law partner Julius Meier, were understood to demonstrate strong public support for public ownership of dams. In 1933, President Franklin D. Roosevelt signed a bill that enabled the construction of the Bonneville and Grand Coulee dams as public works projects. The legislation was attributed to the efforts of Oregon Senator Charles McNary, Washington Senator Clarence Dill, and Oregon Congressman Charles Martin, among others. In 1948, floods swept through the Columbia watershed, destroying Vanport, then the second largest city in Oregon, and impacting cities as far north as Trail, BC. The flooding prompted the U.S. Congress to pass the Flood Control Act of 1950, authorizing the federal development of additional dams and other flood control mechanisms. By that time local communities had become wary of federal hydroelectric projects, and sought local control of new developments; a public utility district in Grant County, Washington, ultimately began construction of the dam at Priest Rapids. In the 1960s, the United States and Canada signed the Columbia River Treaty, which focused on flood control and the maximization of downstream power generation. Canada agreed to build dams and provide reservoir storage, and the United States agreed to deliver to Canada one-half of the increase in United States downstream power benefits as estimated five years in advance. Canada's obligation was met by building three dams (two on the Columbia, and one on the Duncan River), the last of which was completed in 1973. Today the main stem of the Columbia River has fourteen dams, of which three are in Canada and eleven in the United States. Four mainstem dams and four lower Snake River dams contain navigation locks to allow ship and barge passage from the ocean as far as Lewiston, Idaho. The river system as a whole has more than 400 dams for hydroelectricity and irrigation. 
The dams address a variety of demands, including flood control, navigation, stream flow regulation, storage, and delivery of stored waters, reclamation of public lands and Indian reservations, and the generation of hydroelectric power. The larger U.S. dams are owned and operated by the federal government (some by the Army Corps of Engineers and some by the Bureau of Reclamation), while the smaller dams are operated by public utility districts and private power companies. The federally operated system is known as the Federal Columbia River Power System, which includes 31 dams on the Columbia and its tributaries. The system has altered the seasonal flow of the river to meet higher electricity demands during the winter. At the beginning of the 20th century, roughly 75 percent of the Columbia's flow occurred in the summer, between April and September. By 1980, the summer proportion had been lowered to about 50 percent, essentially eliminating the seasonal pattern. The installation of dams dramatically altered the landscape and ecosystem of the river. At one time, the Columbia was one of the top salmon-producing river systems in the world. Fishing at previously active sites, such as Celilo Falls in the eastern Columbia River Gorge, has declined sharply along the Columbia in the last century, and salmon populations have been dramatically reduced. Fish ladders have been installed at some dam sites to help the fish journey to spawning waters. Chief Joseph Dam has no fish ladders and completely blocks fish migration to the upper half of the Columbia River system. Irrigation The Bureau of Reclamation's Columbia Basin Project focused on the generally dry region of central Washington known as the Columbia Basin, which features rich loess soil. Several groups developed competing proposals, and in 1933, President Franklin D. Roosevelt authorized the Columbia Basin Project. The Grand Coulee Dam was the project's central component; upon completion, it pumped water up from the Columbia to fill the formerly dry Grand Coulee, forming Banks Lake. By 1935, the intended height of the dam was increased from a range between to , a height that would extend the lake impounded by the dam to the Canada–United States border; the project had grown from a local New Deal relief measure to a major national project. The project's initial purpose was irrigation, but the onset of World War II created a high electricity demand, mainly for aluminum production and for the development of nuclear weapons at the Hanford Site. Irrigation began in 1951. The project provides water to more than of fertile but arid land in central Washington, transforming the region into a major agricultural center. Important crops include orchard fruit, potatoes, alfalfa, mint, beans, beets, and wine grapes. Since 1750, the Columbia has experienced six multi-year droughts. The longest, lasting 12 years in the mid-19th century, reduced the river's flow to 20 percent below average. Scientists have expressed concern that a similar drought would have grave consequences in a region so dependent on the Columbia. In 1992–1993, a lesser drought affected farmers, hydroelectric power producers, shippers, and wildlife managers. Many farmers in central Washington build dams on their property for irrigation and to control frost on their crops. The Washington Department of Ecology, using new techniques involving aerial photographs, estimated there may be as many as a hundred such dams in the area, most of which are illegal. 
Six such dams have failed in recent years, causing hundreds of thousands of dollars of damage to crops and public roads. Fourteen farms in the area have gone through the permitting process to build such dams legally. Hydroelectricity The Columbia's heavy flow and large elevation drop over a short distance, , give it tremendous capacity for hydroelectricity generation. In comparison, the Mississippi drops less than . The Columbia alone possesses one-third of the United States's hydroelectric potential. In 2012, the river and its tributaries accounted for 29 GW of hydroelectric generating capacity, contributing 44 percent of the total hydroelectric generation in the nation. The largest of the 150 hydroelectric projects, the Grand Coulee Dam and Chief Joseph Dam, are also the largest in the United States. As of 2017, Grand Coulee is the fifth largest hydroelectric plant in the world. Inexpensive hydropower supported the location of a large aluminum industry in the region, because the reduction of aluminum from bauxite requires large amounts of electricity. Until 2000, the Northwestern United States produced up to 17 percent of the world's aluminum and 40 percent of the aluminum produced in the United States. The commoditization of power in the early 21st century, coupled with a drought that reduced the generation capacity of the river, damaged the industry, and by 2001 Columbia River aluminum producers had idled 80 percent of their production capacity. By 2003, the entire United States produced only 15 percent of the world's aluminum and many smelters along the Columbia had gone dormant or out of business. Power remains relatively inexpensive along the Columbia, and since the mid-2000s several global enterprises have moved server farm operations into the area to avail themselves of cheap power. Downriver of Grand Coulee, each dam's reservoir is closely regulated by the Bonneville Power Administration (BPA), the U.S. Army Corps of Engineers, and various Washington public utility districts to ensure flow, flood control, and power generation objectives are met. Increasingly, hydropower operations are required to meet standards under the U.S. Endangered Species Act and other agreements to manage operations to minimize impacts on salmon and other fish, and some conservation and fishing groups support removing four dams on the lower Snake River, the largest tributary of the Columbia. In 1941, the BPA hired Oklahoma folksinger Woody Guthrie to write songs for a documentary film promoting the benefits of hydropower. In the month he spent traveling the region, Guthrie wrote 26 songs, which have become an important part of the cultural history of the region. Ecology and environment Fish migration The Columbia supports several species of anadromous fish that migrate between the Pacific Ocean and freshwater tributaries of the river. Sockeye salmon, Coho and Chinook ("king") salmon, and steelhead, all of the genus Oncorhynchus, are ocean fish that migrate up the rivers at the end of their life cycles to spawn. White sturgeon, which take 15 to 25 years to mature, typically migrate between the ocean and the upstream habitat several times during their lives. Salmon populations declined dramatically after the establishment of canneries in 1867. In 1879 it was reported that 545,450 salmon, with an average weight of , were caught (in a recent season) and mainly canned for export to England. A can weighing could be sold for 8d or 9d. By 1908, there was widespread concern about the decline of salmon and sturgeon. 
In that year, the people of Oregon passed two laws under their newly instituted program of citizens' initiatives limiting fishing on the Columbia and other rivers. Then in 1948, another initiative banned the use of seine nets (devices already used by Native Americans, and refined by later settlers) altogether. Dams interrupt the migration of anadromous fish. Salmon and steelhead return to the streams in which they were born to spawn; where dams prevent their return, entire populations of salmon die. Some of the Columbia and Snake River dams employ fish ladders, which are effective to varying degrees at allowing these fish to travel upstream. Another problem exists for the juvenile salmon headed downstream to the ocean. Previously, this journey would have taken two to three weeks. With river currents slowed by the dams, and the Columbia converted from a wild river to a series of slackwater pools, the journey can take several months, which increases the mortality rate. In some cases, the Army Corps of Engineers transports juvenile fish downstream by truck or river barge. The Chief Joseph Dam and several dams on the Columbia's tributaries entirely block migration, and there are no migrating fish on the river above these dams. Sturgeons have different migration habits and can survive without ever visiting the ocean. In many upstream areas cut off from the ocean by dams, sturgeon simply live upstream of the dam. Not all fish have suffered from the modifications to the river; the northern pikeminnow (formerly known as the squawfish) thrives in the warmer, slower water created by the dams. Research in the mid-1980s found that juvenile salmon were suffering substantially from the predatory pikeminnow, and in 1990, in the interest of protecting salmon, a "bounty" program was established to reward anglers for catching pikeminnow. In 1994, the salmon catch was smaller than usual in the rivers of Oregon, Washington, and British Columbia, causing concern among commercial fishermen, government agencies, and tribal leaders. US government intervention, to which the states of Alaska, Idaho, and Oregon objected, included an 11-day closure of an Alaska fishery. In April 1994 the Pacific Fisheries Management Council unanimously approved the strictest regulations in 18 years, banning all commercial salmon fishing for that year from Cape Falcon north to the Canada–US border. In the winter of 1994, the return of coho salmon far exceeded expectations, which was attributed in part to the fishing ban. Also in 1994, United States Secretary of the Interior Bruce Babbitt proposed the removal of several Pacific Northwest dams because of their impact on salmon spawning. The Northwest Power Planning Council approved a plan that provided more water for fish and less for electricity, irrigation, and transportation. Environmental advocates have called for the removal of certain dams in the Columbia system in the years since. Of the 227 major dams in the Columbia River drainage basin, the four Washington dams on the lower Snake River are often identified for removal, for example in an ongoing lawsuit concerning a Bush administration plan for salmon recovery. These dams and reservoirs limit the recovery of upriver salmon runs to Idaho's Salmon and Clearwater rivers. Historically, the Snake produced over 1.5 million spring and summer Chinook salmon, a number that has dwindled to several thousand in recent years. 
Idaho Power Company's Hells Canyon dams have no fish ladders (and do not pass juvenile salmon downstream), and thus allow no steelhead or salmon to migrate above Hells Canyon. In 2007, the Marmot Dam on the Sandy River was demolished, the first dam removal in the system. Other Columbia Basin dams that have been removed include Condit Dam on Washington's White Salmon River, and the Milltown Dam on the Clark Fork in Montana. Pollution In southeastern Washington, a stretch of the river passes through the Hanford Site, established in 1943 as part of the Manhattan Project. The site served as a plutonium production complex, with nine nuclear reactors and related facilities along the banks of the river. From 1944 to 1971, pump systems drew cooling water from the river and, after treating this water for use by the reactors, returned it to the river. Before being released back into the river, the used water was held in large tanks known as retention basins for up to six hours. Longer-lived isotopes were not affected by this retention, and several terabecquerels entered the river every day. By 1957, the eight plutonium production reactors at Hanford dumped a daily average of 50,000 curies of radioactive material into the Columbia. These releases were kept secret by the federal government until the release of declassified documents in the late 1980s. Radiation was measured downstream as far west as the Washington and Oregon coasts. The nuclear reactors were decommissioned at the end of the Cold War, and the Hanford Site is the focus of one of the world's largest environmental cleanup efforts, managed by the Department of Energy under the oversight of the Washington Department of Ecology and the Environmental Protection Agency. Nearby aquifers contain an estimated 270 billion US gallons (1 billion m3) of groundwater contaminated by high-level nuclear waste that has leaked out of Hanford's underground storage tanks. 1 million US gallons (3,785 m3) of highly radioactive waste is traveling through groundwater toward the Columbia River. This waste is expected to reach the river in 12 to 50 years if cleanup does not proceed on schedule. In addition to concerns about nuclear waste, numerous other pollutants are found in the river. These include chemical pesticides, bacteria, arsenic, dioxins, and polychlorinated biphenyls (PCBs). Studies have also found significant levels of toxins in fish and the waters they inhabit within the basin. Accumulation of toxins in fish threatens the survival of fish species, and human consumption of these fish can lead to health problems. Water quality is also an important factor in the survival of other wildlife and plants that grow in the Columbia River drainage basin. The states, Indian tribes, and federal government are all engaged in efforts to restore and improve the water, land, and air quality of the Columbia River drainage basin and have committed to work together to accomplish critical ecosystem restoration efforts. Several cleanup efforts are underway, including Superfund projects at Portland Harbor, Hanford, and Lake Roosevelt. Timber industry activity further contaminates river water, for example in the increased sediment runoff that results from clearcuts. The Northwest Forest Plan, a federal forest management plan adopted in 1994, mandated that timber companies consider the environmental impacts of their practices on rivers like the Columbia. 
On July 1, 2003, Christopher Swain became the first person to swim the Columbia River's entire length, to raise public awareness about the river's environmental health. Nutrient cycle Both natural and anthropogenic processes are involved in the cycling of nutrients in the Columbia River basin. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Niño–Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange. Major anthropogenic impacts on nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams. Nutrient dynamics vary in the river basin from the headwaters to the main river and dams, and finally to the Columbia River estuary and ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing residence time of nutrients, and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of nutrients into the Pacific, except for nitrogen, which is delivered into the estuary by ocean upwelling sources. Watershed Most of the Columbia's drainage basin (which, at , is about the size of France) lies roughly between the Rocky Mountains on the east and the Cascade Mountains on the west. In the United States and Canada the term watershed is often used to mean drainage basin. The term Columbia Basin is used to refer not only to the entire drainage basin but also to subsets of the river's watershed, such as the relatively flat and unforested area in eastern Washington bounded by the Cascades, the Rocky Mountains, and the Blue Mountains. Within the watershed are diverse landforms including mountains, arid plateaus, river valleys, rolling uplands, and deep gorges. Grand Teton National Park lies in the watershed, as well as parts of Yellowstone National Park, Glacier National Park, Mount Rainier National Park, and North Cascades National Park. Canadian national parks in the watershed include Kootenay National Park, Yoho National Park, Glacier National Park, and Mount Revelstoke National Park. Hells Canyon, the deepest gorge in North America, and the Columbia Gorge are in the watershed. Vegetation varies widely, ranging from western hemlock and western redcedar in the moist regions to sagebrush in the arid regions. The watershed provides habitat for 609 known fish and wildlife species, including the bull trout, bald eagle, gray wolf, grizzly bear, and Canada lynx. The World Wide Fund for Nature (WWF) divides the waters of the Columbia and its tributaries into three freshwater ecoregions: Columbia Glaciated, Columbia Unglaciated, and Upper Snake. The Columbia Glaciated ecoregion, about a third of the total watershed, lies in the north and was covered with ice sheets during the Pleistocene. The ecoregion includes the mainstem Columbia north of the Snake River and tributaries such as the Yakima, Okanagan, Pend Oreille, Clark Fork, and Kootenay rivers. The effects of glaciation include a number of large lakes and a relatively low diversity of freshwater fish. 
The Upper Snake ecoregion is defined as the Snake River watershed above Shoshone Falls, which totally blocks fish migration. This region has 14 species of fish, many of which are endemic. The Columbia Unglaciated ecoregion makes up the rest of the watershed. It includes the mainstem Columbia below the Snake River and tributaries such as the Salmon, John Day, Deschutes, and lower Snake Rivers. Of the three ecoregions it is the richest in terms of freshwater species diversity. There are 35 species of fish, of which four are endemic. There are also high levels of mollusk endemism. In 2016, over eight million people lived within the Columbia's drainage basin. Of this total about 3.5 million people lived in Oregon, 2.1 million in Washington, 1.7 million in Idaho, half a million in British Columbia, and 0.4 million in Montana. Population in the watershed has been rising for many decades and is projected to rise to about 10 million by 2030. The highest population densities are found west of the Cascade Mountains along the I-5 corridor, especially in the Portland-Vancouver urban area. High densities are also found around Spokane, Washington, and Boise, Idaho. Although much of the watershed is rural and sparsely populated, areas with recreational and scenic values are growing rapidly. The central Oregon county of Deschutes is the fastest-growing in the state. Populations have also been growing just east of the Cascades in central Washington around the city of Yakima and the Tri-Cities area. Projections for the coming decades assume growth throughout the watershed. The Canadian part of the Okanagan subbasin is also growing rapidly. Climate varies greatly within the watershed. Elevation ranges from sea level at the river mouth to more than in the mountains, and temperatures vary with elevation. The highest peak is Mount Rainier, at . High elevations have cold winters and short cool summers; interior regions are subject to great temperature variability and severe droughts. Over some of the watershed, especially west of the Cascade Mountains, precipitation maximums occur in winter, when Pacific storms come ashore. Atmospheric conditions block the flow of moisture in summer, which is generally dry except for occasional thunderstorms in the interior. In some of the eastern parts of the watershed, especially shrub-steppe regions with Continental climate patterns, precipitation maximums occur in early summer. Annual precipitation varies from more than a year in the Cascades to less than in the interior. Much of the watershed gets less than a year. Several major North American drainage basins and many minor ones border the Columbia River's drainage basin. To the east, in northern Wyoming and Montana, the Continental Divide separates the Columbia watershed from the Mississippi-Missouri watershed, which empties into the Gulf of Mexico. To the northeast, mostly along the southern border between British Columbia and Alberta, the Continental Divide separates the Columbia watershed from the Nelson-Lake Winnipeg-Saskatchewan watershed, which empties into Hudson Bay. The Mississippi and Nelson watersheds are separated by the Laurentian Divide, which meets the Continental Divide at Triple Divide Peak near the headwaters of the Columbia's Flathead River tributary. This point marks the meeting of three of North America's main drainage patterns, to the Pacific Ocean, to Hudson Bay, and to the Atlantic Ocean via the Gulf of Mexico. 
Further north along the Continental Divide, a short portion of the combined Continental and Laurentian divides separates the Columbia watershed from the Mackenzie-Slave-Athabasca watershed, which empties into the Arctic Ocean. The Nelson and Mackenzie watersheds are separated by a divide between streams flowing to the Arctic Ocean and those of the Hudson Bay watershed. This divide meets the Continental Divide at Snow Dome (also known as Dome), near the northernmost bend of the Columbia River. To the southeast, in western Wyoming, another divide separates the Columbia watershed from the Colorado–Green watershed, which empties into the Gulf of California. The Columbia, Colorado, and Mississippi watersheds meet at Three Waters Mountain in Wyoming's Wind River Range. To the south, in Oregon, Nevada, Utah, Idaho, and Wyoming, the Columbia watershed is divided from the Great Basin, whose several watersheds are endorheic, not emptying into any ocean but rather drying up or sinking into sumps. Great Basin watersheds that share a border with the Columbia watershed include Harney Basin, Humboldt River, and Great Salt Lake. The associated triple divide points are Commissary Ridge North, Wyoming, and Sproats Meadow Northwest, Oregon. To the north, mostly in British Columbia, the Columbia watershed borders the Fraser River watershed. To the west and southwest the Columbia watershed borders a number of smaller watersheds that drain to the Pacific Ocean, such as the Klamath River in Oregon and California and the Puget Sound Basin in Washington. Major tributaries The Columbia receives more than 60 significant tributaries. The four largest that empty directly into the Columbia (measured either by discharge or by size of watershed) are the Snake River (mostly in Idaho), the Willamette River (in northwest Oregon), the Kootenay River (mostly in British Columbia), and the Pend Oreille River (mostly in northern Washington and Idaho, also known as the lower part of the Clark Fork). Each of these four averages more than and drains an area of more than . The Snake is by far the largest tributary. Its watershed of is larger than the state of Idaho. Its discharge is roughly a third of the Columbia's at the rivers' confluence, but compared to the Columbia upstream of the confluence the Snake is longer (113%) and has a larger drainage basin (104%). The Pend Oreille River system (including its main tributaries, the Clark Fork and Flathead rivers) is also similar in size to the Columbia at their confluence. Compared to the Columbia River above the two rivers' confluence, the Pend Oreille-Clark-Flathead is nearly as long (about 86%), its basin about three-fourths as large (76%), and its discharge over a third (37%). 
Other significant tributaries include the Cowlitz, Spokane, Lewis, Deschutes, Yakima, Wenatchee, Okanogan, Kettle, Sandy, and John Day rivers.

See also
Columbia Park (Kennewick, Washington), a recreational area
Columbia River Estuary
Columbia River Maritime Museum, Astoria, Oregon
Empire Builder, an Amtrak rail line that follows the river from Portland to Pasco, Washington
Estella Mine, an abandoned mine with a view of the Columbia River Valley
Historic Columbia River Highway, a scenic highway on the Oregon side
List of crossings of the Columbia River
List of dams in the Columbia River watershed
List of longest rivers of Canada
List of longest rivers of the United States (by main stem)
List of longest streams of Oregon
Lists of ecoregions in North America and Oregon
Lists of rivers of British Columbia, Oregon, and Washington
Okanagan Trail, a historic trail that followed the Columbia and Okanagan rivers
Robert Gray's Columbia River expedition

Further reading
White, Richard. The Organic Machine: The Remaking of the Columbia River (Hill and Wang, 1996)

External links
BC Hydro
Bibliography on Water Resources and International Law, Peace Palace Library
Columbia River, US Environmental Protection Agency
Columbia River Gorge National Scenic Area, US Forest Service
Columbia River Inter-Tribal Fish Commission
University of Washington Libraries Digital Collections – Tollman and Canaris Photographs, documenting the salmon fishing industry on the southern Washington coast and in the lower Columbia River around 1897, with insights about commercial salmon fishing and the techniques used at the beginning of the 20th century
Virtual World: Columbia River, National Geographic, via Internet Archive
5409
https://en.wikipedia.org/wiki/Commelinales
Commelinales
Commelinales is an order of flowering plants. It comprises five families: Commelinaceae, Haemodoraceae, Hanguanaceae, Philydraceae, and Pontederiaceae. All the families combined contain over 885 species in about 70 genera; the majority of species are in the Commelinaceae. Plants in the order share a number of synapomorphies that tie them together, such as the absence of mycorrhizal associations and the presence of tapetal raphides. Estimates differ as to when the Commelinales evolved, but most suggest an origin and diversification sometime during the mid- to late Cretaceous. Depending on the methods used, studies suggest a range of origin between 123 and 73 million years ago, with diversification occurring within the group 110 to 66 million years ago. The order's closest relatives are in the Zingiberales, which includes ginger, bananas, cardamom, and others. Taxonomy According to the most recent classification scheme, the APG IV of 2016, the order includes five families: Commelinaceae, Haemodoraceae, Hanguanaceae, Philydraceae, and Pontederiaceae. This is unchanged from the APG III of 2009 and the APG II of 2003, but different from the older APG system of 1998, which did not include Hanguanaceae. Previous classification systems The older Cronquist system of 1981, which was based purely on morphological data, placed the order in subclass Commelinidae of class Liliopsida and included the families Commelinaceae, Mayacaceae, Rapateaceae and Xyridaceae. These families are now known to be only distantly related. In the classification system of Dahlgren the Commelinales were one of four orders in the superorder Commeliniflorae (also called Commelinanae), and contained five families, of which only Commelinaceae has been retained by the Angiosperm Phylogeny Group.
5411
https://en.wikipedia.org/wiki/Cucurbitales
Cucurbitales
The Cucurbitales are an order of flowering plants, included in the rosid group of dicotyledons. The order is found mostly in tropical areas, with a limited presence in subtropical and temperate regions. The order includes shrubs and trees, together with many herbs and climbers. One major characteristic of the Cucurbitales is the presence of unisexual flowers, mostly pentacyclic, with thick pointed petals (whenever present). Pollination is usually performed by insects, but wind pollination is also present (in Coriariaceae and Datiscaceae). The order consists of roughly 2600 species in eight families. The largest families are Begoniaceae (begonia family) with around 1500 species and Cucurbitaceae (gourd family) with around 900 species. These two families include the order's only economically important plants. Specifically, the Cucurbitaceae (gourd family) include some food species, such as squash, pumpkin (both from Cucurbita), watermelon (Citrullus vulgaris), and cucumber and melons (Cucumis). The Begoniaceae are known for their horticultural species, of which there are over 130 with many more varieties. Overview The Cucurbitales are an order of plants with a cosmopolitan distribution, particularly diverse in the tropics. Most are herbs, climbing herbs, woody lianas, or shrubs, but some genera include canopy-forming evergreen lauroid trees. Members of the Cucurbitales form an important component of lowland to montane tropical forest, where they are especially well represented in terms of the number of species. Although the total number of species in the order is not known with certainty, conservative estimates indicate about 2600 species worldwide, distributed in 109 genera. Compared to that of other flowering plant orders, the taxonomy of the group is poorly understood due to its great diversity, the difficulty of identification, and limited study. The order Cucurbitales in the eurosid I clade comprises almost 2600 species in 109 or 110 genera in eight families, tropical and temperate, of very different sizes, morphology, and ecology. The order is a case of divergent evolution. In contrast, there is also convergent evolution with unrelated groups, where ecological or physical drivers have pushed toward similar solutions, including analogous structures. Some species are trees that have similar foliage to the true laurels due to convergent evolution. The patterns of speciation in the Cucurbitales have resulted in a high number of species. They have a pantropical distribution with centers of diversity in Africa, South America, and Southeast Asia. They most likely originated in West Gondwana 67–107 million years ago, so the oldest split could relate to the break-up of Gondwana in the middle Eocene to late Oligocene, 45–24 million years ago. The group reached its current distribution by multiple intercontinental dispersal events. One factor was aridification; some groups responded to favorable climatic periods by expanding across the available habitat, occurring as opportunistic species across a wide distribution, while other groups diverged over long periods within isolated areas. The Cucurbitales comprise the families Apodanthaceae, Anisophylleaceae, Begoniaceae, Coriariaceae, Corynocarpaceae, Cucurbitaceae, Tetramelaceae, and Datiscaceae. Some of the synapomorphies of the order are: leaves in a spiral with palmate secondary veins, a valvate calyx or perianth, and an elevated stomatal calyx/perianth bearing separate styles. The two whorls are similar in texture. 
Tetrameles nudiflora is a tree of immense proportions in both height and width; Tetramelaceae, Anisophylleaceae, and Corynocarpaceae include tall canopy trees in temperate and tropical forests. The genus Dendrosicyos, whose only species is the cucumber tree, is adapted to the arid semidesert island of Socotra. Deciduous perennial Cucurbitales lose all of their leaves for part of the year depending on variations in rainfall. The leaf loss coincides with the dry season in tropical, subtropical and arid regions. In temperate or polar climates, the dry season is due to the inability of the plant to absorb water available in the form of ice. Apodanthaceae are obligate endoparasites that only emerge once a year in the form of small flowers that develop into small berries; however, taxonomists have not agreed on the exact placement of this family within the Cucurbitales. Over half of the known members of this order belong to the greatly diverse begonia family Begoniaceae, with around 1500 species in two genera. Before modern DNA-molecular classifications, some Cucurbitales species were assigned to orders as diverse as Ranunculales, Malpighiales, Violales, and Rafflesiales. Early molecular studies revealed several surprises, such as the nonmonophyly of the traditional Datiscaceae, including Tetrameles and Octomeles, but the exact relationships among the families remain unclear. The lack of knowledge about the order in general is due to many species being found in countries with limited economic means or unstable political environments, factors unsuitable for plant collection and detailed study. Thus the vast majority of species remain poorly determined, and a future increase in the number of species is expected. Classification Under the Cronquist system, the families Begoniaceae, Cucurbitaceae, and Datiscaceae were placed in the order Violales, within the subclass Dilleniidae, with the Tetramelaceae subsumed into the Datiscaceae. Corynocarpaceae was placed in order Celastrales, and Anisophylleaceae in order Rosales, both under subclass Rosidae. Coriariaceae was placed in order Ranunculales, subclass Magnoliidae. Apodanthaceae was not recognised as a family, its genera being assigned to another parasitic plant family, the Rafflesiaceae. The present classification is due to APG III (2009). Systematics Modern molecular phylogenetic analyses have been used to infer the relationships among the eight families.
5412
https://en.wikipedia.org/wiki/Contra%20dance
Contra dance
Contra dance (also contradance, contra-dance and other variant spellings) is a form of folk dancing made up of long lines of couples. It has mixed origins from English country dance, Scottish country dance, and French dance styles in the 17th century. Sometimes described as New England folk dance or Appalachian folk dance, contra dances can be found around the world, but are most common in the United States (periodically held in nearly every state), Canada, and other Anglophone countries. A contra dance event is a social dance that one can attend without a partner. The dancers form couples, and the couples form sets of two couples in long lines starting from the stage and going down the length of the dance hall. Throughout the course of a dance, couples progress up and down these lines, dancing with each other couple in the line. The dance is led by a caller who teaches the sequence of moves, called "figures," in the dance before the music starts. In a single dance, a caller may include anywhere from six to twelve figures, which are repeated as couples progress up and down the lines. Each time through the dance takes 64 beats, after which the pattern is repeated. The essence of the dance is in following the pattern with your set and your line; since there is no required footwork, many people find contra dance easier to learn than other forms of social dancing. Almost all contra dances are danced to live music. The music played includes, but is not limited to, Irish, Scottish, old-time, bluegrass and French-Canadian folk tunes. The fiddle is considered the core instrument, though other stringed instruments can be used, such as the guitar, banjo, bass and mandolin, as well as the piano, accordion, flute, clarinet and more. Techno contra dances are done to techno music, typically accompanied by DJ lighting. Music in a dance can consist of a single tune or a medley of tunes, and key changes during the course of a dance are common. Many callers and bands perform for local contra dances, and some are hired to play for dances around the U.S. and Canada. Many dancers travel regionally (or even nationally) to contra dance weekends and week-long contra dance camps, where they can expect to find other dedicated and skilled dancers, callers, and bands. History Contra dance has European origins, and over 100 years of cultural influences from many different sources. At the end of the 17th century, English country dances were taken up by French dance masters. The French called these dances contredanses (an approximation by sound of the English "country dance"), as indicated in a 1706 dance book called Recueil de Contredances. As time progressed, these dances returned to England and were spread and reinterpreted in the United States, and eventually the French form of the name came to be associated with the American folk dances, where they were alternatively called "country dances" or, in some parts of New England such as New Hampshire, "contradances". Contra dances were fashionable in the United States and were considered one of the most popular social dances across class lines in the late 18th century, though these events were usually referred to as "country dances" until the 1780s, when the term contra dance became more common. In the mid-19th century, group dances started to decline in popularity in favor of quadrilles, lancers, and couple dances such as the waltz and polka. By the late 19th century, contras were mostly confined to rural settings. 
This began to change with the square dance revival of the 1920s, pioneered by Henry Ford, founder of the Ford Motor Company, in part as a reaction against modern jazz influences in the United States. In the 1920s, Ford asked his friend Benjamin Lovett, a dance coordinator in Massachusetts, to come to Michigan to begin a dance program. Initially, Lovett could not accept, as he was under contract at a local inn; consequently, Ford bought the property rights to the inn. Lovett and Ford initiated a dance program in Dearborn, Michigan, that included several folk dances, including contras. Ford also published a book titled Good Morning: After a Sleep of Twenty-Five Years, Old-Fashioned Dancing Is Being Revived in 1926 detailing steps for some contra dances. In the 1930s and 1940s, the popularity of Jazz, Swing, and "Big Band" music caused contra dance to decline in several parts of the US; the tradition carried on primarily in towns within the northeastern portions of North America, such as Ohio, the Maritime provinces of Canada, and particularly in New England. Ralph Page almost single-handedly maintained the New England tradition until it was revitalized in the 1950s and 1960s, particularly by Ted Sannella and Dudley Laufman. The New England contra dance tradition was also maintained in Vermont by the Ed Larkin Old Time Contra Dancers, formed by Edwin Loyal Larkin in 1934. The group Larkin founded is still performing, teaching the dances, and holding monthly open house dances in Tunbridge, Vermont. By then, early dance camps, retreats, and weekends had emerged, such as Pinewoods Camp, in Plymouth, Massachusetts, which became primarily a music and dance camp in 1933, and NEFFA, the New England Folk Festival, also in Massachusetts, which began in 1944. Pittsburgh Contra Dance celebrated its 100th anniversary in 2015. These and others continue to be popular and some offer other dances and activities besides contra dancing. In the 1970s, Sannella and other callers introduced dance moves from English Country Dance, such as heys and gypsies, to the contra dances. New dances, such as Shadrack's Delight by Tony Parkes, featured symmetrical dancing by all couples. (Previously, the actives and inactives – see Progression – had significantly different roles). Double progression dances, popularized by Herbie Gaudreau, added to the aerobic nature of the dances, and one caller, Gene Hubert, wrote a quadruple progression dance, Contra Madness. Becket formation was introduced, with partners starting the dance next to each other in the line instead of opposite each other. The Brattleboro Dawn Dance started in 1976, and continues to run semiannually. In the early 1980s, Tod Whittemore started the first Saturday dance in the Peterborough Town House, which remains one of the more popular regional dances. The Peterborough dance influenced Bob McQuillen, who became a notable musician in New England. As musicians and callers moved to other locations, they founded contra dances in Michigan, Washington, Oregon, California, Texas, and elsewhere. Events Contra dances take place in more than 200 cities and towns across the U.S., as well as in other countries. Contra dance events are open to all, regardless of experience, unless explicitly labeled otherwise. It is common to see dancers with a wide range of ages, from children to the elderly. Most dancers are white and middle or upper-middle class. Contra dances are family-friendly, and alcohol consumption is not part of the culture. 
Many events offer beginner-level instructions prior to the dance. A typical evening of contra dance is three hours long, including an intermission. The event consists of a number of individual contra dances, each lasting about 15 minutes, and typically a band intermission with some waltzes, schottisches, polkas, or Swedish hambos. In some places, square dances are thrown into the mix, sometimes at the discretion of the caller. Music for the evening is typically performed by a live band, playing jigs and reels from Ireland, Scotland, Canada, or the USA. The tunes may range from traditional tunes originating a century ago to modern compositions including electric guitar, synth keyboard, and driving percussion – so long as the music fits the timing for contra dance patterns. Sometimes, a rock tune will be woven in. Generally, a leader, known as a caller, will teach each individual dance just before the music for that dance begins. During this introductory walk-through, participants learn the dance by walking through the steps and formations, following the caller's instructions. The caller gives the instructions orally, and sometimes augments them with demonstrations of steps by experienced dancers in the group. The walk-through usually proceeds in the order of the moves as they will be done with the music; in some dances, the caller may vary the order of moves during the dance, a fact that is usually explained as part of the caller's instructions. After the walk-through, the music begins and the dancers repeat that sequence some number of times before that dance ends, often 10 to 15 minutes, depending on the length of the contra lines. Calls are normally given at least the first few times through, and often for the last. At the end of each dance, the dancers thank their partners. The contra dance tradition in North America is to change partners for every dance, while in the United Kingdom people typically dance with the same partner the entire evening. One who attends an evening of contra dances in North America does not need to bring his or her own partner. In the short break between individual dances, the dancers invite each other to dance. Booking ahead, by asking a partner or partners ahead of time for each individual dance, is common at some venues, but has been discouraged by some. Most contra dances do not have an expected dress code. No special outfits are worn, but comfortable and loose-fitting clothing that does not restrict movement is usually recommended. Women usually wear skirts or dresses as they are cooler than wearing trousers; some men also dance in kilts or skirts. Low-heeled, broken-in, soft-soled, non-marking shoes, such as dance shoes, sneakers, or sandals, are recommended and, in some places, required. As dancing can be aerobic, dancers are sometimes encouraged to bring a change of clothes. As in any social dance, cooperation is vital to contra dancing. Since, over the course of any single dance, individuals interact not just with their partners but with everyone else in the set, contra dancing might be considered a group activity. As will necessarily be the case when beginners are welcomed in by more practiced dancers, mistakes are made; most dancers are willing to help beginners in learning the steps. However, because the friendly, social nature of the dances can be misinterpreted or even abused, some groups have created anti-harassment policies. Form Formations Contra dances are arranged in long lines of couples. A pair of lines is called a set. 
Sets are generally arranged so they run the length of the hall, with the top of the set being the end closest to the band and caller, and the bottom of the set being the end farthest from the caller. Couples consist of two people, traditionally one male and one female, though same-sex pairs are increasingly common. Traditionally the dancers are referred to as the lady and gent, though various other terms have been used: some dances have used men and women, rejecting ladies and gents as elitist; others have used gender-neutral role terms including bares and bands, jets and rubies, and larks and ravens or robins. Couples interact primarily with an adjacent couple for each round of the dance. Each sub-group of two interacting couples is known to choreographers as a minor set and to dancers as a foursome or hands four. Couples in the same minor set are neighbors. Minor sets originate at the head of the set, starting with the topmost dancers as the ones (the active couple or actives); the other couple are the twos (or inactives). The ones are said to be above their neighboring twos; twos are below. If there is an uneven number of couples dancing, the bottom-most couple will wait out the first time through the dance. There are four common ways of arranging couples in the minor sets: proper, improper, Becket, and triple formations. Traditionally, most dances were in the proper formation, with all the gents in one line and all the ladies in the other. Until the end of the nineteenth century, minor sets were most commonly triples. In the twentieth century, duple-minor dances became more common. Since the mid-twentieth century, improper dances, in which gents and ladies alternate on each side of the set, have become the most common formation. Triple dances have also lost popularity in modern contras, while Becket formation, in which dancers stand next to their partners, facing another couple, is a modern innovation. Progression A fundamental aspect of contra dancing is that, during a single dance, each dancer has one partner, but interacts with many different people. During a single dance, the same pattern is repeated over and over (one time through lasts roughly 30 seconds), but each time, a pair of dancers will dance with new neighbors (moving on to new neighbors is called progressing). Dancers do not need to memorize these patterns in advance, since the dance leader, or caller, will generally explain the pattern for this dance before the music begins, and give people a chance to walk through the pattern so dancers can learn the moves. The walk-through also helps dancers understand how the dance pattern leads them toward new people each time. Once the music starts, the caller continues to describe each move until the dancers are comfortable with that dance pattern. The dance progression is built into the contra dance pattern as continuous motion with the music, and does not interrupt the dancing. While all dancers in the room are part of the same dance pattern, half of the couples in the room are moving toward the band at any moment and half are moving away, so when everybody steps forward, they find new people to dance with. Once a couple reaches the end of the set, they switch direction, dancing back along the set the other way. A single dance runs around ten minutes, long enough to progress at least 15–20 times. 
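The effect of this progression can be pictured with a short Python sketch. It is an illustration only: the couple labels, the list representation, and the helper name are assumptions of this example, not standard contra notation. In this toy model each "one" couple moves one place down the set per time through, each "two" couple moves one place up, and a couple that reaches an end turns around, waiting out one time through before re-entering.

# Toy model of duple-minor contra progression; illustrative only.
def one_time_through(line):
    """line lists couples from the top of the set (nearest the band) to the
    bottom as (couple_id, role) pairs, where role '1' progresses down the set
    and role '2' progresses up. Each '1' couple trades places with the '2'
    couple just below it; a couple with no one to dance with waits out."""
    line = list(line)
    i = 0
    while i < len(line) - 1:
        if line[i][1] == '1' and line[i + 1][1] == '2':
            line[i], line[i + 1] = line[i + 1], line[i]  # the ones move down one place
            i += 2
        else:
            i += 1
    # A couple that has reached an end turns around and takes the other role.
    if line[0][1] == '2':
        line[0] = (line[0][0], '1')
    if line[-1][1] == '1':
        line[-1] = (line[-1][0], '2')
    return line

line = [('A', '1'), ('B', '2'), ('C', '1'), ('D', '2'), ('E', '1'), ('F', '2')]
for time_through in range(6):
    print(time_through, line)
    line = one_time_through(line)
# Couple A's successive neighbors are B, D, F, E, C: in a short set such as
# this one, every couple eventually dances with every other couple.

Running the sketch shows half the couples drifting toward the band and half away from it, which is exactly why everyone finds a new neighbor on each time through.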
If the sets are short to medium length, the caller often tries to run the dance until each couple has danced with every other couple both as a one and a two and returned to where they started. A typical room of contra dancers may include about 120 people; but this varies from 30 people in smaller towns, to over 300 people in cities like Washington DC, Los Angeles, or New York. With longer sets (more than 60 people), one dance typically does not allow dancing with every dancer in the group. Choreography Contra dance choreography specifies the dance formation, the figures, and the sequence of those figures in a dance. Contra dance figures (with a few exceptions) do not have defined footwork; within the limits of the music and the comfort of their fellow dancers, individuals move according to their own taste. Most contra dances consist of a sequence of about 6 to 12 individual figures, prompted by the caller in time to the music as the figures are danced. As the sequence repeats, the caller may cut down his or her prompting, and eventually drop out, leaving the dancers to each other and the music. A figure is a pattern of movement that typically takes eight counts, although figures with four or 16 counts are also common. Each dance is a collection of figures assembled to allow the dancers to progress along the set (see "Progression", above). A count (as used above) is one half of a musical measure, such as one quarter note in 2/4 time or three eighth notes in 6/8 time. A count may also be called a step, as contra dance is a walking form, and each count of a dance typically matches a single physical step in a figure. Typical contra dance choreography comprises four parts, each 16 counts (8 measures) long. The parts are called A1, A2, B1 and B2. This nomenclature stems from the music: most contra dance tunes (as written) have two parts (A and B), each 8 measures long, and each fitting one part of the dance. The A and B parts are each played twice in a row, hence A1, A2, B1, B2. While the same music is generally played in, for example, parts A1 and A2, distinct choreography is followed in those parts. Thus, a contra dance is typically 64 counts, and goes with a 32-measure tune. Tunes of this form are called "square"; tunes that deviate from this form are called "crooked". Sample contra dances:
Traditional – the actives do most of the movement: Chorus jig (proper duple minor)
A1 (16) Actives down the outside and back. (The inactives stand still or substitute a swing.)
A2 (16) Actives down the center, turn individually, come back, and cast off. (The inactives stand still at first, take a step up the hall, and then participate in the cast.)
B1 (16) Actives turn contra corners. (The inactives participate in half the turns.)
B2 (16) Actives meet in the middle for a balance and swing, end swing facing up. (The inactives stand still.)
Note: inactives will often clog in place or otherwise participate in the dance, even though the figures do not call for them to move.
Modern – the dance is symmetrical for actives and inactives: "Hay in the Barn" by Chart Guthrie (improper duple minor)
A1 (16) Neighbors balance and swing.
A2 (8) Ladies chain across, (8) half hey, ladies pass right shoulders to start.
B1 (16) Partners balance and swing.
B2 (8) Ladies chain across, (8) half hey, ladies pass right shoulders to start.
Many modern contra dances have these characteristics:
longways for as many as will
first couples improper, or Becket formation
flowing choreography
no-one stationary for more than 16 beats (e.g. 
first couple balance and swing, finish facing down to make lines of four)
containing at least one swing and normally both a partner swing and a neighbor swing
the vast majority of the moves from a set of well-known moves that the dancers know already
composed mostly of moves that keep all dancers connected
generally danced to 32-bar jigs or reels played at between 110 and 130 bpm
danced with a smooth walk with many spins and twirls
An event which consists primarily (or solely) of dances in this style is sometimes referred to as a "modern urban contra dance". Music The most common contra dance repertoire is rooted in the Anglo-Celtic tradition as it developed in North America. Irish, Scottish, French Canadian, and old-time tunes are common, and klezmer tunes have also been used. The old-time repertoire includes very few of the jigs common in the others. Tunes used for a contra dance are nearly always "square" 64-beat tunes, in which one time through the tune consists of two 16-beat parts, each played twice (this is notated AABB). However, any 64-beat tune will do; for instance, three 8-beat parts could be played AABB AACC, or two 8-beat parts and one 16-beat part could be played AABB CC. Tunes not 64 beats long are called "crooked" and are almost never used for contra dancing, although a few crooked dances have been written as novelties. Contra tunes are played at a narrow range of tempos, between 108 and 132 bpm. Fiddles are considered to be the primary melody instrument in contra dancing, though other stringed instruments can also be used, such as the mandolin or banjo, in addition to a few wind instruments, for example, the accordion. The piano, guitar, and double bass are frequently found in the rhythm section of a contra dance band. Occasionally, percussion instruments are also used in contra dancing, such as the Irish bodhran or, less frequently, the dumbek or washboard. The last few years have seen some bands incorporate the Quebecois practice of tapping feet on a board while playing an instrument (often the fiddle). Until the 1970s it was traditional to play a single tune for the duration of a contra dance (about 5 to 10 minutes). Since then, contra dance musicians have typically played tunes in sets of two or three related (and sometimes contrasting) tunes, though single-tune dances are again becoming popular with some northeastern bands. In the Celtic repertoires it is common to change keys with each tune. A set might start with a tune in G, switch to a tune in D, and end with a tune in Bm. Here, D is related to G as its dominant (5th), while D and Bm share a key signature of two sharps. In the old-time tradition the musicians will either play the same tune for the whole dance, or switch to tunes in the same key. This is because the tunings of the five-string banjo are key-specific. An old-time band might play a set of tunes in D, then use the time between dances to retune for a set of tunes in A. (Fiddlers also may take this opportunity to retune; tune- or key-specific fiddle tunings are uncommon in American Anglo-Celtic traditions other than old-time.) In the Celtic repertoires it is most common for bands to play sets of reels and sets of jigs. However, since the underlying beat structure of jigs and reels is the same (two "counts" per bar), bands will occasionally mix jigs and reels in a set. 
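As a rough arithmetic check of how a "square" tune lines up with the four 16-count parts of a dance, the short Python sketch below multiplies out the structure described above. It is illustrative only; the 120 bpm value is simply a convenient tempo inside the quoted 108–132 bpm range, not a prescribed one.

# Rough sketch: how a "square" AABB tune matches a 64-count contra dance sequence.
COUNTS_PER_BAR = 2                    # jigs and reels both give two dance "counts" per bar
bars_per_part = {"A": 8, "B": 8}      # each written part of the tune is 8 bars long
order = ["A", "A", "B", "B"]          # one time through a square tune: A1, A2, B1, B2

counts = sum(bars_per_part[part] * COUNTS_PER_BAR for part in order)
print(counts)                         # 64, matching the 64-count dance sequence

bpm = 120                             # assumed tempo within the quoted range
print(counts * 60 / bpm)              # 32.0 seconds per time through the dance,
                                      # consistent with the "roughly 30 seconds" noted earlier

A dance that repeats 15–20 times at this tempo therefore lasts on the order of eight to eleven minutes, which matches the typical dance length described above.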
Some of the most popular contra dance bands in recent years are Great Bear, Perpetual E-Motion, Buddy System, Crowfoot, Elixir, the Mean Lids, Nor'easter, Nova, Pete's Posse, the Stringrays, the Syncopaths, and Wild Asparagus. Techno contras In recent years, younger contra dancers have begun establishing "crossover contra" or "techno contra" – contra dancing to techno, hip-hop, and other modern forms of music. While challenging for DJs and callers, the fusion of contra patterns with moves from hip-hop, tango, and other forms of dance has made this form of contra dance a rising trend since 2008. Techno differs from other contra dancing in that it is usually done to recorded music, although there are some bands that play live for techno dances. Techno has become especially prevalent in Asheville, North Carolina, but regular techno contra dance series are spreading up the East Coast to locales such as Charlottesville, Virginia; Washington, D.C.; Amherst, Massachusetts; Greenfield, Massachusetts; and various North Carolina dance communities, with one-time or annual events cropping up in locations farther west, including California, Portland, Oregon, and Washington state. They also sometimes appear as late night events during contra dance weekends. In response to the demand for techno contra, a number of contra dance callers have developed repertoires of recorded songs to play that go well with particular contra dances; these callers are known as DJs. A kind of techno/traditional contra fusion has arisen, with at least one band, Buddy System, playing live music melded with synth sounds for techno contra dances. See also Ceili dance Country Dance and Song Society Dutch crossing International folk dance Quadrille Citations General and cited references See chapter VI, "Frolics for Fun: Dances, Weddings and Dinner Parties, pages 109 – 124. (Reprint: first published in 1956 by American Squares as a part of the American Squares Dance Series) See chapter entitled "Country Dancing," Pages 57 – 120. (The first edition was published in 1939.) External links Contra dance associations Country Dance and Song Society (CDSS) preserves a variety of Anglo-American folk traditions in North America, including folk music, folk song, English country dance, contra dance and morris dance. Anglo-American Dance Service Based in Belgium, promoting contra dance and English dance in Western Europe. Descriptions & definitions Gary Shapiro's What Is Contra Dance? Hamilton Contra Dances A Contra Dance Primer Hamilton Contra Dances Contraculture: An introduction to contradancing Sharon Barrett Kennedy's "Now, What in the World is Contra Dancing?" Different traditions and cultures in contra dance Colin Hume's Advice to Americans in England Research resources University of New Hampshire Special Collections: New Hampshire Library of Traditional Music and Dance Finding contra dances CDSS Dance Map – interactive, crowd sourced map of contra and folk dances around the world Contra Dance Links – comprehensive, up-to-date lists of local dances, weekend dances, musicians, callers, etc. The Dance Gypsy – locations of contra dances, and many other folk dances, around the world Try Contra – Find contra dances using ZIP Code search. National Contra Grid – Look up dances by day-of-week & City. ContraDance.org – Description, Links, videos, and local schedule. 
In the United Kingdom UK Contra Clubs Are You Dancing – calendar of social dance events in the UK, including contras English Folk Dance and Song Society dance calendar – calendar of folk dance events in the UK, including contras In France Paris Contra Dance Video Contra dance in Oswego, New York, with music by the Great Bear Trio. 2013. Two American country dance films on DVD: "Country Corners" (1976), and "Full of Life A-Dancin'" (1978). Contra dance in Tacoma, Washington, with music by Crowfoot. 2009. Welcome to the Contra Dance – dancers discuss their experiences contra dancing, set over photographs of contras The New Contra Dance Commercial (2 minute look at contra in a few dance halls, see playlist) Why We Contra Dance (dancers discuss why they enjoy contra dance, with video of dancing) Dancing Community (dancers from Louisville talk about their contra dancing experiences, with video of dancing) Contra Dancing and New Dancers (new contra dancers in Atlanta, Georgia, discuss their experience) A History of Contra (documentary of contra dancing, spanning 150+ years of dance culture) Contra dance in Chattanooga, Tennessee with music by Buddy System and calling by Seth Tepfer, 2019 The Contra Dance (Doug Plummer's 3 minute slide + video set, with Ed Howe's fiddle music from May 2019) Contra dance in Glen Echo, Maryland with music by Elixir and calling by Nils Fredland, Contrastock 4, 2014. Contra dance in Pinellas, Florida with music by ContraForce and calling by Charlotte Crittenden, 2017) Example Contra Dance Lesson (caller Cis Hinkle explains the basics, with contra vocabulary) Contra Nils Walkthrough and Dance Articles containing video clips Country dance Folk dance Social dance
5413
https://en.wikipedia.org/wiki/Coin%20collecting
Coin collecting
Coin collecting is the collecting of coins or other forms of minted legal tender. Coins of interest to collectors often include those that were in circulation for only a brief time, coins with mint errors, and especially beautiful or historically significant pieces. Coin collecting can be differentiated from numismatics, in that the latter is the systematic study of currency as a whole, though the two disciplines are closely interlinked. Many factors determine a coin's value including grade, rarity, and popularity. Commercial organizations offer grading services and will grade, authenticate, attribute, and encapsulate most coins. History People have hoarded coins for their bullion value for as long as coins have been minted. However, the collection of coins for their artistic value was a later development. Evidence from the archaeological and historical record of Ancient Rome and medieval Mesopotamia indicates that coins were collected and catalogued by scholars and state treasuries. It also seems probable that individual citizens collected old, exotic or commemorative coins as an affordable, portable form of art. According to Suetonius in his De vita Caesarum (The Lives of the Twelve Caesars), written in the first century AD, the emperor Augustus sometimes presented old and exotic coins to friends and courtiers during festivals and other special occasions. While the literary sources are scarce, it's evident that collecting of ancient coins persisted in the Western World during the Middle Ages among rulers and high nobility. Contemporary coin collecting and appreciation began around the fourteenth century. During the Renaissance, it became a fad among some members of the privileged classes, especially kings and queens. The Italian scholar and poet Petrarch is credited with being the pursuit's first and most famous aficionado. Following his lead, many European kings, princes, and other nobility kept collections of ancient coins. Some notable collectors were Pope Boniface VIII, Emperor Maximilian I of the Holy Roman Empire, Louis XIV of France, Ferdinand I of Spain and Holy Roman Emperor, Henry IV of France and Elector Joachim II of Brandenburg, who started the Berlin Coin Cabinet (German: Münzkabinett Berlin). Perhaps because only the very wealthy could afford the pursuit, in Renaissance times coin collecting became known as the "Hobby of Kings." During the 17th and 18th centuries coin collecting remained a pursuit of the well-to-do. But rational, Enlightenment thinking led to a more systematic approach to accumulation and study. Numismatics as an academic discipline emerged in these centuries at the same time as a growing middle class, eager to prove their wealth and sophistication, began to collect coins. During the 19th and 20th centuries, coin collecting increased further in popularity. The market for coins expanded to include not only antique coins, but foreign or otherwise exotic currency. Coin shows, trade associations, and regulatory bodies emerged during these decades. The first international convention for coin collectors was held 15–18 August 1962, in Detroit, Michigan, and was sponsored by the American Numismatic Association and the Royal Canadian Numismatic Association. Attendance was estimated at 40,000. As one of the oldest and most popular world pastimes, coin collecting is now often referred to as the "King of Hobbies". Motivations The motivations for collecting vary from one person to another. 
Possibly the most common type of collector is the hobbyist, who amasses a collection purely for the pleasure of it with no real expectation of profit. Another frequent reason for purchasing coins is as an investment. As with stamps, precious metals, or other commodities, coin prices rise and fall based on supply and demand. Prices drop for coins that are not in long-term demand, and increase along with a coin's perceived or intrinsic value. Investors buy with the expectation that the value of their purchase will increase over the long term. As with all types of investment, the principle of caveat emptor applies, and study is recommended before buying. Likewise, as with most collectibles, a coin collection does not produce income until it is sold, and may even incur costs (for example, the cost of safe deposit box storage) in the interim. Some people collect coins for patriotic reasons. One example of a patriotic coin was minted in 1813 by the United Provinces of the Río de la Plata. The country was founded after a successful revolution that freed it from Spain's rule. One of the first acts of legislation the new country enacted was to mint coins to replace the Spanish currency that had been in use. Many countries, before and after the founding of the United Provinces of the Río de la Plata, have issued coins to replace the coins of other countries. Mints from various countries also create coins specifically for patriotic collectors. Patriotic coins can be found at U.S. mints; an example is the 2022 Purple Heart Commemorative Coin Program. Collector types Some coin collectors are generalists and accumulate examples from a broad variety of historical or geographically significant coins, but most collectors focus on a narrower, specialist interest. For example, some collectors focus on coins based on a common theme, such as coins from a country (often the collector's own), a coin from each year of a series, or coins with a common mint mark. There are also completists who seek an example of every type of coin within a certain category. One of the most famous of this type of collector is Louis E. Eliasberg, the only collector thus far to assemble a complete set of known coins of the United States. Collecting foreign coins is another specialty that some numismatists enjoy. Coin hoarders are similar to investors in the sense that they accumulate coins for potential long-term profit. However, they typically do not take into account aesthetic considerations. This is most common with coins whose metal value exceeds their spending value. Speculators, be they amateurs or commercial buyers, may purchase coins in bulk or in small batches, and often act with the expectation of delayed profit. They may wish to take advantage of a spike in demand for a particular coin (for example, during the annual release of Canadian numismatic collectibles from the Royal Canadian Mint). The speculator might hope to buy the coin in large lots and sell at a profit within weeks or months. Speculators may also buy common circulation coins for their intrinsic metal value. Coins without collectible value may be melted down or distributed as bullion for commercial purposes. Typically they purchase coins that are composed of rare or precious metals, or coins that have a high purity of a specific metal. A final type of collector is the inheritor, an accidental collector who acquires coins from another person as part of an inheritance. 
The inheritor type may not necessarily have an interest in or know anything about numismatics at the time of the acquisition. Grade and value In coin collecting, the condition of a coin (its grade) is key to its value; a high-quality example with minimal wear is often worth many times more than a poor example. Collectors have created systems to describe the overall condition of coins. Any damage, such as wear or cleaning, can substantially decrease a coin's value. By the mid-20th century, with the growing market for rare coins, the American Numismatic Association had developed a numerical grading standard that is used for most coins in North America, numbering coins from 1 (poor) to 70 (mint state) and setting aside a separate category for proof coinage. This system is often shunned by coin experts in Europe and elsewhere, who prefer to use adjectival grades. Nevertheless, most grading systems use similar terminology and values, and remain mutually intelligible. Certification services Third-party grading (TPG), also known as coin certification services, emerged in the 1980s with the goals of standardizing grading, exposing alterations, and eliminating counterfeits. For tiered fees, certification services grade, authenticate, attribute, and encapsulate coins in clear plastic holders. Coin certification has greatly reduced the number of counterfeits and grossly overgraded coins, and improved buyer confidence. Certification services can sometimes be controversial because grading is subjective; coins may be graded differently by different services or even upon resubmission to the same service. The numeric grade alone does not represent all of a coin's characteristics, such as toning, strike, brightness, color, luster, and attractiveness. Due to potentially large differences in value over slight differences in a coin's condition, some submitters will repeatedly resubmit a coin to a grading service in the hope of receiving a higher grade. Because fees are charged for certification, repeated submissions divert money that could otherwise be spent on additional coins. Clubs Coin collector clubs offer a variety of benefits to members. They usually serve as a source of information and a point of unification for people interested in coins. Collector clubs are popular both offline and online. See also Challenge coin Coin Coin catalog Coin grading Coin slab Exonumia Numismatics Regular issue coinage Seigniorage List of most expensive coins Examples Byron Reed Collection Collection at Ibn Sina Academy References Exonumia
5415
https://en.wikipedia.org/wiki/Crokinole
Crokinole
Crokinole is a disk-flicking dexterity board game, possibly of Canadian origin, similar to the games of pitchnut, carrom, and pichenotte, with elements of shuffleboard and curling reduced to table-top size. Players take turns shooting discs across the circular playing surface, trying to land their discs in the higher-scoring regions of the board, particularly the recessed center hole worth 20 points, while also attempting to knock opposing discs off the board and into the 'ditch'. In crokinole, the shooting is generally towards the center of the board, unlike carrom and pitchnut, where the shooting is towards the four outer corner pockets, as in pool. Crokinole is also played using cue sticks, and there is a special category for cue stick participants at the World Crokinole Championships in Tavistock, Ontario, Canada. Equipment Board dimensions vary, with a playing surface typically of polished wood or laminate. The playing area is divided into three concentric rings worth 5, 10, and 15 points, moving inward from the outside, with a shallow 20-point hole at the center. The inner 15-point ring is guarded by 8 small bumpers or posts. The outer ring of the board is divided into four quadrants. The outer edge of the board is raised slightly to keep errant shots from flying out, with a gutter between the playing surface and the edge to collect discarded pieces. Crokinole boards are typically octagonal or round in shape. The wooden discs are roughly checker-sized, slightly smaller in diameter than the board's central hole, and typically have one side slightly concave and one side slightly convex, mainly a result of the inherent properties of the wood rather than a planned design. Alternatively, the game may be played with ring-shaped pieces with a central hole. Powder The use of any lubricating powder in crokinole is controversial, with some purists reviling the practice. Powder is sometimes used to ensure pieces slide smoothly on the surface. Boric acid was popular for a long time, but is now considered toxic and has been replaced with safer substitutes. The EU has classified boric acid as a "Serious Health Hazard". In the UK, many players use a version of anti-set-off spray powder from the printing industry, which has specific electrostatic properties, with particles of 50-micrometre diameter. The powder is made of pure food-grade plant/vegetable starch. The rules of the World Crokinole Championships in Tavistock, Ontario, Canada, state: "The WCC waxes boards, as required, with paste wax. On tournament day powdered shuffleboard wax (CAPO fast speed, yellow and white container) is placed in the ditch. Only tournament organizers will apply quality granular shuffleboard wax. Wax will be placed in the ditch area so that players can rub their discs in the wax prior to shooting, if they desire. Contestants are not allowed to apply lubricants of any type to the board. Absolutely no other lubricant will be allowed". Gameplay Crokinole is most commonly played by two players, or by four players in teams of two, with partners sitting across the board from each other. Players take turns flicking their discs from the outer edge of their quadrant of the board onto the playfield. Shooting is usually done by flicking the disc with a finger, though sometimes small cue sticks may be used. If there are any enemy discs on the board, a player must make contact, directly or indirectly, with an enemy disc during the shot. 
If unsuccessful, the shot disc is "fouled" and removed from the board, along with any of the player's other discs that were moved during the shot. When there are no enemy discs on the board, many (but not all) rules also state that a player must shoot for the centre of the board, and a shot disc must finish either completely inside the 15-point guarded ring line, or (depending on the specifics of the rules) be inside or touching this line. This is often called the "no hiding" rule, since it prevents players from placing their first shots where their opponent must traverse completely through the guarded centre ring to hit them and avoid fouling. When playing without this rule, a player may generally make any shot desired, and as long as a disc remains completely inside the outer line of the playfield, it remains on the board. During any shot, any disc that falls completely into the recessed central "20" hole (a.k.a. the "Toad" or "Dukie") is removed from play, and counts as twenty points for the owner of the disc at the end of the round, assuming the shot is valid. Scoring occurs after all pieces (generally 12 per player or team) have been played, and is differential: i.e., the player or team with higher score is awarded the difference between the higher and lower scores for the round, thus only one team or player each round gains points. Play continues until a predetermined winning score is reached. History of the game After 30 years of research, Wayne Kelly published his assessment of the first origins of crokinole, in The Crokinole Book, Third Edition, page 28, which leaves the door open to future research and discovery of the origins of the game of crokinole: "The earliest American crokinole board and reference to the game is M. B. Ross's patented New York board of 1880. The earliest Canadian reference is 1867 (Sports and Games in Canadian Life: 1700 to the Present by Howell and Howell, Toronto, MacMillan Company of Canada, 1969, p.61), and the oldest piece dated at 1875 by Ekhardt Wettlaufer. Could Ekhardt Wettlaufer have visited friends in New York state, noticed an unusual and entertaining parlour game being played, and upon arrival at home, made an imitation as a gift for his son? After all, he was a talented, and no doubt resourceful, painter and woodworker. Or was it the other way around? Did Mr. M. B. Ross travel to Ontario, take note of a quaint piece of rural folk art, and upon return to New York, put his American entrepreneurial skills to work - complete with patent name - on his new crokinole board? As the trail is more than 100 years old and no other authoritative source can be found, it appears, at the moment, that Eckhardt Wettlaufer or M. B. Ross are as close as we can get to answering the question WHO (made the first crokinole board.)" The earliest known crokinole board was made by craftsman Eckhardt Wettlaufer in 1876 in Perth County, Ontario, Canada. It is said Wettlaufer crafted the board as a fifth birthday present for his son Adam, which is now part of the collection at the Joseph Schneider Haus, a national historic site in Kitchener, Ontario, with a focus on Germanic folk art. Several other home-made boards dating from southwestern Ontario in the 1870s have been discovered since the 1990s. A board game similar to crokinole was patented on 20 April 1880 by Joshua K. Ingalls (US Patent No. 226,615) Crokinole is often believed to be of Mennonite or Amish origins, but there is no factual data to support such a claim. 
The reason for this misconception may be due to its popularity in Mennonite and Amish groups. The game was viewed as a rather innocuous pastime – unlike the perception that diversions such as card playing or dancing were considered "works of the Devil" as held by many 19th-century Protestant groups. The oldest roots of crokinole, from the 1860s, suggest the British and South Asian games, such as carrom, are the most likely antecedents of what became crokinole. In 2006, a documentary film called Crokinole was released. The world premiere occurred at the Princess Cinema in Waterloo, Ontario, in early 2006. The movie follows some of the competitors of the 2004 World Crokinole Championship as they prepare for the event. Origins of the name The name "crokinole" derives from , a French word today designating: in France, a kind of cookie (or biscuit in British English), similar to a biscotto; in French Canada, a pastry somewhat similar to a doughnut (except for the shape). It also used to designate the action of flicking with the finger (Molière, Le malade imaginaire; or Voltaire, Lettre à Frédéric II Roi de Prusse; etc.), and this seems the most likely origin of the name of the game. was also a synonym of , a word that gave its name to the different but related games of pichenotte and pitchnut. From The Crokinole Book 3rd Edition by Wayne S. Kelly "Is it possible that the English word 'crokinole' is simply an etymological offspring of the French word 'croquignole'? It would appear so for the following reasons. Going back to the entry for Crokinole in Webster's Third New International Dictionary, within the etymological brackets, it says: [French croquignole, fillip]. This is a major clue. The word fillip, according to Webster's, has two definitions: "1. a blow or gesture made by the sudden forcible release of a finger curled up against the thumb; a short sharp blow. 2. to strike by holding the nail of a finger curled up against the ball of the thumb and then suddenly releasing it from that position". So it seems evident, then, that our game of crokinole derives its name from the verb form (of croquignole) defining the principle action in the game, that of flicking or 'filliping' a playing piece across the board". The word Crokinole is generally acknowledged to have been derived from the French Canadian word "Croquignole", a word with several meanings, such as fillip, snap, biscuit, bun and a woman's wavy hairstyle popular at the turn of the century. The US state of New York shares border crossings with both of the Canadian provinces of Ontario and Quebec, all three of which are popular "hotbeds" of Crokinole playing. Crokinole is called ('flick-board') (and occasionally knipsdesh (flick-table)) in the Plautdietsch spoken by Russian Mennonites. World Crokinole Championship The World Crokinole Championship (WCC) tournament has been held annually since 1999 on the first Saturday of June in Tavistock, Ontario. Tavistock was chosen as the host city because it was the home of Eckhardt Wettlaufer, the maker of the earliest known board. The tournament has seen registration from every Canadian province, several American states, Germany, Australia, Spain and the UK. The WCC singles competition begins with a qualifying round in which competitors play 10 matches against randomly assigned opponents. The qualifying round is played in a large randomly determined competition. At the end of the opening round, the top 16 competitors move on to the playoffs. 
The top four in the playoffs advance to a final round robin to play each other, and the top two compete in the finals. The WCC doubles competition begins with a qualifying round of 8 matches against randomly assigned opponents with the top six teams advancing to a playoff round robin to determine the champions. The WCC has multiple divisions, including a singles finger-shooting category for competitive players (adult singles), novices (recreational), and younger players (intermediate, 11–14 yrs; junior, 6–10 yrs), as well as a division for cue-shooters (cues singles). The WCC also awards a prize for the top 20-hole shooter in the qualifying round of competitive singles, recreational singles, cues singles, intermediate singles, and in the junior singles. The tournament also holds doubles divisions for competitive fingers-shooting (competitive doubles), novices (recreational doubles), younger players (youth doubles, 6–16yrs), and cues-shooting (cues doubles). The official board builder of the World Crokinole Championships is Jeremy Tracey. National Crokinole Association The National Crokinole Association (NCA) is an association that supports existing, and the development of new, crokinole clubs and tournaments. While the majority of NCA events are based in Ontario, Canada, the NCA has held sanctioned events in the Canadian provinces of PEI and BC, as well as in New York State. The collection of NCA tournaments is referred to as the NCA Tour. Each NCA Tour season begins at the Tavistock World Crokinole Championships in June, and concludes at the Ontario Singles Crokinole Championship in May of the following years. The results of each tournament award points for each player, as they compete for their season-ending ranking classification. See also Chapayev Novuss Pichenotte Table shuffleboard References External links Crokinole FAQ by Wayne and Caleb Kelly World Crokinole Championships in Tavistock, Ontario National Crokinole Association The Crokinole Post Crokinole Skills Competition Videos Our Canada Magazine Article about Crokinole Crokinole Friends of the Pichenotte Guys Crokinole Canada by Ted Fuller Crokinole Game Boards by Jeremy Tracey The Crokinole Depot by The Beierling Brothers Board games introduced in the 1870s Disk-flicking games Tabletop cue games Sports originating in Canada Canadian board games
5416
https://en.wikipedia.org/wiki/Capitalism
Capitalism
Capitalism is an economic system based on the private ownership of the means of production and their operation for profit. Central characteristics of capitalism include capital accumulation, competitive markets, price systems, private property, the recognition of property rights, voluntary exchange, and wage labor. In a market economy, decision-making and investments are determined by owners of wealth, property, or the ability to maneuver capital or production in capital and financial markets—whereas prices and the distribution of goods and services are mainly determined by competition in goods and services markets. Economists, historians, political economists, and sociologists have adopted different perspectives in their analyses of capitalism and have recognized various forms of it in practice. These include laissez-faire or free-market capitalism, anarcho-capitalism, state capitalism, and welfare capitalism. Different forms of capitalism feature varying degrees of free markets, public ownership, obstacles to free competition, and state-sanctioned social policies. The degree of competition in markets and the role of intervention and regulation, as well as the scope of state ownership, vary across different models of capitalism. The extent to which different markets are free and the rules defining private property are matters of politics and policy. Most of the existing capitalist economies are mixed economies that combine elements of free markets with state intervention and in some cases economic planning. Capitalism in its modern form emerged from agrarianism in 16th-century England and from mercantilist practices by European countries in the 16th to 18th centuries. The Industrial Revolution of the 18th century established capitalism as a dominant mode of production, characterized by factory work and a complex division of labor. Through the process of globalization, capitalism spread across the world in the 19th and 20th centuries, especially before World War I and after the end of the Cold War. During the 19th century, capitalism was largely unregulated by the state, but became more regulated in the post-World War II period through Keynesianism, followed by a return of more unregulated capitalism starting in the 1980s through neoliberalism. Market economies have existed under many forms of government and in many different times, places and cultures. Modern industrial capitalist societies developed in Western Europe in a process that led to the Industrial Revolution. Economic growth is a characteristic tendency of capitalist economies. Etymology The term "capitalist", meaning an owner of capital, appears earlier than the term "capitalism" and dates to the mid-17th century. "Capitalism" is derived from capital, which evolved from capitale, a late Latin word based on caput, meaning "head"—which is also the origin of "chattel" and "cattle" in the sense of movable property (only much later coming to refer only to livestock). Capitale emerged in the 12th to 13th centuries to refer to funds, stock of merchandise, sum of money or money carrying interest. By 1283, it was used in the sense of the capital assets of a trading firm and was often interchanged with other words—wealth, money, funds, goods, assets, property and so on. The Hollantse Mercurius uses "capitalists" in 1633 and 1654 to refer to owners of capital. In French, Étienne Clavier referred to capitalistes in 1788, four years before its first recorded English usage by Arthur Young in his work Travels in France (1792). 
In his Principles of Political Economy and Taxation (1817), David Ricardo referred to "the capitalist" many times. English poet Samuel Taylor Coleridge used "capitalist" in his work Table Talk (1823). Pierre-Joseph Proudhon used the term in his first work, What is Property? (1840), to refer to the owners of capital. Benjamin Disraeli used the term in his 1845 work Sybil. The initial use of the term "capitalism" in its modern sense is attributed to Louis Blanc in 1850 ("What I call 'capitalism' that is to say the appropriation of capital by some to the exclusion of others") and Pierre-Joseph Proudhon in 1861 ("Economic and social regime in which capital, the source of income, does not generally belong to those who make it work through their labor"). Karl Marx frequently referred to the "capital" and to the "capitalist mode of production" in Das Kapital (1867). Marx did not use the form capitalism but instead used capital, capitalist and capitalist mode of production, which appear frequently. Due to the word being coined by socialist critics of capitalism, economist and historian Robert Hessen stated that the term "capitalism" itself is a term of disparagement and a misnomer for economic individualism. Bernard Harcourt agrees with the statement that the term is a misnomer, adding that it misleadingly suggests that there is such as a thing as "capital" that inherently functions in certain ways and is governed by stable economic laws of its own. In the English language, the term "capitalism" first appears, according to the Oxford English Dictionary (OED), in 1854, in the novel The Newcomes by novelist William Makepeace Thackeray, where the word meant "having ownership of capital". Also according to the OED, Carl Adolph Douai, a German American socialist and abolitionist, used the term "private capitalism" in 1863. History Capitalism, in its modern form, can be traced to the emergence of agrarian capitalism and mercantilism in the early Renaissance, in city-states like Florence. Capital has existed incipiently on a small scale for centuries in the form of merchant, renting and lending activities and occasionally as small-scale industry with some wage labor. Simple commodity exchange and consequently simple commodity production, which is the initial basis for the growth of capital from trade, have a very long history. During the Islamic Golden Age, Arabs promulgated capitalist economic policies such as free trade and banking. Their use of Indo-Arabic numerals facilitated bookkeeping. These innovations migrated to Europe through trade partners in cities such as Venice and Pisa. Italian mathematicians traveled the Mediterranean talking to Arab traders and returned to popularize the use of Indo-Arabic numerals in Europe. Agrarianism The economic foundations of the feudal agricultural system began to shift substantially in 16th-century England as the manorial system had broken down and land began to become concentrated in the hands of fewer landlords with increasingly large estates. Instead of a serf-based system of labor, workers were increasingly employed as part of a broader and expanding money-based economy. The system put pressure on both landlords and tenants to increase the productivity of agriculture to make profit; the weakened coercive power of the aristocracy to extract peasant surpluses encouraged them to try better methods, and the tenants also had incentive to improve their methods in order to flourish in a competitive labor market. 
Terms of rent for land were becoming subject to economic market forces rather than to the previous stagnant system of custom and feudal obligation. Mercantilism The economic doctrine prevailing from the 16th to the 18th centuries is commonly called mercantilism. This period, the Age of Discovery, was associated with the geographic exploration of foreign lands by merchant traders, especially from England and the Low Countries. Mercantilism was a system of trade for profit, although commodities were still largely produced by non-capitalist methods. Most scholars consider the era of merchant capitalism and mercantilism as the origin of modern capitalism, although Karl Polanyi argued that the hallmark of capitalism is the establishment of generalized markets for what he called the "fictitious commodities", i.e. land, labor and money. Accordingly, he argued that "not until 1834 was a competitive labor market established in England, hence industrial capitalism as a social system cannot be said to have existed before that date". England began a large-scale and integrative approach to mercantilism during the Elizabethan Era (1558–1603). A systematic and coherent explanation of balance of trade was made public through Thomas Mun's argument England's Treasure by Forraign Trade, or the Balance of our Forraign Trade is The Rule of Our Treasure. It was written in the 1620s and published in 1664. European merchants, backed by state controls, subsidies and monopolies, made most of their profits by buying and selling goods. In the words of Francis Bacon, the purpose of mercantilism was "the opening and well-balancing of trade; the cherishing of manufacturers; the banishing of idleness; the repressing of waste and excess by sumptuary laws; the improvement and husbanding of the soil; the regulation of prices...". After the period of the proto-industrialization, the British East India Company and the Dutch East India Company, after massive contributions from the Mughal Bengal, inaugurated an expansive era of commerce and trade. These companies were characterized by their colonial and expansionary powers given to them by nation-states. During this era, merchants, who had traded under the previous stage of mercantilism, invested capital in the East India Companies and other colonies, seeking a return on investment. Industrial Revolution In the mid-18th century a group of economic theorists, led by David Hume (1711–1776) and Adam Smith (1723–1790), challenged fundamental mercantilist doctrines—such as the belief that the world's wealth remained constant and that a state could only increase its wealth at the expense of another state. During the Industrial Revolution, industrialists replaced merchants as a dominant factor in the capitalist system and effected the decline of the traditional handicraft skills of artisans, guilds and journeymen. Industrial capitalism marked the development of the factory system of manufacturing, characterized by a complex division of labor between and within work process and the routine of work tasks; and eventually established the domination of the capitalist mode of production. Industrial Britain eventually abandoned the protectionist policy formerly prescribed by mercantilism. In the 19th century, Richard Cobden (1804–1865) and John Bright (1811–1889), who based their beliefs on the Manchester School, initiated a movement to lower tariffs. In the 1840s Britain adopted a less protectionist policy, with the 1846 repeal of the Corn Laws and the 1849 repeal of the Navigation Acts. 
Britain reduced tariffs and quotas, in line with David Ricardo's advocacy of free trade. Modernity Broader processes of globalization carried capitalism across the world. By the beginning of the nineteenth century, a series of loosely connected market systems had come together as a relatively integrated global system, in turn intensifying processes of economic and other globalization. Late in the 20th century, capitalism overcame a challenge by centrally-planned economies and is now the encompassing system worldwide, with the mixed economy as its dominant form in the industrialized Western world. Industrialization allowed cheap production of household items using economies of scale, while rapid population growth created sustained demand for commodities. The imperialism of the 18th-century decisively shaped globalization in this period. After the First and Second Opium Wars (1839–1860) and the completion of the British conquest of India, vast populations of Asia became ready consumers of European exports. Also in this period, Europeans colonized areas of sub-Saharan Africa and the Pacific islands. The conquest of new parts of the globe, notably sub-Saharan Africa, by Europeans yielded valuable natural resources such as rubber, diamonds and coal and helped fuel trade and investment between the European imperial powers, their colonies and the United States: From the 1870s to the early 1920s, the global financial system was mainly tied to the gold standard. The United Kingdom first formally adopted this standard in 1821. Soon to follow were Canada in 1853, Newfoundland in 1865, the United States and Germany (de jure) in 1873. New technologies, such as the telegraph, the transatlantic cable, the radiotelephone, the steamship and railways allowed goods and information to move around the world to an unprecedented degree. In the United States, the term "capitalist" primarily referred to powerful businessmen until the 1920s due to widespread societal skepticism and criticism of capitalism and its most ardent supporters. Contemporary capitalist societies developed in the West from 1950 to the present and this type of system continues to expand throughout different regions of the world—relevant examples started in the United States after the 1950s, France after the 1960s, Spain after the 1970s, Poland after 2015, and others. At this stage capitalist markets are considered developed and are characterized by developed private and public markets for equity and debt, a high standard of living (as characterized by the World Bank and the IMF), large institutional investors and a well-funded banking system. A significant managerial class has emerged and decides on a significant proportion of investments and other decisions. A different future than that envisioned by Marx has started to emerge—explored and described by Anthony Crosland in the United Kingdom in his 1956 book The Future of Socialism and by John Kenneth Galbraith in North America in his 1958 book The Affluent Society, 90 years after Marx's research on the state of capitalism in 1867. The postwar boom ended in the late 1960s and early 1970s and the economic situation grew worse with the rise of stagflation. Monetarism, a modification of Keynesianism that is more compatible with laissez-faire analyses, gained increasing prominence in the capitalist world, especially under the years in office of Ronald Reagan in the United States (1981–1989) and of Margaret Thatcher in the United Kingdom (1979–1990). 
Public and political interest began shifting away from the so-called collectivist concerns of Keynes's managed capitalism to a focus on individual choice, called "remarketized capitalism". The end of the Cold War and the dissolution of the Soviet Union allowed for capitalism to become a truly global system in a way not seen since before World War I. The development of the neoliberal global economy would have been impossible without the fall of communism. Harvard Kennedy School economist Dani Rodrik distinguishes between three historical variants of capitalism: Capitalism 1.0 during the 19th century entailed largely unregulated markets with a minimal role for the state (aside from national defense, and protecting property rights) Capitalism 2.0 during the post-World War II years entailed Keynesianism, a substantial role for the state in regulating markets, and strong welfare states Capitalism 2.1 entailed a combination of unregulated markets, globalization, and various national obligations by states Relationship to democracy The relationship between democracy and capitalism is a contentious area in theory and in popular political movements. The extension of adult-male suffrage in 19th-century Britain occurred along with the development of industrial capitalism and representative democracy became widespread at the same time as capitalism, leading capitalists to posit a causal or mutual relationship between them. However, according to some authors in the 20th-century, capitalism also accompanied a variety of political formations quite distinct from liberal democracies, including fascist regimes, absolute monarchies and single-party states. Democratic peace theory asserts that democracies seldom fight other democracies, but critics of that theory suggest that this may be because of political similarity or stability rather than because they are "democratic" or "capitalist". Moderate critics argue that though economic growth under capitalism has led to democracy in the past, it may not do so in the future as authoritarian régimes have been able to manage economic growth using some of capitalism's competitive principles without making concessions to greater political freedom. Political scientists Torben Iversen and David Soskice see democracy and capitalism as mutually supportive. Robert Dahl argued in On Democracy that capitalism was beneficial for democracy because economic growth and a large middle class were good for democracy. He also argued that a market economy provided a substitute for government control of the economy, which reduces the risks of tyranny and authoritarianism. In his book The Road to Serfdom (1944), Friedrich Hayek (1899–1992) asserted that the free-market understanding of economic freedom as present in capitalism is a requisite of political freedom. He argued that the market mechanism is the only way of deciding what to produce and how to distribute the items without using coercion. Milton Friedman and Ronald Reagan also promoted this view. Friedman claimed that centralized economic operations are always accompanied by political repression. In his view, transactions in a market economy are voluntary and that the wide diversity that voluntary activity permits is a fundamental threat to repressive political leaders and greatly diminishes their power to coerce. Some of Friedman's views were shared by John Maynard Keynes, who believed that capitalism was vital for freedom to survive and thrive. 
Freedom House, an American think-tank that conducts international research on, and advocates for, democracy, political freedom and human rights, has argued that "there is a high and statistically significant correlation between the level of political freedom as measured by Freedom House and economic freedom as measured by the Wall Street Journal/Heritage Foundation survey". In Capital in the Twenty-First Century (2013), Thomas Piketty of the Paris School of Economics asserted that inequality is the inevitable consequence of economic growth in a capitalist economy, and that the resulting concentration of wealth can destabilize democratic societies and undermine the ideals of social justice upon which they are built. States with capitalistic economic systems have thrived under political regimes deemed to be authoritarian or oppressive. Singapore has a successful open market economy as a result of its competitive, business-friendly climate and robust rule of law. Nonetheless, it often comes under fire for its style of government which, though democratic and consistently one of the least corrupt, operates largely under one-party rule. Furthermore, it does not vigorously defend freedom of expression, as evidenced by its government-regulated press and its penchant for upholding laws protecting ethnic and religious harmony, judicial dignity and personal reputation. The private (capitalist) sector in the People's Republic of China has grown exponentially and thrived since its inception, despite having an authoritarian government. Augusto Pinochet's rule in Chile led to economic growth and high levels of inequality by using authoritarian means to create a safe environment for investment and capitalism. Similarly, Suharto's authoritarian reign and extirpation of the Communist Party of Indonesia allowed for the expansion of capitalism in Indonesia. The term "capitalism" in its modern sense is often attributed to Karl Marx. In his Das Kapital, Marx analyzed the "capitalist mode of production" using a method of understanding today known as Marxism. However, Marx himself rarely used the term "capitalism", while it was used twice in the more political interpretations of his work, primarily authored by his collaborator Friedrich Engels. In the 20th century, defenders of the capitalist system often replaced the term "capitalism" with phrases such as free enterprise and private enterprise, and replaced "capitalist" with rentier and investor, in reaction to the negative connotations associated with capitalism. Characteristics In general, capitalism as an economic system and mode of production can be summarized by the following:
Capital accumulation: production for profit and accumulation as the implicit purpose of all or most of production, constriction or elimination of production formerly carried out on a common social or private household basis.
Commodity production: production for exchange on a market; to maximize exchange-value instead of use-value.
Private ownership of the means of production.
High levels of wage labor.
The investment of money to make a profit.
The use of the price mechanism to allocate resources between competing uses.
Economically efficient use of the factors of production and raw materials due to maximization of value added in the production process.
Freedom of capitalists to act in their self-interest in managing their business and investments.
Capital suppliance by "the single owner of a firm, or by shareholders in the case of a joint-stock company." 
Market In free market and laissez-faire forms of capitalism, markets are used most extensively with minimal or no regulation over the pricing mechanism. In mixed economies, which are almost universal today, markets continue to play a dominant role, but they are regulated to some extent by the state in order to correct market failures, promote social welfare, conserve natural resources, fund defense and public safety, or for other rationales. In state capitalist systems, markets are relied upon the least, with the state relying heavily on state-owned enterprises or indirect economic planning to accumulate capital. Competition arises when more than one producer is trying to sell the same or similar products to the same buyers. Adherents of the capitalist theory believe that competition leads to innovation and more affordable prices. Monopolies or cartels can develop, especially if there is no competition. A monopoly occurs when a firm has exclusivity over a market. Hence, the firm can engage in rent-seeking behaviors such as limiting output and raising prices because it has no fear of competition. Governments have implemented legislation for the purpose of preventing the creation of monopolies and cartels. In 1890, the Sherman Antitrust Act became the first legislation passed by the United States Congress to limit monopolies. Wage labor Wage labor, usually referred to as paid work, paid employment, or paid labor, refers to the socioeconomic relationship between a worker and an employer in which the worker sells their labor power under a formal or informal employment contract. These transactions usually occur in a labor market where wages or salaries are market-determined. In exchange for the money paid as wages (usual for short-term work contracts) or salaries (in permanent employment contracts), the work product generally becomes the undifferentiated property of the employer. A wage laborer is a person whose primary means of income is from the selling of their labor in this way. Profit motive The profit motive, in the theory of capitalism, is the desire to earn income in the form of profit. Stated differently, the reason for a business's existence is to turn a profit. The profit motive functions according to rational choice theory, or the theory that individuals tend to pursue what is in their own best interests. Accordingly, businesses seek to benefit themselves and/or their shareholders by maximizing profit. In capitalist theory, the profit motive is said to ensure that resources are being allocated efficiently. For instance, the Austrian School economist Henry Hazlitt explains: "If there is no profit in making an article, it is a sign that the labor and capital devoted to its production are misdirected: the value of the resources that must be used up in making the article is greater than the value of the article itself". Socialist theorists note that, unlike mercantilists, capitalists accumulate their profits while expecting their profit rates to remain the same. This causes problems as earnings in the rest of society do not increase in the same proportion ("What is capitalism", Australian Socialist, https://search.informit.org/doi/10.3316/informit.818838886883514). Private property The relationship between the state, its formal mechanisms, and capitalist societies has been debated in many fields of social and political theory, with active discussion since the 19th century.
Hernando de Soto is a contemporary Peruvian economist who has argued that an important characteristic of capitalism is the functioning state protection of property rights in a formal property system where ownership and transactions are clearly recorded. According to de Soto, this is the process by which physical assets are transformed into capital, which in turn may be used in many more ways and much more efficiently in the market economy. A number of Marxian economists have argued that the Enclosure Acts in England and similar legislation elsewhere were an integral part of capitalist primitive accumulation and that specific legal frameworks of private land ownership have been integral to the development of capitalism. Private property rights are not absolute, as in many countries the state has the power to seize private property, typically for public use, under the powers of eminent domain. Market competition In capitalist economics, market competition is the rivalry among sellers trying to achieve such goals as increasing profits, market share and sales volume by varying the elements of the marketing mix: price, product, distribution and promotion. Merriam-Webster defines competition in business as "the effort of two or more parties acting independently to secure the business of a third party by offering the most favourable terms". It was described by Adam Smith in The Wealth of Nations (1776) and later economists as allocating productive resources to their most highly valued uses and encouraging efficiency. Smith and other classical economists before Antoine Augustine Cournot were referring to price and non-price rivalry among producers to sell their goods on best terms by bidding of buyers, not necessarily to a large number of sellers nor to a market in final equilibrium. Competition is widespread throughout the market process. It is a condition where "buyers tend to compete with other buyers, and sellers tend to compete with other sellers". In offering goods for exchange, buyers competitively bid to purchase specific quantities of specific goods which are available, or might be available if sellers were to choose to offer such goods. Similarly, sellers bid against other sellers in offering goods on the market, competing for the attention and exchange resources of buyers. Competition results from scarcity, as it is not possible to satisfy all conceivable human wants, and occurs as people try to meet the criteria being used to determine allocation. In the works of Adam Smith, the idea of capitalism is made possible through competition, which creates growth. Although the concept of capitalism had not yet entered mainstream economics in Smith's time, it is vital to the construction of his ideal society. Competition is one of the foundational building blocks of capitalism. Smith believed that a prosperous society is one where "everyone should be free to enter and leave the market and change trades as often as he pleases." He believed that the freedom to act in one's self-interest is essential for the success of a capitalist society. The fear arises that if all participants focus on their own goals, society's well-being will be swept aside. Smith maintained that despite the concerns of intellectuals, "global trends will hardly be altered if they refrain from pursuing their personal ends." He insisted that the actions of a few participants cannot alter the course of society.
Instead, Smith maintained that participants should focus on personal progress and that this will result in overall growth of the whole. Competition between participants, "who are all endeavoring to justle one another out of employment, obliges every man to endeavor to execute his work" through competition towards growth. Economic growth Economic growth is a characteristic tendency of capitalist economies. As a mode of production The capitalist mode of production refers to the systems of organising production and distribution within capitalist societies. Private money-making in various forms (renting, banking, merchant trade, production for profit and so on) preceded the development of the capitalist mode of production as such. The term capitalist mode of production is defined by private ownership of the means of production, extraction of surplus value by the owning class for the purpose of capital accumulation, wage-based labor and, at least as far as commodities are concerned, being market-based. Capitalism in the form of money-making activity has existed in the shape of merchants and money-lenders who acted as intermediaries between consumers and producers engaging in simple commodity production (hence the reference to "merchant capitalism") since the beginnings of civilisation. What is specific about the "capitalist mode of production" is that most of the inputs and outputs of production are supplied through the market (i.e. they are commodities) and essentially all production is in this mode. By contrast, in flourishing feudalism most or all of the factors of production, including labor, are owned by the feudal ruling class outright and the products may also be consumed without a market of any kind; it is production for use within the feudal social unit and for limited trade. This has the important consequence that, under capitalism, the whole organisation of the production process is reshaped and re-organised to conform with economic rationality as bounded by capitalism, which is expressed in price relationships between inputs and outputs (wages, non-labor factor costs, sales and profits) rather than the larger rational context faced by society overall; that is, the whole process is organised and re-shaped in order to conform to "commercial logic". Essentially, capital accumulation comes to define economic rationality in capitalist production. A society, region or nation is capitalist if the predominant source of incomes and products being distributed is capitalist activity, but even so this does not necessarily mean that the capitalist mode of production is dominant in that society. Mixed economies rely on the state to provide some goods or services, while the free market produces and maintains the rest. Role of government Government agencies regulate the standards of service in many industries, such as airlines and broadcasting, as well as financing a wide range of programs. In addition, the government regulates the flow of capital and uses financial tools such as the interest rate to control such factors as inflation and unemployment. Supply and demand In capitalist economic structures, supply and demand is an economic model of price determination in a market.
It postulates that in a perfectly competitive market, the unit price for a particular good will vary until it settles at a point where the quantity demanded by consumers (at the current price) will equal the quantity supplied by producers (at the current price), resulting in an economic equilibrium for price and quantity. The "basic laws" of supply and demand, as described by David Besanko and Ronald Braeutigam, are the following four: If demand increases (demand curve shifts to the right) and supply remains unchanged, then a shortage occurs, leading to a higher equilibrium price. If demand decreases (demand curve shifts to the left) and supply remains unchanged, then a surplus occurs, leading to a lower equilibrium price. If demand remains unchanged and supply increases (supply curve shifts to the right), then a surplus occurs, leading to a lower equilibrium price. If demand remains unchanged and supply decreases (supply curve shifts to the left), then a shortage occurs, leading to a higher equilibrium price. Supply schedule A supply schedule is a table that shows the relationship between the price of a good and the quantity supplied. Demand schedule A demand schedule, depicted graphically as the demand curve, represents the amount of some goods that buyers are willing and able to purchase at various prices, assuming all determinants of demand other than the price of the good in question, such as income, tastes and preferences, the price of substitute goods and the price of complementary goods, remain the same. According to the law of demand, the demand curve is almost always represented as downward sloping, meaning that as price decreases, consumers will buy more of the good. Just like the supply curves reflect marginal cost curves, demand curves are determined by marginal utility curves. Equilibrium In the context of supply and demand, economic equilibrium refers to a state where economic forces such as supply and demand are balanced and in the absence of external influences the (equilibrium) values of economic variables will not change. For example, in the standard text-book model of perfect competition equilibrium occurs at the point at which quantity demanded and quantity supplied are equal. Market equilibrium, in this case, refers to a condition where a market price is established through competition such that the amount of goods or services sought by buyers is equal to the amount of goods or services produced by sellers. This price is often called the competitive price or market clearing price and will tend not to change unless demand or supply changes. Partial equilibrium Partial equilibrium, as the name suggests, takes into consideration only a part of the market to attain equilibrium. Jain proposes (attributed to George Stigler): "A partial equilibrium is one which is based on only a restricted range of data, a standard example is price of a single product, the prices of all other products being held fixed during the analysis". History According to Hamid S. Hosseini, the "power of supply and demand" was discussed to some extent by several early Muslim scholars, such as fourteenth century Mamluk scholar Ibn Taymiyyah, who wrote: "If desire for goods increases while its availability decreases, its price rises. On the other hand, if availability of the good increases and the desire for it decreases, the price comes down". 
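The price-adjustment mechanism described in the preceding paragraphs can be illustrated with a short numerical sketch. The linear demand and supply schedules and the coefficients used below are purely hypothetical assumptions chosen for illustration, not data about any real market; the sketch simply shows how an equilibrium price and quantity follow from the two schedules and how a demand shift raises the equilibrium price, as in the first of the "basic laws" listed above.

```python
# A minimal sketch of market equilibrium using hypothetical linear schedules.
# Demand: Qd = a - b*p  (quantity demanded falls as the price rises)
# Supply: Qs = c + d*p  (quantity supplied rises as the price rises)
# Setting Qd = Qs gives the equilibrium price p* = (a - c) / (b + d).

def equilibrium(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    """Return the (price, quantity) pair at which the two schedules intersect."""
    price = (a - c) / (b + d)
    quantity = a - b * price  # equals c + d * price at the equilibrium
    return price, quantity

# Hypothetical baseline market.
p0, q0 = equilibrium(a=100, b=2, c=10, d=1)   # price 30, quantity 40

# Demand increases (the demand curve shifts right: a rises), supply unchanged:
# the equilibrium price rises, matching the first "law" listed above.
p1, q1 = equilibrium(a=130, b=2, c=10, d=1)   # price 40, quantity 50

print(f"baseline:     price={p0:.1f}, quantity={q0:.1f}")
print(f"demand shift: price={p1:.1f}, quantity={q1:.1f}")
```

Shifting the supply coefficients instead of the demand coefficients reproduces the third and fourth laws in the same way.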
John Locke's 1691 work Some Considerations on the Consequences of the Lowering of Interest and the Raising of the Value of Money includes an early and clear description of supply and demand and their relationship. In this description, demand is rent: "The price of any commodity rises or falls by the proportion of the number of buyer and sellers" and "that which regulates the price... [of goods] is nothing else but their quantity in proportion to their rent". David Ricardo titled one chapter of his 1817 work Principles of Political Economy and Taxation "On the Influence of Demand and Supply on Price". In Principles of Political Economy and Taxation, Ricardo more rigorously laid down the assumptions that were used to build his ideas of supply and demand. In his 1870 essay "On the Graphical Representation of Supply and Demand", Fleeming Jenkin, in the course of "introduc[ing] the diagrammatic method into the English economic literature", published the first drawing of supply and demand curves therein, including comparative statics from a shift of supply or demand and application to the labor market. The model was further developed and popularized by Alfred Marshall in the 1890 textbook Principles of Economics. Types There are many variants of capitalism in existence that differ according to country and region. They vary in their institutional makeup and by their economic policies. The common features among all the different forms of capitalism are that they are predominantly based on the private ownership of the means of production and the production of goods and services for profit; the market-based allocation of resources; and the accumulation of capital. They include advanced capitalism, corporate capitalism, finance capitalism, free-market capitalism, mercantilism, social capitalism, state capitalism and welfare capitalism. Other theoretical variants of capitalism include anarcho-capitalism, community capitalism, humanistic capitalism, neo-capitalism, state monopoly capitalism, and technocapitalism. Advanced Advanced capitalism is the situation that pertains to a society in which the capitalist model has been integrated and developed deeply and extensively for a prolonged period. Various writers identify Antonio Gramsci as an influential early theorist of advanced capitalism, even if he did not use the term himself. In his writings, Gramsci sought to explain how capitalism had adapted to avoid the revolutionary overthrow that had seemed inevitable in the 19th century. At the heart of his explanation was the decline of raw coercion as a tool of class power, replaced by use of civil society institutions to manipulate public ideology in the capitalists' favour (Holub, Renate (2005), Antonio Gramsci: Beyond Marxism and Postmodernism). Jürgen Habermas has been a major contributor to the analysis of advanced-capitalistic societies. Habermas observed four general features that characterise advanced capitalism: concentration of industrial activity in a few large firms; constant reliance on the state to stabilise the economic system; a formally democratic government that legitimises the activities of the state and dissipates opposition to the system; and the use of nominal wage increases to pacify the most restless segments of the work force. Corporate Corporate capitalism is a free or mixed-market capitalist economy characterized by the dominance of hierarchical, bureaucratic corporations.
Finance Finance capitalism is the subordination of processes of production to the accumulation of money profits in a financial system. In their critique of capitalism, Marxism and Leninism both emphasise the role of finance capital as the determining and ruling-class interest in capitalist society, particularly in the latter stages. Rudolf Hilferding is credited with first bringing the term finance capitalism into prominence through Finance Capital, his 1910 study of the links between German trusts, banks and monopolies—a study subsumed by Vladimir Lenin into Imperialism, the Highest Stage of Capitalism (1917), his analysis of the imperialist relations of the great world powers. Lenin concluded that the banks at that time operated as "the chief nerve centres of the whole capitalist system of national economy". For the Comintern (founded in 1919), the phrase "dictatorship of finance capitalism" became a regular one. Fernand Braudel would later point to two earlier periods when finance capitalism had emerged in human history—with the Genoese in the 16th century and with the Dutch in the 17th and 18th centuries—although at those points it developed from commercial capitalism. Giovanni Arrighi extended Braudel's analysis to suggest that a predominance of finance capitalism is a recurring, long-term phenomenon, whenever a previous phase of commercial/industrial capitalist expansion reaches a plateau. Free market A capitalist free-market economy is an economic system where prices for goods and services are set entirely by the forces of supply and demand and are expected, by its adherents, to reach their point of equilibrium without intervention by government policy. It typically entails support for highly competitive markets and private ownership of the means of production. Laissez-faire capitalism is a more extensive form of this free-market economy, but one in which the role of the state is limited to protecting property rights. In anarcho-capitalist theory, property rights are protected by private firms and market-generated law. According to anarcho-capitalists, this entails property rights without statutory law through market-generated tort, contract and property law, and self-sustaining private industry. Fernand Braudel argued that free market exchange and capitalism are to some degree opposed; free market exchange involves transparent public transactions and a large number of equal competitors, while capitalism involves a small number of participants using their capital to control the market via private transactions, control of information, and limitation of competition. Mercantile Mercantilism is a nationalist form of early capitalism that came into existence approximately in the late 16th century. It is characterized by the intertwining of national business interests with state-interest and imperialism. Consequently, the state apparatus is used to advance national business interests abroad. An example of this is colonists living in America who were only allowed to trade with and purchase goods from their respective mother countries (e.g., Britain, France and Portugal). Mercantilism was driven by the belief that the wealth of a nation is increased through a positive balance of trade with other nations—it corresponds to the phase of capitalist development sometimes called the primitive accumulation of capital. 
Social A social market economy is a free-market or mixed-market capitalist system, sometimes classified as a coordinated market economy, where government intervention in price formation is kept to a minimum, but the state provides significant services in areas such as social security, health care, unemployment benefits and the recognition of labor rights through national collective bargaining arrangements. This model is prominent in Western and Northern European countries as well as Japan, albeit in slightly different configurations. The vast majority of enterprises are privately owned in this economic model. Rhine capitalism is the contemporary model of capitalism and adaptation of the social market model that exists in continental Western Europe today. State State capitalism is a capitalist market economy dominated by state-owned enterprises, where the state enterprises are organized as commercial, profit-seeking businesses. The designation has been used broadly throughout the 20th century to designate a number of different economic forms, ranging from state-ownership in market economies to the command economies of the former Eastern Bloc. According to Aldo Musacchio, a professor at Harvard Business School, state capitalism is a system in which governments, whether democratic or autocratic, exercise a widespread influence on the economy either through direct ownership or various subsidies. Musacchio notes a number of differences between today's state capitalism and its predecessors. In his opinion, gone are the days when governments appointed bureaucrats to run companies: the world's largest state-owned enterprises are now traded on the public markets and kept in good health by large institutional investors. Contemporary state capitalism is associated with the East Asian model of capitalism, dirigisme and the economy of Norway. Alternatively, Merriam-Webster defines state capitalism as "an economic system in which private capitalism is modified by a varying degree of government ownership and control". In Socialism: Utopian and Scientific, Friedrich Engels argued that state-owned enterprises would characterize the final stage of capitalism, consisting of ownership and management of large-scale production and communication by the bourgeois state. In his writings, Vladimir Lenin characterized the economy of Soviet Russia as state capitalist, believing state capitalism to be an early step toward the development of socialism (V.I. Lenin, "To the Russian Colony in North America", Lenin Collected Works, vol. 42, Progress Publishers, Moscow, 1971, pp. 425c–27a). Some economists and left-wing academics including Richard D. Wolff and Noam Chomsky, as well as many Marxist philosophers and revolutionaries such as Raya Dunayevskaya and C.L.R. James, argue that the economies of the former Soviet Union and Eastern Bloc represented a form of state capitalism because their internal organization within enterprises and the system of wage labor remained intact (Noam Chomsky (1986), "The Soviet Union Versus Socialism", Our Generation). The term is not used by Austrian School economists to describe state ownership of the means of production. The economist Ludwig von Mises argued that the designation of state capitalism was a new label for the old labels of state socialism and planned economy and differed only in non-essentials from these earlier designations. Welfare Welfare capitalism is capitalism that includes social welfare policies.
Today, welfare capitalism is most often associated with the models of capitalism found in mainland Central Europe and Northern Europe, such as the Nordic model, the social market economy and Rhine capitalism. In some cases, welfare capitalism exists within a mixed economy, but welfare states can and do exist independently of policies common to mixed economies such as state interventionism and extensive regulation. A mixed economy is a largely market-based capitalist economy consisting of both private and public ownership of the means of production and economic interventionism through macroeconomic policies intended to correct market failures, reduce unemployment and keep inflation low. The degree of intervention in markets varies among different countries. Some mixed economies such as France under dirigisme also featured a degree of indirect economic planning over a largely capitalist-based economy. Most modern capitalist economies are defined as mixed economies to some degree; however, French economist Thomas Piketty states that capitalist economies might shift to a much more laissez-faire approach in the near future. Eco-capitalism Eco-capitalism, also known as "environmental capitalism" or (sometimes) "green capitalism", is the view that capital exists in nature as "natural capital" (ecosystems that have ecological yield) on which all wealth depends. Therefore, governments should use market-based policy-instruments (such as a carbon tax) to resolve environmental problems. The term "Blue Greens" is often applied to those who espouse eco-capitalism. Eco-capitalism can be thought of as the right-wing equivalent to Red Greens. Sustainable capitalism Sustainable capitalism is a conceptual form of capitalism based upon sustainable practices that seek to preserve humanity and the planet, while reducing externalities and bearing a resemblance to capitalist economic policy. A capitalistic economy must expand to survive and find new markets to support this expansion. Capitalist systems are often destructive to the environment as well as to certain individuals without access to proper representation. However, sustainability provides quite the opposite; it implies not only a continuation, but a replenishing of resources. Sustainability is often thought to be related to environmentalism, and sustainable capitalism applies sustainable principles to economic governance and social aspects of capitalism as well. The importance of sustainable capitalism has been more recently recognized, but the concept is not new. Changes to the current economic model would have heavy social, environmental and economic implications and require the efforts of individuals, as well as the compliance of local, state and federal governments. Controversy surrounds the concept, as it requires an increase in sustainable practices and a marked decrease in current consumptive behaviors. This concept of capitalism is described in Al Gore and David Blood's manifesto for Generation Investment Management as a long-term political, economic and social structure which would mitigate current threats to the planet and society. According to their manifesto, sustainable capitalism would integrate the environmental, social and governance (ESG) aspects into risk assessment in an attempt to limit externalities. Most of the ideas they list are related to economic changes and social aspects, but strikingly few are explicitly related to any environmental policy change.
Capital accumulation The accumulation of capital is the process of "making money" or growing an initial sum of money through investment in production. Capitalism is based on the accumulation of capital, whereby financial capital is invested in order to make a profit and then reinvested into further production in a continuous process of accumulation. In Marxian economic theory, this dynamic is called the law of value. Capital accumulation forms the basis of capitalism, where economic activity is structured around the accumulation of capital, defined as investment in order to realize a financial profit. In this context, "capital" is defined as money or a financial asset invested for the purpose of making more money (whether in the form of profit, rent, interest, royalties, capital gain or some other kind of return). In mainstream economics, accounting and Marxian economics, capital accumulation is often equated with investment of profit income or savings, especially in real capital goods. The concentration and centralisation of capital are two of the results of such accumulation. In modern macroeconomics and econometrics, the phrase "capital formation" is often used in preference to "accumulation", though the United Nations Conference on Trade and Development (UNCTAD) refers nowadays to "accumulation". The term "accumulation" is occasionally used in national accounts. Wage labor Wage labor refers to the sale of labor under a formal or informal employment contract to an employer. These transactions usually occur in a labor market where wages are market-determined. In Marxist economics, the owners of the means of production and suppliers of capital are generally called capitalists. The description of the role of the capitalist has shifted, first referring to a useless intermediary between producers, then to an employer of producers, and finally to the owners of the means of production. Labor includes all physical and mental human resources, including entrepreneurial capacity and management skills, which are required to produce products and services. Production is the act of making goods or services by applying labor power (Robbins, Richard H., Global Problems and the Culture of Capitalism, Boston: Allyn & Bacon, 2007). Criticism Criticism of capitalism comes from various political and philosophical approaches, including anarchist, socialist, religious and nationalist viewpoints. Of those who oppose it or want to modify it, some believe that capitalism should be removed through revolution while others believe that it should be changed slowly through political reforms. Prominent critiques of capitalism allege that it is inherently exploitative, alienating, unstable, unsustainable, and economically inefficient, and that it creates massive economic inequality, commodifies people, degrades the environment, is anti-democratic, and leads to an erosion of human rights because of its incentivization of imperialist expansion and war. Other critics argue that such inequities are not due to the ethic-neutral construct of the economic system commonly known as capitalism, but to the ethics of those who shape and execute the system. For example, some contend that Milton Friedman's (human) ethic of 'maximizing shareholder value' creates a harmful form of capitalism, while a Millard Fuller or John Bogle (human) ethic of 'enough' creates a sustainable form. Equitable ethics and unified ethical decision-making are theorized to create a less damaging form of capitalism.
See also Anti-capitalism Advanced capitalism Ancient economic thought Bailout Capitalism Capitalism (disambiguation) Christian views on poverty and wealth Communism Corporatocracy Crony capitalism Economic sociology Free market Global financial crisis in September 2008 Humanistic economics Invisible hand Late capitalism Le Livre noir du capitalisme Market socialism Perspectives on capitalism by school of thought Post-capitalism Post-Fordism Racial capitalism Rent-seeking State monopoly capitalism Surveillance capitalism Perestroika References Notes Bibliography Krahn, Harvey J., and Graham S. Lowe (1993). Work, Industry, and Canadian Society. Second ed. Scarborough, Ont.: Nelson Canada. xii, 430 p. Further reading Alperovitz, Gar (2011). America Beyond Capitalism: Reclaiming Our Wealth, Our Liberty, and Our Democracy, 2nd Edition. Democracy Collaborative Press. . Ascher, Ivan. Portfolio Society: On the Capitalist Mode of Prediction. Zone Books, 2016. Baptist, Edward E. The Half Has Never Been Told: Slavery and the Making of American Capitalism. New York, Basic Books, 2014. . Braudel, Fernand. Civilization and Capitalism. Callinicos, Alex. "Wage Labour and State Capitalism – A reply to Peter Binns and Mike Haynes", International Socialism, second series, 12, Spring 1979. Farl, Erich. "The Genealogy of State Capitalism". In: International London, vol. 2, no. 1, 1973. Gough, Ian. State Expenditure in Advanced Capitalism New Left Review. Habermas, J. [1973] Legitimation Crisis (eng. translation by T. McCarthy). Boston, Beacon. From Google books ; excerpt. Hyman, Louis and Edward E. Baptist (2014). American Capitalism: A Reader. Simon & Schuster. . Jameson, Fredric (1991). Postmodernism, or, the Cultural Logic of Late Capitalism. Kotler, Philip (2015). Confronting Capitalism: Real Solutions for a Troubled Economic System. AMACOM. Mandel, Ernest (1999). Late Capitalism. Marcel van der Linden, Western Marxism and the Soviet Union. New York, Brill Publishers, 2007. Mayfield, Anthony. "Economics", in his On the Brink: Resource Depletion, Debt Collapse, and Super-technology ([Vancouver, B.C., Canada]: On the Brink Publishing, 2013), pp. 50–104. Panitch, Leo, and Sam Gindin (2012). The Making of Global Capitalism: the Political Economy of American Empire. London, Verso. . Polanyi, Karl (2001). The Great Transformation: The Political and Economic Origins of Our Time. Beacon Press; 2nd ed. Richards, Jay W. (2009). Money, Greed, and God: Why Capitalism is the Solution and Not the Problem. New York: HarperOne. Roberts, Paul Craig (2013). The Failure of Laissez-faire Capitalism: towards a New Economics for a Full World. Atlanta, Ga.: Clarity Press. Robinson, William I. Global Capitalism and the Crisis of Humanity. Cambridge University Press, 2014. Hoevet, Ocean. "Capital as a Social Relation" (New Palgrave article) Sombart, Werner (1916) Der moderne Kapitalismus. Historisch-systematische Darstellung des gesamteuropäischen Wirtschaftslebens von seinen Anfängen bis zur Gegenwart. Final edn. 1916, repr. 1969, paperback edn. (3 vols. in 6): 1987 Munich: dtv. (Also in Spanish; no English translation yet.) Tarnoff, Ben, "Better, Faster, Stronger" (review of John Tinnell, The Philosopher of Palo Alto: Mark Weisner, Xerox PARC, and the Original Internet of Things, University of Chicago Press, 347 pp.; and Malcolm Harris, Palo Alto: A History of California, Capitalism, and the World, Little, Brown, 708 pp.), The New York Review of Books, vol. LXX, no. 14 (21 September 2023), pp. 38–40. 
"[Palo Alto is] a place where the [United States'] contradictions are sharpened to their finest points, above all the defining and enduring contradictions between democratic principle and antidemocratic practice. There is nothing as American as celebrating equality while subverting it. Or as Californian." (p. 40.) External links Capitalism at Encyclopædia Britannica'' Online. Selected Titles on Capitalism and Its Discontents. Harvard University Press. Accounting Banking Business Economic liberalism Production economics Profit Social philosophy Western culture Finance
5420
https://en.wikipedia.org/wiki/Cross%20ownership
Cross ownership
Cross ownership is a method of reinforcing business relationships by owning stock in the companies with which a given company does business. Heavy cross ownership is referred to as circular ownership. In the US, "cross ownership" also refers to a type of investment in different mass-media properties in one market. Cross ownership of stock Countries noted to have high levels of cross ownership include Japan and Germany. Cited positives of cross ownership are that it closely ties each business to the economic destiny of its business partners and promotes a slow rate of economic change. Cross ownership of shares is criticized for stagnating the economy, wasting capital that could be used to improve productivity, expanding economic downturns by preventing the reallocation of capital, and lessening the control of shareholders over corporate leadership. A major factor in perpetuating cross ownership of shares is a high capital gains tax rate. A company has less incentive to sell cross-owned shares if taxes are high because of the immediate reduction in the value of the assets. For example, suppose a company holds stock in another company that is now worth $1,000 but was originally purchased for $200. If the capital gains tax rate is 25% (as in Germany), selling would incur $200 in tax on the $800 profit, immediately reducing the value of the holding from $1,000 to $800 and so booking a $200 reduction in assets on the sale. Long-term cross ownership of shares combined with a high capital gains tax rate greatly increases periods of asset deflation both in time and in severity. Media cross ownership Cross ownership also refers to a type of media ownership in which one type of communications company (say, a newspaper) owns or is the sister company of another type of medium (such as a radio or TV station). One example is The New York Times's former ownership of WQXR Radio and the Chicago Tribune's similar relationship with WGN Radio (WGN-AM) and Television (WGN-TV). The Federal Communications Commission generally does not allow cross ownership, to keep one license holder from having too much local media ownership, unless the license holder obtains a waiver, such as News Corporation and the Tribune Company have in New York. The mid-1970s cross-ownership guidelines grandfathered already-existing cross ownerships, such as Tribune-WGN, New York Times-WQXR and the New York Daily News's ownership of WPIX Television and Radio. References Business ownership Strategic management
5421
https://en.wikipedia.org/wiki/Cardiology
Cardiology
Cardiology () is the study of the heart. Cardiology is a branch of medicine that deals with disorders of the heart and the cardiovascular system. The field includes medical diagnosis and treatment of congenital heart defects, coronary artery disease, heart failure, valvular heart disease, and electrophysiology. Physicians who specialize in this field of medicine are called cardiologists, a specialty of internal medicine. Pediatric cardiologists are pediatricians who specialize in cardiology. Physicians who specialize in cardiac surgery are called cardiothoracic surgeons or cardiac surgeons, a specialty of general surgery. Specializations All cardiologists in the branch of medicine study the disorders of the heart, but the study of adult and child heart disorders each require different training pathways. Therefore, an adult cardiologist (often simply called "cardiologist") is inadequately trained to take care of children, and pediatric cardiologists are not trained to treat adult heart disease. Surgical aspects are not included in cardiology and are in the domain of cardiothoracic surgery. For example, coronary artery bypass surgery (CABG), cardiopulmonary bypass and valve replacement are surgical procedures performed by surgeons, not cardiologists. However, some minimally invasive procedures such as cardiac catheterization and pacemaker implantation are performed by cardiologists who have additional training in non-surgical interventions (interventional cardiology and electrophysiology respectively). Adult cardiology Cardiology is a specialty of internal medicine. To be a cardiologist in the United States, a three-year residency in internal medicine is followed by a three-year fellowship in cardiology. It is possible to specialize further in a sub-specialty. Recognized sub-specialties in the U.S. by the Accreditation Council for Graduate Medical Education are cardiac electrophysiology, echocardiography, interventional cardiology, and nuclear cardiology. Recognized subspecialties in the U.S. by the American Osteopathic Association Bureau of Osteopathic Specialists include clinical cardiac electrophysiology and interventional cardiology. In India, a three-year residency in General Medicine or Pediatrics after M.B.B.S. and then three years of residency in cardiology are needed to be a D.M./Diplomate of National Board (DNB) in Cardiology. Per Doximity, adult cardiologists earn an average of $436,849 per year in the U.S. Cardiac electrophysiology Cardiac electrophysiology is the science of elucidating, diagnosing, and treating the electrical activities of the heart. The term is usually used to describe studies of such phenomena by invasive (intracardiac) catheter recording of spontaneous activity as well as of cardiac responses to programmed electrical stimulation (PES). These studies are performed to assess complex arrhythmias, elucidate symptoms, evaluate abnormal electrocardiograms, assess risk of developing arrhythmias in the future, and design treatment. These procedures increasingly include therapeutic methods (typically radiofrequency ablation, or cryoablation) in addition to diagnostic and prognostic procedures. Other therapeutic modalities employed in this field include antiarrhythmic drug therapy and implantation of pacemakers and automatic implantable cardioverter-defibrillators (AICD). 
The cardiac electrophysiology study typically measures the response of the injured or cardiomyopathic myocardium to PES on specific pharmacological regimens in order to assess the likelihood that the regimen will successfully prevent potentially fatal sustained ventricular tachycardia (VT) or ventricular fibrillation (VF) in the future. Sometimes a series of electrophysiology-study drug trials must be conducted to enable the cardiologist to select the one regimen for long-term treatment that best prevents or slows the development of VT or VF following PES. Such studies may also be conducted in the presence of a newly implanted or newly replaced cardiac pacemaker or AICD. Clinical cardiac electrophysiology Clinical cardiac electrophysiology is a branch of the medical specialty of cardiology and is concerned with the study and treatment of rhythm disorders of the heart. Cardiologists with expertise in this area are usually referred to as electrophysiologists. Electrophysiologists are trained in the mechanism, function, and performance of the electrical activities of the heart. Electrophysiologists work closely with other cardiologists and cardiac surgeons to assist or guide therapy for heart rhythm disturbances (arrhythmias). They are trained to perform interventional and surgical procedures to treat cardiac arrhythmia. The training required to become an electrophysiologist is long and requires 8 years after medical school (within the U.S.). Three years of internal medicine residency, three years of cardiology fellowship, and two years of clinical cardiac electrophysiology. Cardiogeriatrics Cardiogeriatrics, or geriatric cardiology, is the branch of cardiology and geriatric medicine that deals with the cardiovascular disorders in elderly people. Cardiac disorders such as coronary heart disease, including myocardial infarction, heart failure, cardiomyopathy, and arrhythmias such as atrial fibrillation, are common and are a major cause of mortality in elderly people. Vascular disorders such as atherosclerosis and peripheral arterial disease cause significant morbidity and mortality in aged people. Imaging Cardiac imaging includes echocardiography (echo), cardiac magnetic resonance imaging (CMR), and computed tomography of the heart. Those who specialize in cardiac imaging may undergo more training in all imaging modes or focus on a single imaging modality. Echocardiography (or "echo") uses standard two-dimensional, three-dimensional, and Doppler ultrasound to create images of the heart. Those who specialize in echo may spend a significant amount of their clinical time reading echos and performing transesophageal echo, in particular using the latter during procedures such as insertion of a left atrial appendage occlusion device. Cardiac MRI utilizes special protocols to image heart structure and function with specific sequences for certain diseases such as hemochromatosis and amyloidosis. Cardiac CT utilizes special protocols to image heart structure and function with particular emphasis on coronary arteries. Interventional cardiology Interventional cardiology is a branch of cardiology that deals specifically with the catheter based treatment of structural heart diseases. A large number of procedures can be performed on the heart by catheterization, including angiogram, angioplasty, atherectomy, and stent implantation. 
These procedures all involve insertion of a sheath into the femoral artery or radial artery (but, in practice, any large peripheral artery or vein) and cannulating the heart under visualization (most commonly fluoroscopy). This cannulation allows indirect access to the heart, bypassing the trauma caused by surgical opening of the chest. The main advantages of the interventional cardiology or radiology approach are the avoidance of scars, pain, and a long post-operative recovery. Additionally, the interventional cardiology procedure of primary angioplasty is now the gold standard of care for an acute myocardial infarction. This procedure can also be done proactively, when areas of the vascular system become occluded from atherosclerosis. The cardiologist will thread this sheath through the vascular system to access the heart. This sheath has a balloon and a tiny wire mesh tube wrapped around it, and if the cardiologist finds a blockage or stenosis, they can inflate the balloon at the occlusion site in the vascular system to flatten or compress the plaque against the vascular wall. Once that is complete, a stent is placed as a type of scaffold to hold the vasculature open permanently. Cardiomyopathy/heart failure Specializing within general cardiology in the cardiomyopathies often leads to also specializing in heart transplantation and pulmonary hypertension. Cardiomyopathy is a disease of the heart muscle, in which the heart muscle becomes inflamed and thickened. Cardiooncology A recent specialization of cardiology is that of cardiooncology. This area specializes in the cardiac management of those with cancer and, in particular, those with plans for chemotherapy or who have experienced cardiac complications of chemotherapy. Preventive cardiology and cardiac rehabilitation In recent times, the focus is gradually shifting to preventive cardiology due to the increased cardiovascular disease burden at an early age. According to the WHO, 37% of all premature deaths are due to cardiovascular diseases, and of these, 82% are in low- and middle-income countries. Clinical cardiology is the subspecialty of cardiology which looks after preventive cardiology and cardiac rehabilitation. Preventive cardiology also deals with routine preventive checkups through noninvasive tests, specifically electrocardiography, fasegraphy, stress tests, lipid profiles and general physical examination, to detect any cardiovascular diseases at an early age, while cardiac rehabilitation is the upcoming branch of cardiology which helps a person regain their overall strength and live a normal life after a cardiovascular event. A subspecialty of preventive cardiology is sports cardiology. Because heart disease is the leading cause of death in the world, including in the United States (cdc.gov), national health campaigns and randomized controlled research have been developed to improve heart health.
They eventually figured out how to do just that by the anastomosis of the systemic artery to the pulmonary artery and called this the Blalock-Taussig Shunt. Tetralogy of Fallot, pulmonary atresia, double outlet right ventricle, transposition of the great arteries, persistent truncus arteriosus, and Ebstein's anomaly are various congenital cyanotic heart diseases, in which the blood of the newborn is not oxygenated efficiently due to the heart defect. Adult congenital heart disease As more children with congenital heart disease are surviving into adulthood, a hybrid of adult and pediatric cardiology has emerged called adult congenital heart disease (ACHD). This field can be entered as either adult or pediatric cardiology. ACHD specializes in congenital diseases in the setting of adult diseases (e.g., coronary artery disease, COPD, diabetes), a combination that is otherwise atypical for adult or pediatric cardiology. The heart As the central focus of cardiology, the heart has numerous anatomical features (e.g., atria, ventricles, heart valves) and numerous physiological features (e.g., systole, heart sounds, afterload) that have been encyclopedically documented for many centuries. The heart is located in the middle of the chest, with its tip slightly towards the left side of the chest. Disorders of the heart lead to heart disease and cardiovascular disease and can lead to a significant number of deaths: cardiovascular disease is the leading cause of death in the U.S. and caused 24.95% of total deaths in 2008. The primary responsibility of the heart is to pump blood throughout the body. It pumps blood from the body (the systemic circulation), through the lungs (the pulmonary circulation), and then back out to the body. This means that the heart is connected to and affects the entirety of the body. Simplified, the heart is a circuit of the circulation. While plenty is known about the healthy heart, the bulk of study in cardiology is in disorders of the heart and the restoration, where possible, of function. The heart is a muscle that squeezes blood and functions like a pump. The heart's systems can be classified as either electrical or mechanical, and both of these systems are susceptible to failure or dysfunction. The electrical system of the heart is centered on the periodic contraction (squeezing) of the muscle cells that is caused by the cardiac pacemaker located in the sinoatrial node. The study of the electrical aspects is a sub-field of electrophysiology called cardiac electrophysiology and is epitomized by the electrocardiogram (ECG/EKG). The action potentials generated in the pacemaker propagate throughout the heart in a specific pattern. The system that carries this potential is called the electrical conduction system. Dysfunction of the electrical system manifests in many ways and may include Wolff–Parkinson–White syndrome, ventricular fibrillation, and heart block. The mechanical system of the heart is centered on the fluidic movement of blood and the functionality of the heart as a pump. The mechanical part is ultimately the purpose of the heart and many of the disorders of the heart disrupt the ability to move blood. Heart failure is one condition in which the mechanical properties of the heart have failed or are failing, which means insufficient blood is being circulated. Failure to move a sufficient amount of blood through the body can cause damage or failure of other organs and may result in death if severe.
Coronary circulation Coronary circulation is the circulation of blood in the blood vessels of the heart muscle (the myocardium). The vessels that deliver oxygen-rich blood to the myocardium are known as coronary arteries. The vessels that remove the deoxygenated blood from the heart muscle are known as cardiac veins. These include the great cardiac vein, the middle cardiac vein, the small cardiac vein and the anterior cardiac veins. As the left and right coronary arteries run on the surface of the heart, they can be called epicardial coronary arteries. These arteries, when healthy, are capable of autoregulation to maintain coronary blood flow at levels appropriate to the needs of the heart muscle. These relatively narrow vessels are commonly affected by atherosclerosis and can become blocked, causing angina or myocardial infarction (a.k.a a heart attack). The coronary arteries that run deep within the myocardium are referred to as subendocardial. The coronary arteries are classified as "end circulation", since they represent the only source of blood supply to the myocardium; there is very little redundant blood supply, which is why blockage of these vessels can be so critical. Cardiac examination The cardiac examination (also called the "precordial exam"), is performed as part of a physical examination, or when a patient presents with chest pain suggestive of a cardiovascular pathology. It would typically be modified depending on the indication and integrated with other examinations especially the respiratory examination. Like all medical examinations, the cardiac examination follows the standard structure of inspection, palpation and auscultation. Heart disorders Cardiology is concerned with the normal functionality of the heart and the deviation from a healthy heart. Many disorders involve the heart itself, but some are outside of the heart and in the vascular system. Collectively, the two are jointly termed the cardiovascular system, and diseases of one part tend to affect the other. Coronary artery disease Coronary artery disease, also known as "ischemic heart disease", is a group of diseases that includes: stable angina, unstable angina, myocardial infarction, and is one of the causes of sudden cardiac death. It is within the group of cardiovascular diseases of which it is the most common type. A common symptom is chest pain or discomfort which may travel into the shoulder, arm, back, neck, or jaw. Occasionally it may feel like heartburn. Usually symptoms occur with exercise or emotional stress, last less than a few minutes, and get better with rest. Shortness of breath may also occur and sometimes no symptoms are present. The first sign is occasionally a heart attack. Other complications include heart failure or an irregular heartbeat. Risk factors include: high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, and excessive alcohol, among others. Other risks include depression. The underlying mechanism involves atherosclerosis of the arteries of the heart. A number of tests may help with diagnoses including: electrocardiogram, cardiac stress testing, coronary computed tomographic angiography, and coronary angiogram, among others. Prevention is by eating a healthy diet, regular exercise, maintaining a healthy weight and not smoking. Sometimes medication for diabetes, high cholesterol, or high blood pressure are also used. There is limited evidence for screening people who are at low risk and do not have symptoms. 
Treatment involves the same measures as prevention. Additional medications such as antiplatelets including aspirin, beta blockers, or nitroglycerin may be recommended. Procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass surgery (CABG) may be used in severe disease. In those with stable CAD it is unclear if PCI or CABG in addition to the other treatments improves life expectancy or decreases heart attack risk. In 2013 CAD was the most common cause of death globally, resulting in 8.14 million deaths (16.8%), up from 5.74 million deaths (12%) in 1990. The risk of death from CAD for a given age has decreased between 1980 and 2010, especially in developed countries. The number of cases of CAD for a given age has also decreased between 1990 and 2010. In the U.S. in 2010 about 20% of those over 65 had CAD, while it was present in 7% of those 45 to 64, and 1.3% of those 18 to 45. Rates are higher among men than women of a given age. Cardiomyopathy Heart failure, often the result of cardiomyopathy (disease of the heart muscle), is the impaired pumping function of the heart; there are numerous causes and forms of heart failure. Cardiac arrhythmia Cardiac arrhythmia, also known as "cardiac dysrhythmia" or "irregular heartbeat", is a group of conditions in which the heartbeat is too fast, too slow, or irregular in its rhythm. A heart rate that is too fast – above 100 beats per minute in adults – is called tachycardia. A heart rate that is too slow – below 60 beats per minute – is called bradycardia. Many types of arrhythmia present no symptoms. When symptoms are present, they may include palpitations, or feeling a pause between heartbeats. More serious symptoms may include lightheadedness, passing out, shortness of breath, or chest pain. While most types of arrhythmia are not serious, some predispose a person to complications such as stroke or heart failure. Others may result in cardiac arrest. There are four main types of arrhythmia: extra beats, supraventricular tachycardias, ventricular arrhythmias, and bradyarrhythmias. Extra beats include premature atrial contractions, premature ventricular contractions, and premature junctional contractions. Supraventricular tachycardias include atrial fibrillation, atrial flutter, and paroxysmal supraventricular tachycardia. Ventricular arrhythmias include ventricular fibrillation and ventricular tachycardia. Arrhythmias are due to problems with the electrical conduction system of the heart. Arrhythmias may occur in children; however, the normal range for the heart rate is different and depends on age. A number of tests can help diagnose arrhythmia, including an electrocardiogram and Holter monitor. Most arrhythmias can be effectively treated. Treatments may include medications, medical procedures such as a pacemaker, and surgery. Medications for a fast heart rate may include beta blockers or agents that attempt to restore a normal heart rhythm such as procainamide. This latter group may have more significant side effects, especially if taken for a long period of time. Pacemakers are often used for slow heart rates. Those with an irregular heartbeat are often treated with blood thinners to reduce the risk of complications. Those who have severe symptoms from an arrhythmia may receive urgent treatment with a jolt of electricity in the form of cardioversion or defibrillation. Arrhythmia affects millions of people. In Europe and North America, as of 2014, atrial fibrillation affects about 2% to 3% of the population.
Atrial fibrillation and atrial flutter resulted in 112,000 deaths in 2013, up from 29,000 in 1990. Sudden cardiac death is the cause of about half of deaths due to cardiovascular disease, or about 15% of all deaths globally. About 80% of sudden cardiac death is the result of ventricular arrhythmias. Arrhythmias may occur at any age but are more common among older people. Cardiac arrest Cardiac arrest is a sudden stop in effective blood flow due to the failure of the heart to contract effectively. Symptoms include loss of consciousness and abnormal or absent breathing. Some people may have chest pain, shortness of breath, or nausea before this occurs. If not treated within minutes, death usually occurs. The most common cause of cardiac arrest is coronary artery disease. Less common causes include major blood loss, lack of oxygen, very low potassium, heart failure, and intense physical exercise. A number of inherited disorders may also increase the risk, including long QT syndrome. The initial heart rhythm is most often ventricular fibrillation. The diagnosis is confirmed by finding no pulse. While a cardiac arrest may be caused by heart attack or heart failure, these are not the same. Prevention includes not smoking, physical activity, and maintaining a healthy weight. Treatment for cardiac arrest is immediate cardiopulmonary resuscitation (CPR) and, if a shockable rhythm is present, defibrillation. Among those who survive, targeted temperature management may improve outcomes. An implantable cardiac defibrillator may be placed to reduce the chance of death from recurrence. In the United States, cardiac arrest outside of hospital occurs in about 13 per 10,000 people per year (326,000 cases). In-hospital cardiac arrest occurs in an additional 209,000 cases. Cardiac arrest becomes more common with age. It affects males more often than females. The percentage of people who survive with treatment is about 8%. Many who survive have significant disability. Many U.S. television shows, however, have portrayed unrealistically high survival rates of 67%. Hypertension Hypertension, also known as "high blood pressure", is a long-term medical condition in which the blood pressure in the arteries is persistently elevated. High blood pressure usually does not cause symptoms. Long-term high blood pressure, however, is a major risk factor for coronary artery disease, stroke, heart failure, peripheral vascular disease, vision loss, and chronic kidney disease. Lifestyle factors can increase the risk of hypertension. These include excess salt in the diet, excess body weight, smoking, and alcohol consumption. Hypertension can also be caused by other diseases, or occur as a side-effect of drugs. Blood pressure is expressed by two measurements, the systolic and diastolic pressures, which are the maximum and minimum pressures, respectively. Normal blood pressure when at rest is within the range of 100–140 millimeters of mercury (mmHg) systolic and 60–90 mmHg diastolic. High blood pressure is present if the resting blood pressure is persistently at or above 140/90 mmHg for most adults. Different numbers apply to children. When diagnosing high blood pressure, ambulatory blood pressure monitoring over a 24-hour period appears to be more accurate than "in-office" blood pressure measurement at a physician's office or other blood pressure screening location. Lifestyle changes and medications can lower blood pressure and decrease the risk of health complications. 
Lifestyle changes include weight loss, decreased salt intake, physical exercise, and a healthy diet. If changes in lifestyle are insufficient, blood pressure medications may be used. A regimen of up to three medications effectively controls blood pressure in 90% of people. The treatment of moderate to severe high arterial blood pressure (defined as >160/100 mmHg) with medication is associated with an improved life expectancy and reduced morbidity. The effect of treatment for blood pressure between 140/90 mmHg and 160/100 mmHg is less clear, with some studies finding benefits while others do not. High blood pressure affects between 16% and 37% of the population globally. In 2010, hypertension was believed to have been a factor in 18% (9.4 million) of deaths. Essential vs. secondary hypertension Essential hypertension is the form of hypertension that by definition has no identifiable cause. It is the most common type of hypertension, affecting 95% of hypertensive patients. It tends to be familial and is likely to be the consequence of an interaction between environmental and genetic factors. Prevalence of essential hypertension increases with age, and individuals with relatively high blood pressure at younger ages are at increased risk for the subsequent development of hypertension. Hypertension can increase the risk of cerebral, cardiac, and renal events. Secondary hypertension is a type of hypertension which is caused by an identifiable underlying secondary cause. It is much less common than essential hypertension, affecting only 5% of hypertensive patients. It has many different causes including endocrine diseases, kidney diseases, and tumors. It also can be a side effect of many medications. Complications of hypertension Complications of hypertension are clinical outcomes that result from persistent elevation of blood pressure. Hypertension is a risk factor for all clinical manifestations of atherosclerosis since it is a risk factor for atherosclerosis itself. It is an independent predisposing factor for heart failure, coronary artery disease, stroke, renal disease, and peripheral arterial disease. It is the most important risk factor for cardiovascular morbidity and mortality in industrialized countries. Congenital heart defects A congenital heart defect, also known as a "congenital heart anomaly" or "congenital heart disease", is a problem in the structure of the heart that is present at birth. Signs and symptoms depend on the specific type of problem. Symptoms can vary from none to life-threatening. When present, they may include rapid breathing, bluish skin, poor weight gain, and feeling tired. It does not cause chest pain. Most congenital heart problems do not occur with other diseases. Complications that can result from heart defects include heart failure. The cause of a congenital heart defect is often unknown. Certain cases may be due to infections during pregnancy such as rubella, use of certain medications or drugs such as alcohol or tobacco, parents being closely related, or poor nutritional status or obesity in the mother. Having a parent with a congenital heart defect is also a risk factor. A number of genetic conditions are associated with heart defects, including Down syndrome, Turner syndrome, and Marfan syndrome. Congenital heart defects are divided into two main groups: cyanotic heart defects and non-cyanotic heart defects, depending on whether the child has the potential to turn bluish in color. 
The problems may involve the interior walls of the heart, the heart valves, or the large blood vessels that lead to and from the heart. Congenital heart defects are partly preventable through rubella vaccination, the adding of iodine to salt, and the adding of folic acid to certain food products. Some defects do not need treatment. Others may be effectively treated with catheter-based procedures or heart surgery. Occasionally a number of operations may be needed. Occasionally heart transplantation is required. With appropriate treatment, outcomes, even with complex problems, are generally good. Heart defects are the most common birth defect. In 2013 they were present in 34.3 million people globally. They affect between 4 and 75 per 1,000 live births depending upon how they are diagnosed. About 6 to 19 per 1,000 cause a moderate to severe degree of problems. Congenital heart defects are the leading cause of birth defect-related deaths. In 2013 they resulted in 323,000 deaths, down from 366,000 deaths in 1990. Tetralogy of Fallot Tetralogy of Fallot is the most common cyanotic congenital heart defect, arising in 1–3 cases per 1,000 births. The defect involves a ventricular septal defect (VSD) and an overriding aorta. These two defects combined cause deoxygenated blood to bypass the lungs and go right back into the circulatory system. The modified Blalock-Taussig shunt is usually used to fix the circulation. This procedure is done by placing a graft between the subclavian artery and the ipsilateral pulmonary artery to restore the correct blood flow. Pulmonary atresia Pulmonary atresia happens in 7–8 per 100,000 births and is characterized by the aorta branching out of the right ventricle. This causes the deoxygenated blood to bypass the lungs and enter the circulatory system. Surgeries can fix this by redirecting the aorta and fixing the right ventricle and pulmonary artery connection. There are two types of pulmonary atresia, classified by whether or not the baby also has a ventricular septal defect. Pulmonary atresia with an intact ventricular septum: This type of pulmonary atresia is associated with a complete and intact septum between the ventricles. Pulmonary atresia with a ventricular septal defect: This type of pulmonary atresia happens when a ventricular septal defect allows blood to flow into and out of the right ventricle. Double outlet right ventricle Double outlet right ventricle (DORV) is when both great arteries, the pulmonary artery and the aorta, are connected to the right ventricle. There is usually a VSD in different particular places depending on the variation of DORV; typically 50% are subaortic and 30%. The surgeries that can be done to fix this defect can vary due to the different physiology and blood flow in the affected heart. One way it can be repaired is by a VSD closure and placing conduits to restore the blood flow between the left ventricle and the aorta and between the right ventricle and the pulmonary artery. Another way is a systemic-to-pulmonary artery shunt, in cases associated with pulmonary stenosis. Also, a balloon atrial septostomy can be done to relieve hypoxemia caused by DORV with the Taussig-Bing anomaly while surgical correction is awaited. Transposition of great arteries There are two different types of transposition of the great arteries, dextro-transposition of the great arteries and levo-transposition of the great arteries, depending on where the chambers and vessels connect. 
Dextro-transposition happens in about 1 in 4,000 newborns and occurs when the right ventricle pumps blood into the aorta and deoxygenated blood enters the bloodstream. The temporary procedure is to create an atrial septal defect. A permanent fix is more complicated and involves redirecting the pulmonary return to the right atrium and the systemic return to the left atrium, which is known as the Senning procedure. The Rastelli procedure can also be done by rerouting the left ventricular outflow, dividing the pulmonary trunk, and placing a conduit in between the right ventricle and pulmonary trunk. Levo-transposition happens in about 1 in 13,000 newborns and is characterized by the left ventricle pumping blood into the lungs and the right ventricle pumping the blood into the aorta. This may not produce problems at the beginning, but problems eventually develop due to the different pressures each ventricle uses to pump blood. Switching the left ventricle to be the systemic ventricle and the right ventricle to pump blood into the pulmonary artery can repair levo-transposition. Persistent truncus arteriosus Persistent truncus arteriosus occurs when the truncus arteriosus fails to split into the aorta and pulmonary trunk. This occurs in about 1 in 11,000 live births and allows both oxygenated and deoxygenated blood into the body. The repair consists of a VSD closure and the Rastelli procedure. Ebstein anomaly Ebstein's anomaly is characterized by a right atrium that is significantly enlarged and a heart that is shaped like a box. This is very rare and happens in less than 1% of congenital heart disease cases. The surgical repair varies depending on the severity of the disease. Pediatric cardiology is a sub-specialty of pediatrics. To become a pediatric cardiologist in the U.S., one must complete a three-year residency in pediatrics, followed by a three-year fellowship in pediatric cardiology. According to Doximity, pediatric cardiologists earn an average of $303,917 in the U.S. Diagnostic tests in cardiology Diagnostic tests in cardiology are the methods of identifying heart conditions associated with healthy vs. unhealthy, pathologic heart function. The starting point is obtaining a medical history, followed by auscultation. Then blood tests, electrophysiological procedures, and cardiac imaging can be ordered for further analysis. Electrophysiological procedures include electrocardiogram, cardiac monitoring, cardiac stress testing, and the electrophysiology study. Trials Cardiology is known for randomized controlled trials that guide clinical treatment of cardiac diseases. While dozens are published every year, there are landmark trials that shift treatment significantly. Trials are often known by an acronym of the trial name, and this acronym is used to reference the trial and its results. 
Some of these landmark trials include: V-HeFT (1986) — use of vasodilators (hydralazine & isosorbide dinitrate) in heart failure ISIS-2 (1988) — use of aspirin in myocardial infarction CAST I (1991) — use of antiarrhythmic agents after a heart attack increases mortality SOLVD (1991) — use of ACE inhibitors in heart failure 4S (1994) — statins reduce risk of heart disease CURE (2001) — use of dual antiplatelet therapy in NSTEMI MIRACLE (2002) — use of cardiac resynchronization therapy in heart failure SCD-HeFT (2005) — the use of implantable cardioverter-defibrillator in heart failure RE-LY (2009), ROCKET-AF (2011), ARISTOTLE (2011) — use of DOACs in atrial fibrillation instead of warfarin ISCHEMIA (2020) — medical therapy is as good as coronary stents in stable heart disease Cardiology community Associations American College of Cardiology American Heart Association European Society of Cardiology Heart Rhythm Society Canadian Cardiovascular Society Indian Heart Association National Heart Foundation of Australia Cardiology Society of India Journals Acta Cardiologica American Journal of Cardiology Annals of Cardiac Anaesthesia Current Research: Cardiology Cardiology in Review Circulation Circulation Research Clinical and Experimental Hypertension Clinical Cardiology EP – Europace European Heart Journal Heart Heart Rhythm International Journal of Cardiology Journal of the American College of Cardiology Pacing and Clinical Electrophysiology Indian Heart Journal Cardiologists Robert Atkins (1930–2003), known for the Atkins diet Eugene Braunwald (born 1929), editor of Braunwald's Heart Disease and 1000+ publications Wallace Brigden (1916–2008), identified cardiomyopathy Manoj Durairaj (1971– ), cardiologist from Pune, India who received Pro Ecclesia et Pontifice Willem Einthoven (1860–1927), a physiologist who built the first practical ECG and won the 1924 Nobel Prize in Physiology or Medicine ("for the discovery of the mechanism of the electrocardiogram") Werner Forssmann (1904–1979), who infamously performed the first human catheterization on himself, which led to him being let go from Berliner Charité Hospital, quitting cardiology as a speciality, and then winning the 1956 Nobel Prize in Physiology or Medicine ("for their discoveries concerning heart catheterization and pathological changes in the circulatory system") Andreas Gruentzig (1939–1985), first developed balloon angioplasty William Harvey (1578–1657), wrote Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus that first described the closed circulatory system and whom Forssmann described as founding cardiology in his Nobel lecture Murray S. Hoffman (1924–2018), who as president of the Colorado Heart Association initiated one of the first jogging programs promoting cardiac health Max Holzmann (1899–1994), co-founder of the Swiss Society of Cardiology, president from 1952 to 1955 Samuel A. 
Levine (1891–1966), recognized the sign known as Levine's sign as well as the current grading of the intensity of heart murmurs, known as the Levine scale Henry Joseph Llewellyn "Barney" Marriott (1917–2007), ECG interpretation and Practical Electrocardiography Bernard Lown (1921–2021), original developer of the defibrillator Woldemar Mobitz (1889–1951), described and classified the two types of second-degree atrioventricular block often called "Mobitz Type I" and "Mobitz Type II" Jacqueline Noonan (1928–2020), discoverer of Noonan syndrome that is the top syndromic cause of congenital heart disease John Parkinson (1885–1976), known for Wolff–Parkinson–White syndrome Helen B. Taussig (1898–1986), founder of pediatric cardiology and extensively worked on blue baby syndrome Paul Dudley White (1886–1973), known for Wolff–Parkinson–White syndrome Fredrick Arthur Willius (1888–1972), founder of the cardiology department at the Mayo Clinic and an early pioneer of electrocardiography Louis Wolff (1898–1972), known for Wolff–Parkinson–White syndrome Karel Frederik Wenckebach (1864–1940), first described what is now called type I second-degree atrioventricular block in 1898 See also Glossary of medicine List of cardiac pharmaceutical agents Outline of cardiology References Sources External links American Heart Association
5422
https://en.wikipedia.org/wiki/Capcom
Capcom
is a Japanese video game company. It has created a number of multi-million-selling game franchises, with its most commercially successful being Resident Evil, Monster Hunter, Street Fighter, Mega Man, Devil May Cry, Dead Rising, Ace Attorney, and Marvel vs. Capcom. Mega Man himself serves as the official mascot of the company. Established in 1979, it has become an international enterprise with subsidiaries in East Asia (Hong Kong), Europe (London, England), and North America (San Francisco, California). History Capcom's predecessor, I.R.M. Corporation, was founded on May 30, 1979 by Kenzo Tsujimoto, who was still president of Irem Corporation when he founded I.R.M. He worked concomitantly in both companies until leaving the former in 1983. The original companies that spawned Capcom's Japan branch were I.R.M. and its subsidiary Japan Capsule Computers Co., Ltd., both of which were devoted to the manufacture and distribution of electronic game machines. The two companies underwent a name change to Sanbi Co., Ltd. in September 1981. On June 11, 1983, Tsujimoto established Capcom Co., Ltd. for the purpose of taking over the internal sales department. In January 1989, Capcom Co., Ltd. merged with Sanbi Co., Ltd., resulting in the current Japan branch. The name Capcom is a clipped compound of "Capsule Computers", a term coined by the company for the arcade machines it solely manufactured in its early years, designed to set themselves apart from personal computers that were becoming widespread. "Capsule" alludes to how Capcom likened its game software to "a capsule packed to the brim with gaming fun", and to the company's desire to protect its intellectual property with a hard outer shell, preventing illegal copies and inferior imitations. Capcom's first product was the medal game Little League (1983). It released its first arcade video game, Vulgus (May 1984). Starting with the arcade hit 1942 (1984), they began designing games with international markets in mind. The successful 1985 arcade games Commando and Ghosts 'n Goblins have been credited as the products "that shot [Capcom] to 8-bit silicon stardom" in the mid-1980s. Starting with Commando (late 1985), Capcom began licensing their arcade games for release on home computers, notably to British software houses Elite Systems and U.S. Gold in the late 1980s. Beginning with a Nintendo Entertainment System port of 1942 (published in Dec. 1985), the company ventured into the market of home console video games, which would eventually become its main business. The Capcom USA division had a brief stint in the late 1980s as a video game publisher for Commodore 64 and IBM PC DOS computers, although development of these arcade ports was handled by other companies. Capcom went on to create 15 multi-million-selling home video game franchises, with the best-selling being Resident Evil (1996). Their highest-grossing is the fighting game Street Fighter II (1991), driven largely by its success in arcades. In the late 1980s, Capcom was on the verge of bankruptcy when the development of a strip Mahjong game called Mahjong Gakuen started. It outsold Ghouls 'n Ghosts, the eighth highest-grossing arcade game of 1989 in Japan, and is credited with saving the company from financial crisis. Capcom has been noted as the last major publisher to be committed to 2D games, though it was not entirely by choice. 
The company's commitment to the Super Nintendo Entertainment System as its platform of choice caused them to lag behind other leading publishers in developing 3D-capable arcade boards. Also, the 2D animated cartoon-style graphics seen in games such as Darkstalkers: The Night Warriors and X-Men: Children of the Atom proved popular, leading Capcom to adopt them as a signature style and use them in more games. In 1990, Capcom entered the bowling industry with Bowlingo. It was a coin-operated, electro-mechanical, fully automated mini ten-pin bowling installation. It was smaller and cheaper than a standard bowling alley, designed for amusement arcades. Bowlingo drew significant earnings in North America upon release in 1990. In 1994, Capcom adapted its Street Fighter series of fighting games into a film of the same name. While commercially successful, it was critically panned. A 2002 adaptation of its Resident Evil series faced similar criticism but was also successful in theaters. The company sees films as a way to build sales for its video games. Capcom partnered with Nyu Media in 2011 to publish and distribute the Japanese independent (dōjin soft) games that Nyu localized into the English language. The company works with the Polish localization company QLOC to port Capcom's games to other platforms; notable examples are DmC: Devil May Cry's PC version and its PlayStation 4 and Xbox One remasters, Dragon's Dogma's PC version, and Dead Rising's version on PlayStation 4, Xbox One, and PC. In 2012, Capcom came under criticism for controversial sales tactics, such as the implementation of disc-locked content, which requires players to pay for additional content that is already available within the game's files, most notably in Street Fighter X Tekken. The company defended the practice. It has also been criticized for other business decisions, such as not releasing certain games outside of Japan (most notably the Sengoku Basara series), abruptly cancelling anticipated projects (most notably Mega Man Legends 3), and shutting down Clover Studio. On August 27, 2014, Capcom filed a patent infringement lawsuit against Koei Tecmo Games at the Osaka District Court for 980 million yen in damages. Capcom claimed Koei Tecmo infringed a patent it obtained in 2002 regarding a play feature in video games. On 2 November 2020, the company reported that its servers were affected by ransomware, scrambling its data, and the threat actors, the Ragnar Locker hacker group, had allegedly stolen 1TB of sensitive corporate data and were blackmailing Capcom to pay them to remove the ransomware. By mid-November, the group began putting information from the hack online, which included contact information for up to 350,000 of the company's employees and partners, as well as plans for upcoming games, indicating that Capcom opted to not pay the group. Capcom affirmed that no credit-card or other sensitive financial information was obtained in the hack. In 2021, Capcom removed appearances of the Rising Sun Flag from their rerelease of Street Fighter II. Although Capcom did not provide an official explanation for the flag's removal, due to the flag-related controversy, it is speculated that this was done to avoid offending segments of the international gaming community. Artist and author Judy A. Juracek filed a lawsuit in June 2021 against Capcom for copyright infringement. 
In the court filings, she asserted Capcom had used images from her 1996 book Surfaces in their cover art and other assets for Resident Evil 4, Devil May Cry and other games. This was discovered due to the 2020 Capcom data breach, with several files and images matching those that were included within the book's companion CD-ROM. The court filings noted one image file of a metal surface, named ME0009 in Capcom's files, to have the same exact name on the book's CD-ROM. Juracek was seeking over in damages and $2,500 to $25,000 in false copyright management for each photograph Capcom used. Before a court date could be made, the matter was settled "amicably" in February 2022. It comes on the heels of Capcom being accused by Dutch movie director Richard Raaphorst of copying the monster design of his movie Frankenstein's Army into their game Resident Evil Village. In February 2022, it was reported by Bloomberg that Saudi Arabia's Public Investment Fund had purchased a 5% stake in Capcom, for an approximate value of US$332 million. In July 2023, Capcom acquired Tokyo-based computer graphics studio Swordcanes Studio. Corporate structure Development divisions In its beginning few years, Capcom's Japan branch had three development groups referred to as "Planning Rooms", led by Tokuro Fujiwara, Takashi Nishiyama and Yoshiki Okamoto. Later, games developed internally were created by several numbered "Production Studios", each assigned to different games. Starting in 2002, the development process was reformed to better share technologies and expertise, and the individual studios were gradually restructured into bigger departments responsible for different tasks. While there are self-contained departments for the creation of arcade, pachinko and pachislo, online, and mobile games, the Consumer Games R&D Division is an amalgamation of subsections in charge of game development stages. Capcom has two internal Consumer Games Development divisions: Division 1, headed by Jun Takeuchi, develops Resident Evil, Mega Man, Devil May Cry, Dead Rising, and other major franchises (usually targeting global audiences). Division 2, headed by Ryozo Tsujimoto, develops Ace Attorney, Onimusha, Sengoku Basara, Ōkami, and other franchises with more traditional IP (usually targeting audiences in Asia) alongside online-focused franchises such as Monster Hunter, Street Fighter, Marvel vs. Capcom, and Lost Planet. In addition to these teams, Capcom commissions outside development studios to ensure a steady output of titles. However, following poor sales of Dark Void and Bionic Commando, its management has decided to limit outsourcing to sequels and newer versions of installments in existing franchises, reserving the development of original titles for its in-house teams. The production of games, budgets, and platform support are decided on in development approval meetings, attended by the company management and the marketing, sales and quality control departments. Branches and subsidiaries Capcom Co., Ltd.'s head office building and R&D building are in Chūō-ku, Osaka. The parent company also has a branch office in the Shinjuku Mitsui Building in Nishi-Shinjuku, Shinjuku, Tokyo; and the Ueno Facility, a branch office in Iga, Mie Prefecture. The international Capcom Group encompasses 12 subsidiaries in Japan, rest of East Asia, North America, and Europe. 
Game-related media In addition to home, online, mobile, arcade, pachinko, and pachislot games, Capcom publishes strategy guides; maintains its own Plaza Capcom arcade centers in Japan; and licenses its franchise and character properties for tie-in products, movies, television series, and stage performances. Suleputer, an in-house marketing and music label established in cooperation with Sony Music Entertainment Intermedia in 1998, publishes CDs, DVDs, and other media based on Capcom's games. Captivate (renamed from Gamers Day in 2008), an annual private media summit, is traditionally used for new game and business announcements. Games Capcom started its Street Fighter franchise in 1987. The series of fighting games is among the most popular in its genre. Having sold more than 50 million copies, it is one of Capcom's flagship franchises. The company also introduced its Mega Man series in 1987, which has sold 40 million copies. The company released the first entry in its Resident Evil survival horror series in 1996, which became its most successful game series, selling more than 140 million copies. After releasing the second entry in the Resident Evil series, Capcom began developing a Resident Evil game for the PlayStation 2. As it was significantly different from the existing series' games, Capcom decided to spin it into its own series, Devil May Cry. The first three entries were exclusively for PlayStation 2; further entries were released for non-Sony consoles. The entire series has sold almost 30 million copies. Capcom began its Monster Hunter series in 2004, which has sold more than 90 million copies on a variety of consoles. Although the company often relies on existing franchises, it has also published and developed several titles for the Xbox 360, PlayStation 3, and Wii based on original intellectual property: Lost Planet: Extreme Condition, Dead Rising, Dragon's Dogma, Asura's Wrath, and Zack and Wiki. During this period, Capcom also helped publish several original titles from up-and-coming Western developers, including Remember Me, Dark Void, and Spyborgs, titles other publishers were not willing to gamble on. Other games of note are the titles Ōkami, Ōkamiden, and Ghost Trick: Phantom Detective. In 2015, the PlayStation 4 version of Ultra Street Fighter IV was pulled from the Capcom Pro Tour due to numerous technical issues and bugs. In 2016, Capcom released Street Fighter V with very limited single player content. At launch, there were stability issues with the game's network that booted players mid-game even when they were not playing in an online mode. Street Fighter V failed to meet its sales target of 2 million copies in March 2016. Platinum Titles Capcom compiles a "Platinum Titles" list, updated quarterly, of its games that have sold over one million copies. As of June 30, 2023, it contained over 100 video games. See also Articles Capcom Cup Capcom Five DreamHack Evolution Championship Series Companies founded by ex-Capcom employees References External links Official website Companies based in Osaka Companies listed on the Tokyo Stock Exchange Golden Joystick Award winners Japanese brands Japanese companies established in 1979 Pinball manufacturers Public Investment Fund Video game companies established in 1979 Video game companies of Japan Video game development companies Video game publishers 1993 initial public offerings
5428
https://en.wikipedia.org/wiki/History%20of%20Cambodia
History of Cambodia
The history of Cambodia, a country in mainland Southeast Asia, can be traced back to Indian civilization. Detailed records of a political structure on the territory of what is now Cambodia first appear in Chinese annals in reference to Funan, a polity that encompassed the southernmost part of the Indochinese peninsula during the 1st to 6th centuries. Centered at the lower Mekong, Funan is noted as the oldest regional Hindu culture, which suggests prolonged socio-economic interaction with maritime trading partners of the Indosphere in the west. By the 6th century a civilization, called Chenla or Zhenla in Chinese annals, firmly replaced Funan, as it controlled larger, more undulating areas of Indochina and maintained more than a singular centre of power. The Khmer Empire was established by the early 9th century. Sources refer here to a mythical initiation and consecration ceremony to claim political legitimacy by founder Jayavarman II at Mount Kulen (Mount Mahendra) in 802 CE. A succession of powerful sovereigns, continuing the Hindu devaraja cult tradition, reigned over the classical era of Khmer civilization until the 11th century. A new dynasty of provincial origin introduced Buddhism, which according to some scholars resulted in royal religious discontinuities and general decline. The royal chronology ends in the 14th century. Great achievements in administration, agriculture, architecture, hydrology, logistics, urban planning and the arts are testimony to a creative and progressive civilisation - in its complexity a cornerstone of Southeast Asian cultural legacy. The decline continued through a transitional period of approximately 100 years followed by the Middle Period of Cambodian history, also called the Post-Angkor Period, beginning in the mid 15th century. Although the Hindu cults had by then been all but replaced, the monument sites at the old capital remained an important spiritual centre. Yet since the mid 15th century the core population steadily moved to the east and – with brief exceptions – settled at the confluence of the Mekong and Tonle Sap rivers at Chaktomuk, Longvek and Oudong. Maritime trade was the basis for a very prosperous 16th century. But, as a result foreigners – Muslim Malays and Cham, Christian European adventurers and missionaries – increasingly disturbed and influenced government affairs. Ambiguous fortunes, a robust economy on the one hand and a disturbed culture and compromised royalty on the other were constant features of the Longvek era. By the 15th century, the Khmers' traditional neighbours, the Mon people in the west and the Cham people in the east had gradually been pushed aside or replaced by the resilient Siamese/Thai and Annamese/Vietnamese, respectively. These powers had perceived, understood and increasingly followed the imperative of controlling the lower Mekong basin as the key to control all Indochina. A weak Khmer kingdom only encouraged the strategists in Ayutthaya (later in Bangkok) and in Huế. Attacks on and conquests of Khmer royal residences left sovereigns without a ceremonial and legitimate power base. Interference in succession and marriage policies added to the decay of royal prestige. Oudong was established in 1601 as the last royal residence of the Middle Period. 
The 19th-century arrival of then technologically more advanced and ambitious European colonial powers with concrete policies of global control put an end to regional feuds. While Siam/Thailand, although humiliated and on the retreat, escaped colonisation as a buffer state, Vietnam was to be the focal point of French colonial ambition. Cambodia, although largely neglected, had entered the Indochinese Union as a perceived entity and was able to carry and reclaim its identity and integrity into modernity. After 80 years of colonial hibernation, the brief episode of Japanese occupation during World War II, which coincided with the investiture of King Sihanouk, was the opening act for the irreversible process towards re-emancipation and modern Cambodian history. The Kingdom of Cambodia (1953–70), independent since 1953, struggled to remain neutral in a world shaped by the polarisation of the nuclear powers, the USA and the Soviet Union. As the Indochinese war escalated and Cambodia became increasingly involved, the Khmer Republic emerged in 1970. Another result was a civil war which, by 1975, ended with the takeover by the Khmer Rouge. Cambodia endured its darkest hour – Democratic Kampuchea – and the long aftermath of Vietnamese occupation, the People's Republic of Kampuchea and the UN mandate towards modern Cambodia since 1993. Prehistory and early history Radiocarbon dating of a cave at Laang Spean in Battambang Province, northwest Cambodia, confirmed the presence of Hoabinhian stone tools from 6000–7000 BCE and pottery from 4200 BCE. Starting in 2009, archaeological research by the Franco-Cambodian Prehistoric Mission has documented a complete cultural sequence from 71,000 years BP to the Neolithic period in the cave. Finds since 2012 have led to the common interpretation that the cave contains the archaeological remains of a first occupation by hunter and gatherer groups, followed by Neolithic people with highly developed hunting strategies and stone tool making techniques, as well as highly artistic pottery making and design, and with elaborate social, cultural, symbolic and exequial practices. Cambodia participated in the Maritime Jade Road, which was in place in the region for 3,000 years, from 2000 BCE to 1000 CE. Skulls and human bones found at Samrong Sen in Kampong Chhnang Province date from 1500 BCE. Heng Sophady (2007) has drawn comparisons between Samrong Sen and the circular earthwork sites of eastern Cambodia. These people may have migrated from South-eastern China to the Indochinese Peninsula. Scholars trace the first cultivation of rice and the first bronze making in Southeast Asia to these people. A 2010 examination of skeletal material from graves at Phum Snay in north-west Cambodia revealed an exceptionally high number of injuries, especially to the head, likely to have been caused by interpersonal violence. The graves also contain a quantity of swords and other offensive weapons used in conflict. The Iron Age period of Southeast Asia begins around 500 BCE and lasts until the end of the Funan era, around 500 CE; it provides the first concrete evidence for sustained maritime trade and socio-political interaction with India and South Asia. By the 1st century, settlers had developed complex, organised societies and a varied religious cosmology, which required advanced spoken languages very much related to those of the present day. 
The most advanced groups lived along the coast and in the lower Mekong River valley and the delta regions in houses on stilts, where they cultivated rice, fished and kept domesticated animals. Funan Kingdom (1st century – 550/627) Chinese annals contain detailed records of the first known organised polity, the Kingdom of Funan, on Cambodian and Vietnamese territory characterised by "high population and urban centers, the production of surplus food...socio-political stratification [and] legitimized by Indian religious ideologies". Funan was centered around the lower Mekong and Bassac rivers from the first to the sixth century CE, with "walled and moated cities" such as Angkor Borei in Takeo Province and Óc Eo in modern An Giang Province, Vietnam. Early Funan was composed of loose communities, each with its own ruler, linked by a common culture and a shared economy of rice farming people in the hinterland and traders in the coastal towns, who were economically interdependent, as surplus rice production found its way to the ports. By the second century CE Funan controlled the strategic coastline of Indochina and the maritime trade routes. Cultural and religious ideas reached Funan via the Indian Ocean trade route. Trade with India had commenced well before 500 BCE, as Sanskrit had not yet replaced Pali. Funan's language has been determined to have been an early form of Khmer, and its written form was Sanskrit. In the period 245–250 CE, dignitaries of the Chinese Kingdom of Wu visited the Funan city Vyadharapura. Envoys Kang Tai and Zhu Ying described Funan as a distinct Hindu culture. Trade with China had begun after the southward expansion of the Han Dynasty, around the 2nd century BCE. Effectively, Funan "controlled strategic land routes in addition to coastal areas" and occupied a prominent position as an "economic and administrative hub" between the Indian Ocean trade network and China, collectively known as the Maritime Silk Road. Trade routes that eventually ended in distant Rome are corroborated by Roman and Persian coins and artefacts unearthed at archaeological sites of 2nd and 3rd century settlements. Funan is associated with myths, such as the Kattigara legend and the Khmer founding legend in which an Indian Brahman or prince named Preah Thaong in Khmer, Kaundinya in Sanskrit and Hun-t'ien in Chinese records marries the local ruler, a princess named Nagi Soma (Lieu-Ye in Chinese records), thus establishing the first Cambodian royal dynasty. Scholars debate as to how deep the narrative is rooted in actual events and on Kaundinya's origin and status. A Chinese document that underwent four alterations and a 3rd-century epigraphic inscription of Champa are the contemporary sources. Some scholars consider the story to be simply an allegory for the diffusion of Indic Hindu and Buddhist beliefs into ancient local cosmology and culture, whereas some historians dismiss it chronologically. Chinese annals report that Funan reached its territorial climax in the early 3rd century under the rule of King Fan Shih-man, extending as far south as Malaysia and as far west as Burma. A system of mercantilism in commercial monopolies was established. Exports ranged from forest products to precious metals and commodities such as gold, elephants, ivory, rhinoceros horn, kingfisher feathers, wild spices like cardamom, lacquer, hides and aromatic wood. 
Under Fan Shih-man, Funan maintained a formidable fleet and was administered by an advanced bureaucracy, based on a "tribute-based economy, that produced a surplus which was used to support foreign traders along its coasts and ostensibly to launch expansionist missions to the west and south". Historians maintain contradicting ideas about Funan's political status and integrity. Miriam T. Stark calls it simply Funan: [The] "notion of Fu Nan as an early "state"...has been built largely by historians using documentary and historical evidence" and Michael Vickery remarks: "Nevertheless, it is...unlikely that the several ports constituted a unified state, much less an 'empire'". Other sources, though, imply imperial status: "Vassal kingdoms spread to southern Vietnam in the east and to the Malay peninsula in the west" and "Here we will look at two empires of this period...Funan and Srivijaya". The question of how Funan came to an end is, in the face of almost universal scholarly conflict, impossible to pin down. Chenla is the name of Funan's successor in Chinese annals, first appearing in 616/617 CE. The archaeological approach to and interpretation of the entire early historic period is considered to be a decisive supplement for future research. The "Lower Mekong Archaeological Project" focuses on the development of political complexity in this region during the early historic period. LOMAP survey results of 2003 to 2005, for example, have helped to determine that "...the region's importance continued unabated throughout the pre-Angkorian period...and that at least three [surveyed areas] bear Angkorian-period dates and suggest the continued importance of the delta." Chenla Kingdom (6th century – 802) The History of the Chinese Sui dynasty contains records that a state called Chenla sent an embassy to China in 616 or 617 CE. It says that Chenla was a vassal of Funan, but that under its ruler Citrasena-Mahendravarman it conquered Funan and gained independence. Most of the Chinese recordings on Chenla, including that of Chenla conquering Funan, have been contested since the 1970s, as they are generally based on single remarks in the Chinese annals; author Claude Jacques emphasised the very vague character of the Chinese terms 'Funan' and 'Chenla' as more domestic epigraphic sources became available. Claude Jacques summarises: "Very basic historical mistakes have been made" because "the history of pre-Angkorean Cambodia was reconstructed much more on the basis of Chinese records than on that of [Cambodian] inscriptions" and as new inscriptions were discovered, researchers "preferred to adjust the newly discovered facts to the initial outline rather than to call the Chinese reports into question". The notion of Chenla's centre being in modern Laos has also been contested. "All that is required is that it be inland from Funan." The most important political record of pre-Angkor Cambodia, the inscription K53 from Ba Phnom, dated 667 CE, does not indicate any political discontinuity, either in the royal succession of kings Rudravarman, Bhavavarman I, Mahendravarman [Citrasena], Īśānavarman, and Jayavarman I or in the status of the family of officials who produced the inscription. Another inscription of a few years later, K44, 674 CE, commemorating a foundation in Kampot province under the patronage of Jayavarman I, refers to an earlier foundation in the time of King Raudravarma, presumably Rudravarman of Funan, and again there is no suggestion of political discontinuity. 
The History of the T'ang asserts that shortly after 706 the country was split into Land Chenla and Water Chenla. The names signify a northern and a southern half, which may conveniently be referred to as Upper and Lower Chenla. By the late 8th century Water Chenla had become a vassal of the Sailendra dynasty of Java – the last of its kings was killed and the polity incorporated into the Javanese monarchy around 790 CE. Land Chenla acquired independence under Jayavarman II in 802 CE. The Khmers, vassals of Funan, reached the Mekong river from the northern Menam River via the Mun River Valley. Chenla, their first independent state, developed out of Funanese influence. Ancient Chinese records mention two kings, Shrutavarman and Shreshthavarman, who ruled at the capital Shreshthapura, located in modern-day southern Laos. The immense influence on the identity of Cambodia to come was wrought by the Khmer Kingdom of Bhavapura, in the modern-day Cambodian city of Kampong Thom. Its legacy was its most important sovereign, Ishanavarman, who completely conquered the kingdom of Funan during 612–628. He chose his new capital at Sambor Prei Kuk, naming it Ishanapura. Khmer Empire (802–1431) The six centuries of the Khmer Empire are characterised by unparalleled technical and artistic progress and achievements, political integrity and administrative stability. The empire represents the cultural and technical apogee of the Cambodian and Southeast Asian pre-industrial civilisation. The Khmer Empire was preceded by Chenla, a polity with shifting centres of power, which was split into Land Chenla and Water Chenla in the early 8th century. By the late 8th century Water Chenla was absorbed by the Malays of the Srivijaya Empire and the Javanese of the Shailendra Empire and eventually incorporated into Java and Srivijaya. Jayavarman II, ruler of Land Chenla, initiated a mythical Hindu consecration ceremony at Mount Kulen (Mount Mahendra) in 802 CE, intended to proclaim political autonomy and royal legitimacy. As he declared himself devaraja – god-king, divinely appointed and uncontested – he simultaneously declared independence from Shailendra and Srivijaya. He established Hariharalaya, the first capital of the Angkorean area, near the modern town of Roluos. Indravarman I (877–889) and his son and successor Yasovarman I (889–900), who established the capital Yasodharapura, ordered the construction of huge water reservoirs (barays) north of the capital. The water management network depended on elaborate configurations of channels, ponds, and embankments built from huge quantities of clayey sand, the available bulk material on the Angkor plain. Dikes of the East Baray still exist today, which are more than long and wide. The largest component is the West Baray, a reservoir about long and across, containing approximately 50 million m3 of water. Royal administration was based on the religious idea of the Shivaite Hindu state and the central cult of the sovereign as warlord and protector – the "Varman". This centralised system of governance appointed royal functionaries to provinces. The Mahidharapura dynasty – whose first king was Jayavarman VI (1080 to 1107) and which originated west of the Dângrêk Mountains in the Mun river valley – discontinued the old "ritual policy", genealogical traditions and, crucially, Hinduism as the exclusive state religion. Some historians relate the empire's decline to these religious discontinuities. 
The area that comprises the various capitals was spread out over around ; it is nowadays commonly called Angkor. The combination of sophisticated wet-rice agriculture, based on an engineered irrigation system, and the Tonlé Sap's spectacular abundance of fish and aquatic fauna as a protein source guaranteed a regular food surplus. Recent geo-surveys have confirmed that Angkor maintained the largest pre-industrial settlement complex worldwide during the 12th and 13th centuries – some three quarters of a million people lived there. Sizeable contingents of the public workforce were to be redirected to monument building and infrastructure maintenance. A growing number of researchers relate the progressive over-exploitation of the delicate local eco-system and its resources, alongside large-scale deforestation and resulting erosion, to the empire's eventual decline. Under King Suryavarman II (1113–1150) the empire reached its greatest geographic extent, as it directly or indirectly controlled Indochina, the Gulf of Thailand and large areas of northern maritime Southeast Asia. Suryavarman II commissioned the temple of Angkor Wat, built over a period of 37 years; with its five towers representing Mount Meru, it is considered to be the most accomplished expression of classical Khmer architecture. However, territorial expansion ended when Suryavarman II was killed in battle attempting to invade Đại Việt. It was followed by a period of dynastic upheaval and a Cham invasion that culminated in the sack of Angkor in 1177. King Jayavarman VII (reigned 1181–1219) is generally considered to be Cambodia's greatest king. A Mahayana Buddhist, he initiated his reign by striking back against Champa in a successful campaign. During his nearly forty years in power he became the most prolific monument builder, establishing the city of Angkor Thom with its central temple, the Bayon. Further outstanding works are attributed to him – Banteay Kdei, Ta Prohm, Neak Pean and Sra Srang. The construction of an impressive number of utilitarian and secular projects and edifices, such as maintenance of the extensive road network of Suryavarman I, in particular the royal road to Phimai and the many rest houses, bridges and hospitals, makes Jayavarman VII unique among all imperial rulers. In August 1296, the Chinese diplomat Zhou Daguan arrived at Angkor and remained at the court of King Srindravarman until July 1297. He wrote a detailed report, The Customs of Cambodia, on life in Angkor. His portrayal is one of the most important sources of understanding historical Angkor, as the text offers valuable information on the everyday life and the habits of the inhabitants of Angkor. The last Sanskrit inscription is dated 1327, and records the succession of Indrajayavarman by Jayavarman IX Parameshwara (1327–1336). The empire was an agrarian state that consisted essentially of three social classes: the elite, workers and slaves. The elite included advisers, military leaders, courtiers, priests, religious ascetics and officials. Workers included agricultural labourers and also a variety of craftsmen for construction projects. Slaves were often captives from military campaigns or distant villages. Coinage did not exist and the barter economy was based on agricultural produce, principally rice, with regional trade as an insignificant part of the economy. 
Post-Angkor Period of Cambodia (1431–1863) The term "Post-Angkor Period of Cambodia", also the "Middle Period", refers to the historical era from the early 15th century to 1863, the beginning of the French Protectorate of Cambodia. Reliable sources – particularly for the 15th and 16th century – are very rare. A conclusive explanation that relates to concrete events manifesting the decline of the Khmer Empire has not yet been produced. However, most modern historians contend that several distinct and gradual changes of a religious, dynastic, administrative and military nature, environmental problems and ecological imbalance coincided with shifts of power in Indochina and must all be taken into account to make an interpretation. In recent years, focus has notably shifted towards studies on climate changes, human–environment interactions and the ecological consequences. Epigraphy in temples ends in the third decade of the fourteenth century and does not resume until the mid-16th century. Recording of the Royal Chronology discontinues with King Jayavarman IX Parameshwara (or Jayavarma-Paramesvara) – there exists not a single contemporary record of even a king's name for over 200 years. Construction of monumental temple architecture had come to a standstill after Jayavarman VII's reign. According to author Michael Vickery, only external sources exist for Cambodia's 15th century: the Chinese Ming Shilu annals and the earliest Royal Chronicle of Ayutthaya. Wang Shi-zhen (王世貞), a Chinese scholar of the 16th century, remarked: "The official historians are unrestrained and are skilful at concealing the truth; but the memorials and statutes they record and the documents they copy cannot be discarded." The central reference point for the entire 15th century is a Siamese intervention of some undisclosed nature at the capital Yasodharapura (Angkor Thom) around the year 1431. Historians relate the event to the shift of Cambodia's political centre southward to the region of Phnom Penh, Longvek and later Oudong. Sources for the 16th century are more numerous. The kingdom was centred at the Mekong, prospering as an integral part of the Asian maritime trade network, via which the first contact with European explorers and adventurers occurred. Wars with the Siamese resulted in the loss of territory and eventually the conquest of the capital Longvek in 1594. Richard Cocks of the East India Company established trade with Cochinchina and Cambodia by 1618, but the Cambodian commerce was not authorized by the directors in London and was short-lived until it was revived in 1651, again without authorization. The Vietnamese, on their "Southward March", reached Prei Nokor/Saigon at the Mekong Delta in the 17th century. This event initiated the slow process of Cambodia losing access to the seas and independent marine trade. Siamese and Vietnamese dominance intensified during the 17th and 18th centuries, resulting in frequent displacements of the seat of power as the Khmer royal authority decreased to the state of a vassal. In the early 19th century, with dynasties in Vietnam and Siam firmly established, Cambodia was placed under joint suzerainty, having lost its national sovereignty. British agent John Crawfurd stated: "...the King of that ancient Kingdom is ready to throw himself under the protection of any European nation..." 
To save Cambodia from being incorporated into Vietnam and Siam, the Cambodians entreated the aid of the Luzones/Lucoes (Filipinos from Luzon, Philippines), who had previously participated in the Burmese–Siamese wars as mercenaries. When the embassy arrived in Luzon, the rulers were now Spaniards, so the Cambodians asked them for aid as well, together with their Latin American troops imported from Mexico, in order to restore the then-Christianised king, Satha II, as monarch of Cambodia after a Thai/Siamese invasion was repelled. However, that was only temporary. Nevertheless, the future king, Ang Duong, also enlisted the aid of the French, who were allied to the Spanish (as Spain was ruled by a French royal dynasty, the Bourbons). The Cambodian king agreed to colonial France's offers of protection in order to restore the existence of the Cambodian monarchy, which took effect with King Norodom Prohmbarirak signing and officially recognising the French protectorate on 11 August 1863. French colonial period (1863–1953) In August 1863, King Norodom signed an agreement with the French placing the kingdom under the protection of France. The original treaty left Cambodian sovereignty intact, but French control gradually increased, with important landmarks in 1877, 1884, and 1897, until by the end of the century the king's authority no longer existed outside the palace. Norodom died in 1904, and his two successors, Sisowath and Monivong, were content to allow the French to control the country, but in 1940 France was defeated in a brief border war with Thailand and forced to surrender the provinces of Battambang and Angkor (the ancient site of Angkor itself was retained). King Monivong died in April 1941, and the French placed the obscure Prince Sihanouk on the throne as king, believing that the inexperienced 18-year-old would be more pliable than Monivong's middle-aged son, Prince Monireth. Cambodia's situation at the end of the war was chaotic. The Free French, under General Charles de Gaulle, were determined to recover Indochina, though they offered Cambodia and the other Indochinese protectorates a carefully circumscribed measure of self-government. Convinced that they had a "civilizing mission", they envisioned Indochina's participation in a French Union of former colonies that shared the common experience of French culture. Administration of Sihanouk (1953–70) On 9 March 1945, during the Japanese occupation of Cambodia, young King Norodom Sihanouk proclaimed an independent Kingdom of Kampuchea, following a formal request by the Japanese. Shortly thereafter the Japanese government nominally ratified the independence of Cambodia and established a consulate in Phnom Penh. The new government did away with the romanization of the Khmer language that the French colonial administration was beginning to enforce and officially reinstated the Khmer script. This measure taken by the short-lived governmental authority would be popular and long-lasting, for since then no government in Cambodia has tried to romanise the Khmer language again. After Allied military units entered Cambodia, the Japanese military forces present in the country were disarmed and repatriated. The French were able to reimpose the colonial administration in Phnom Penh in October the same year. Sihanouk's "royal crusade for independence" resulted in grudging French acquiescence to his demands for a transfer of sovereignty. A partial agreement was struck in October 1953. 
Sihanouk then declared that independence had been achieved and returned in triumph to Phnom Penh. As a result of the 1954 Geneva Conference on Indochina, Cambodia was able to bring about the withdrawal of Viet Minh troops from its territory and to withstand any residual impingement upon its sovereignty by external powers.

Neutrality was the central element of Cambodian foreign policy during the 1950s and 1960s. By the mid-1960s, parts of Cambodia's eastern provinces were serving as bases for North Vietnamese Army and National Liberation Front (NVA/NLF) forces operating against South Vietnam, and the port of Sihanoukville was being used to supply them. As NVA/VC activity grew, the United States and South Vietnam became concerned, and in 1969 the United States began a 14-month-long series of bombing raids targeted at NVA/VC elements, contributing to destabilisation. The bombing campaign initially took place no further than ten miles inside the Cambodian border, and later deeper, in areas from which the Cambodian population had been evicted by the NVA. Prince Sihanouk, fearing that the conflict between communist North Vietnam and South Vietnam might spill over into Cambodia, publicly opposed the idea of a bombing campaign by the United States along the Vietnam–Cambodia border and inside Cambodian territory. However, Peter Rodman claimed, "Prince Sihanouk complained bitterly to us about these North Vietnamese bases in his country and invited us to attack them". In December 1967 the Washington Post journalist Stanley Karnow was told by Sihanouk that if the US wanted to bomb the Vietnamese communist sanctuaries, he would not object, unless Cambodians were killed. The same message was conveyed to US President Johnson's emissary Chester Bowles in January 1968. So the US had no real motivation to overthrow Sihanouk. Sihanouk, however, wanted Cambodia to stay out of the North Vietnam–South Vietnam conflict and was very critical of the United States government and its South Vietnamese allies. Facing internal struggles of his own due to the rise of the Khmer Rouge, he did not want Cambodia to be drawn into the conflict; he wanted the United States and its South Vietnamese allies to keep the war away from the Cambodian border, and he did not allow the United States to use Cambodian airspace and airports for military purposes. This greatly upset the United States and contributed to its view of Sihanouk as a North Vietnamese sympathiser and a thorn in its side. Nevertheless, declassified documents indicate that, as late as March 1970, the Nixon administration was still hoping to garner "friendly relations" with Sihanouk.

Throughout the 1960s, domestic Cambodian politics became polarised. Opposition to the government grew within the middle class and among leftists, including Paris-educated leaders such as Son Sen, Ieng Sary, and Saloth Sar (later known as Pol Pot), who led an insurgency under the clandestine Communist Party of Kampuchea (CPK). Sihanouk called these insurgents the Khmer Rouge, literally the "Red Khmer". But the 1966 national assembly elections showed a significant swing to the right, and General Lon Nol formed a new government, which lasted until 1967. During 1968 and 1969, the insurgency worsened. However, members of the government and army who resented Sihanouk's ruling style, as well as his tilt away from the United States, did have a motivation to overthrow him.
Khmer Republic and the War (1970–75)

While Sihanouk was visiting Beijing in 1970, he was ousted by a military coup led by Prime Minister General Lon Nol and Prince Sisowath Sirik Matak in the early hours of 18 March 1970. As early as 12 March 1970, the CIA station chief had told Washington, based on communications from Sirik Matak, Lon Nol's cousin, that "the (Cambodian) army was ready for a coup". Lon Nol assumed power after the coup and immediately allied Cambodia with the United States. Son Ngoc Thanh, an opponent of Pol Pot, announced his support for the new government. On 9 October, the Cambodian monarchy was abolished, and the country was renamed the Khmer Republic. The new regime immediately demanded that the Vietnamese communists leave Cambodia.

Hanoi rejected the new republic's request for the withdrawal of NVA troops. In response, the United States moved to provide material assistance to the new government's armed forces, which were engaged against both CPK insurgents and NVA forces. The North Vietnamese and Viet Cong forces, desperate to retain their sanctuaries and supply lines from North Vietnam, immediately launched armed attacks on the new government. The North Vietnamese quickly overran large parts of eastern Cambodia, coming within striking distance of Phnom Penh, and turned the newly won territories over to the Khmer Rouge. The deposed Sihanouk urged his followers to help overthrow the new government, hastening the onset of civil war.

In April 1970, US President Richard Nixon announced to the American public that US and South Vietnamese ground forces had entered Cambodia in a campaign aimed at destroying NVA base areas there (see Cambodian Incursion). The US had already been bombing Vietnamese positions in Cambodia for well over a year by that point. Although a considerable quantity of equipment was seized or destroyed by US and South Vietnamese forces, containment of North Vietnamese forces proved elusive.

The Khmer Republic's leadership was plagued by disunity among its three principal figures: Lon Nol, Sihanouk's cousin Sirik Matak, and National Assembly leader In Tam. Lon Nol remained in power in part because none of the others was prepared to take his place. In 1972, a constitution was adopted, a parliament elected, and Lon Nol became president. But disunity, the problems of transforming a 30,000-man army into a national combat force of more than 200,000 men, and spreading corruption weakened the civilian administration and the army.

The Khmer Rouge insurgency inside Cambodia continued to grow, aided by supplies and military support from North Vietnam. Pol Pot and Ieng Sary asserted their dominance over the Vietnamese-trained communists, many of whom were purged. At the same time, the Khmer Rouge (CPK) forces became stronger and more independent of their Vietnamese patrons. By 1973, the CPK were fighting battles against government forces with little or no North Vietnamese troop support, and they controlled nearly 60% of Cambodia's territory and 25% of its population. The government made three unsuccessful attempts to enter into negotiations with the insurgents, but by 1974 the CPK was operating openly as divisions, and some of the NVA combat forces had moved into South Vietnam. Lon Nol's control was reduced to small enclaves around the cities and the main transportation routes. More than two million refugees from the war lived in Phnom Penh and other cities.
On New Year's Day 1975, Communist troops launched an offensive which, in 117 days of the hardest fighting of the war, caused the collapse of the Khmer Republic. Simultaneous attacks around the perimeter of Phnom Penh pinned down Republican forces, while other CPK units overran fire bases controlling the vital lower Mekong resupply route. A US-funded airlift of ammunition and rice ended when Congress refused additional aid for Cambodia. The Lon Nol government in Phnom Penh surrendered on 17 April 1975, just five days after the US mission evacuated Cambodia.

Foreign involvement in the rise of the Khmer Rouge

The relationship between the massive carpet bombing of Cambodia by the United States and the growth of the Khmer Rouge, in terms of recruitment and popular support, has been a matter of interest to historians. Some historians, including Michael Ignatieff, Adam Jones and Greg Grandin, have cited the United States intervention and bombing campaign (spanning 1965–1973) as a significant factor that led to increased support for the Khmer Rouge among the Cambodian peasantry. According to Ben Kiernan, the Khmer Rouge "would not have won power without U.S. economic and military destabilization of Cambodia. ... It used the bombing's devastation and massacre of civilians as recruitment propaganda and as an excuse for its brutal, radical policies and its purge of moderate communists and Sihanoukists." Pol Pot biographer David P. Chandler writes that the bombing "had the effect the Americans wanted – it broke the Communist encirclement of Phnom Penh", but that it also accelerated the collapse of rural society and increased social polarisation. Peter Rodman and Michael Lind claimed that the United States intervention saved the Lon Nol regime from collapse in 1970 and 1973. Craig Etcheson acknowledged that U.S. intervention increased recruitment for the Khmer Rouge but disputed that it was a primary cause of the Khmer Rouge victory. William Shawcross wrote that the United States bombing and ground incursion plunged Cambodia into the chaos that Sihanouk had worked for years to avoid.

By 1973, Vietnamese support for the Khmer Rouge had largely disappeared. China "armed and trained" the Khmer Rouge both during the civil war and in the years afterward. Owing to Chinese, U.S., and Western support, the Khmer Rouge-dominated Coalition Government of Democratic Kampuchea (CGDK) held Cambodia's UN seat until 1993, long after the Cold War had ended. China has defended its ties with the Khmer Rouge. Chinese Foreign Ministry spokeswoman Jiang Yu said that "the government of Democratic Kampuchea had a legal seat at the United Nations, and had established broad foreign relations with more than 70 countries".

Democratic Kampuchea (Khmer Rouge era) (1975–79)

Immediately after its victory, the CPK ordered the evacuation of all cities and towns, sending the entire urban population into the countryside to work as farmers, as it sought to reshape society into the model Pol Pot had conceived. The new government sought to restructure Cambodian society completely. Remnants of the old society were abolished and religion was suppressed. Agriculture was collectivised, and the surviving part of the industrial base was abandoned or placed under state control. Cambodia had neither a currency nor a banking system. Democratic Kampuchea's relations with Vietnam and Thailand worsened rapidly as a result of border clashes and ideological differences.
While communist, the CPK was fiercely nationalistic, and most of its members who had lived in Vietnam were purged. Democratic Kampuchea established close ties with the People's Republic of China, and the Cambodian–Vietnamese conflict became part of the Sino-Soviet rivalry, with Moscow backing Vietnam. Border clashes worsened when the Democratic Kampuchea military attacked villages in Vietnam. The regime broke off relations with Hanoi in December 1977, protesting Vietnam's alleged attempt to create an Indochina Federation. In mid-1978, Vietnamese forces invaded Cambodia, advancing into the eastern part of the country before the arrival of the rainy season.

The reason for Chinese support of the CPK was to prevent a pan-Indochina movement and to maintain Chinese military superiority in the region. The Soviet Union supported a strong Vietnam to maintain a second front against China in case of hostilities and to prevent further Chinese expansion. Since Stalin's death, relations between Mao-controlled China and the Soviet Union had been lukewarm at best. From February to March 1979, China and Vietnam would fight the brief Sino-Vietnamese War over the issue.

In December 1978, Vietnam announced the formation of the Kampuchean United Front for National Salvation (KUFNS) under Heng Samrin, a former DK division commander. It was composed of Khmer Communists who had remained in Vietnam after 1975 and officials from the eastern sector – like Heng Samrin and Hun Sen – who had fled to Vietnam from Cambodia in 1978. In late December 1978, Vietnamese forces launched a full invasion of Cambodia, capturing Phnom Penh on 7 January 1979 and driving the remnants of Democratic Kampuchea's army westward toward Thailand.

Within the CPK, the Paris-educated leadership – Pol Pot, Ieng Sary, Nuon Chea, and Son Sen – was in control. A new constitution in January 1976 established Democratic Kampuchea as a Communist People's Republic, and a 250-member Assembly of the Representatives of the People of Kampuchea (PRA) was selected in March to choose the collective leadership of a State Presidium, the chairman of which became the head of state. Prince Sihanouk resigned as head of state on 2 April. On 14 April, after its first session, the PRA announced that Khieu Samphan would chair the State Presidium for a five-year term. It also picked a 15-member cabinet headed by Pol Pot as prime minister. Prince Sihanouk was put under virtual house arrest.

Destruction and deaths caused by the regime

20,000 people died of exhaustion or disease during the evacuation of Phnom Penh and its aftermath. Many of those forced to evacuate the cities were resettled in newly created villages, which lacked food, agricultural implements, and medical care. Many who had lived in cities had lost the skills necessary for survival in an agrarian environment, and thousands starved before the first harvest. Hunger and malnutrition – bordering on starvation – were constant during those years. Most military and civilian leaders of the former regime who failed to disguise their pasts were executed. Some ethnic groups in Cambodia, such as the Cham and the Vietnamese, suffered specific, targeted and violent persecution, to the point that some international sources refer to it as the "Cham genocide". Entire families and towns were targeted and attacked with the goal of significantly diminishing their numbers and eventually eliminating them. Life in Democratic Kampuchea was strict and brutal.
In many areas of the country people were rounded up and executed for speaking a foreign language, wearing glasses, scavenging for food, being absent from government-assigned work, and even for crying over dead loved ones. Former businessmen and bureaucrats were hunted down and killed along with their entire families; the Khmer Rouge feared that they held beliefs that could lead them to oppose the regime. A few Khmer Rouge loyalists were even killed for failing to find enough 'counter-revolutionaries' to execute. When Cambodian socialists began to rebel in the eastern zone of Cambodia, Pol Pot ordered his armies to exterminate 1.5 million eastern Cambodians, whom he branded "Cambodians with Vietnamese minds". The purge was carried out speedily: Pol Pot's soldiers killed at least 100,000 and as many as 250,000 eastern Cambodians shortly after deporting them to execution sites in the Central, North and North-Western Zones, all within about a month, making it the bloodiest episode of mass murder under Pol Pot's regime. Religious institutions were not spared either; religion was persecuted so viciously that the vast majority of Cambodia's historic architecture, including 95% of the country's Buddhist temples, was completely destroyed.

Ben Kiernan estimates that 1.671 million to 1.871 million Cambodians died as a result of Khmer Rouge policy, or between 21% and 24% of Cambodia's 1975 population. A study by the French demographer Marek Sliwinski calculated slightly fewer than 2 million unnatural deaths under the Khmer Rouge out of a 1975 Cambodian population of 7.8 million; 33.5% of Cambodian men died under the Khmer Rouge, compared to 15.7% of Cambodian women. According to a 2001 academic source, the most widely accepted estimates of excess deaths under the Khmer Rouge range from 1.5 million to 2 million, although figures as low as 1 million and as high as 3 million have been cited; conventionally accepted estimates of deaths due to Khmer Rouge executions range from 500,000 to 1 million, "a third to one half of excess mortality during the period." However, a 2013 academic source (citing research from 2009) indicates that execution may have accounted for as much as 60% of the total, with 23,745 mass graves containing approximately 1.3 million suspected victims of execution. Although this figure is considerably higher than earlier and more widely accepted estimates of Khmer Rouge executions, Craig Etcheson of the Documentation Center of Cambodia (DC-Cam) defended such estimates of over one million executions as "plausible, given the nature of the mass grave and DC-Cam's methods, which are more likely to produce an under-count of bodies rather than an over-estimate." Demographer Patrick Heuveline estimated that between 1.17 million and 3.42 million Cambodians died unnatural deaths between 1970 and 1979, with between 150,000 and 300,000 of those deaths occurring during the civil war. Heuveline's central estimate is 2.52 million excess deaths, of which 1.4 million were the direct result of violence. Despite being based on a house-to-house survey of Cambodians, the estimate of 3.3 million deaths promulgated by the Khmer Rouge's successor regime, the People's Republic of Kampuchea (PRK), is generally considered to be an exaggeration; among other methodological errors, the PRK authorities added the estimated number of victims found in the partially exhumed mass graves to the raw survey results, meaning that some victims would have been double-counted.
An estimated 300,000 Cambodians starved to death between 1979 and 1980, largely as a result of the after-effects of Khmer Rouge policies.

Vietnamese occupation and the PRK (1979–93)

On 10 January 1979, after the Vietnamese army and the KUFNS (Kampuchean United Front for National Salvation) had invaded Cambodia and overthrown the Khmer Rouge, the new People's Republic of Kampuchea (PRK) was established with Heng Samrin as head of state. Pol Pot's Khmer Rouge forces retreated rapidly to the jungles near the Thai border, and the Khmer Rouge and the PRK began a costly struggle that played into the hands of the larger powers: China, the United States and the Soviet Union. The Khmer People's Revolutionary Party's rule gave rise to a guerrilla movement of three major resistance groups – FUNCINPEC (Front Uni National pour un Cambodge Indépendant, Neutre, Pacifique, et Coopératif), the KPNLF (Khmer People's National Liberation Front) and the PDK (Party of Democratic Kampuchea, the Khmer Rouge under the nominal presidency of Khieu Samphan). "All held dissenting perceptions concerning the purposes and modalities of Cambodia's future". Civil war displaced 600,000 Cambodians, who fled to refugee camps along the border with Thailand, and tens of thousands of people were murdered throughout the country. Peace efforts began in Paris in 1989 under the State of Cambodia, culminating two years later, in October 1991, in a comprehensive peace settlement. The United Nations was given a mandate to enforce a ceasefire and to deal with refugees and disarmament, exercised through the United Nations Transitional Authority in Cambodia (UNTAC).

Modern Cambodia (1993–present)

On 23 October 1991, the Paris Conference reconvened to sign a comprehensive settlement giving the UN full authority to supervise a ceasefire, repatriate the displaced Khmer along the border with Thailand, disarm and demobilise the factional armies, and prepare the country for free and fair elections. Prince Sihanouk, President of the Supreme National Council of Cambodia (SNC), and other members of the SNC returned to Phnom Penh in November 1991 to begin the resettlement process. The UN Advance Mission for Cambodia (UNAMIC) was deployed at the same time to maintain liaison among the factions and to begin demining operations to expedite the repatriation of approximately 370,000 Cambodians from Thailand.

The UN Transitional Authority in Cambodia (UNTAC) became operational on 15 March 1992 under Yasushi Akashi, the Special Representative of the UN Secretary-General, and began implementation of the UN settlement plan. UNTAC grew into a 22,000-strong civilian and military peacekeeping force tasked with ensuring the conduct of free and fair elections for a constituent assembly. Over 4 million Cambodians (about 90% of eligible voters) participated in the May 1993 elections. Pre-election violence and intimidation were widespread, caused by SOC (State of Cambodia – made up largely of former PDK cadre) security forces, mostly against the FUNCINPEC and BLDP parties, according to UNTAC. The Khmer Rouge, or Party of Democratic Kampuchea (PDK), whose forces were never actually disarmed or demobilised, blocked local access to polling places. The royalist FUNCINPEC Party of Prince Ranariddh (son of Norodom Sihanouk) was the top vote recipient with 45.5% of the vote, followed by Hun Sen's Cambodian People's Party and the Buddhist Liberal Democratic Party.
FUNCINPEC then entered into a coalition with the other parties that had participated in the election. A coalition government was formed between the Cambodian People's Party and FUNCINPEC, with two co-prime ministers – Hun Sen, since 1985 the prime minister in the Communist government, and Norodom Ranariddh. The parties represented in the 120-member assembly proceeded to draft and approve a new constitution, which was promulgated on 24 September 1993. It established a multiparty liberal democracy in the framework of a constitutional monarchy, with the former Prince Sihanouk elevated to king. Prince Ranariddh and Hun Sen became First and Second Prime Ministers, respectively, in the Royal Cambodian Government (RGC). The constitution provides for a wide range of internationally recognised human rights.

Hun Sen and his government have attracted much controversy. Hun Sen is a former Khmer Rouge commander who was originally installed by the Vietnamese and who, after the Vietnamese left the country, has maintained his strongman position through violence and oppression when deemed necessary. In 1997, fearing the growing power of his co-prime minister, Prince Norodom Ranariddh, Hun Sen launched a coup, using the army to purge Ranariddh and his supporters. Ranariddh was ousted and fled to Paris, while other opponents of Hun Sen were arrested, tortured, and in some cases summarily executed.

On 4 October 2004, the Cambodian National Assembly ratified an agreement with the United Nations on the establishment of a tribunal to try senior leaders responsible for the atrocities committed by the Khmer Rouge. International donor countries pledged US$43 million towards the three-year tribunal budget, while Cambodia contributed US$13.3 million. The tribunal has sentenced several senior Khmer Rouge leaders since 2008.

Cambodia is still infested with countless land mines, indiscriminately planted by all warring parties during the decades of war and upheaval.

The Cambodia National Rescue Party was dissolved ahead of the 2018 Cambodian general election, and the ruling Cambodian People's Party also enacted tighter curbs on mass media. The CPP won every seat in the National Assembly without a major opposition, effectively solidifying de facto one-party rule in the country. Cambodia's longtime prime minister Hun Sen, one of the world's longest-serving leaders, has a very firm grip on power and has been accused of cracking down on opponents and critics. His Cambodian People's Party (CPP) has been in power since 1979. In December 2021, Hun Sen announced his support for his son Hun Manet to succeed him after the next election, which was expected to take place in 2023. In the July 2023 election, the ruling CPP easily won a landslide victory in a flawed contest held after the disqualification of Cambodia's most important opposition party, the Candlelight Party. On 22 August 2023, Hun Manet was sworn in as the new Cambodian prime minister.

Further reading

Chanda, Nayan. "China and Cambodia: In the mirror of history." Asia Pacific Review 9.2 (2002): 1–11.
Chandler, David. A History of Cambodia (4th ed. 2009).
Corfield, Justin. The History of Cambodia (ABC-CLIO, 2009).
Herz, Martin F. Short History of Cambodia (1958).
Slocomb, Margaret. An Economic History of Cambodia in the Twentieth Century (National University of Singapore Press, 2010).
Strangio, Sebastian. Cambodia: From Pol Pot to Hun Sen and Beyond (2020).
External links

Records of the United Nations Advance Mission in Cambodia (UNAMIC) (1991–1992) at the United Nations Archives
Constitution of Cambodia
State Department Background Note: Cambodia
Summary of the UNTAC mission
History of the Cambodian Civil War from the Dean Peter Krogh Foreign Affairs Digital Archives
Cambodia under Sihanouk, 1954–70
Selective Mortality During the Khmer Rouge Period in Cambodia
Crossroads in Cambodia: The United Nations' responsibility to withdraw involvement from the establishment of a Cambodian tribunal to prosecute the Khmer Rouge
BBC article
David Chandler, A History of Cambodia, 4th Edition (Westview Press, 2009)